AI: Artificial Intelligence, or Aggregated Ignorance?

A look at rising use of AI and the impact to user experience.
October 17, 2024 by Jonathan Hall, Synephore

As the rise of AI in our daily lives continues, so do both fears and agitation. Whether you're an end-user struggling with the difficulties poorly implemented AI introduces to simple, mundane tasks, or you're a software developer being told by senior leaders that AI should enable you to get your work done twice as fast, you've undoubtedly had concerns about what an AI-driven future might look like.

The technology has advanced significantly in just the past several years. AI has now demonstrated it can hold life-like conversations, create digital content including video and audio - complete with near-indistinguishable manipulation - write code in multiple programming languages, and even generate a professional-looking resume and cover letter for you.

Despite how incredibly capable it has become, many still remain unconvinced of its ability to fully replace human beings. Among those sceptics - aside from myself - is AI itself.

The quotations below come directly from ChatGPT when asked where it sees itself in today's world.

AI acknowledges its own "stupidity."

Artificial Intelligence (AI) has become a buzzword that conjures images of futuristic cities, self-driving cars, and robots that understand human emotions. However, as much as AI is celebrated for its potential, it's equally important to recognize its limitations and, at times, its sheer "stupidity." Despite its impressive capabilities, AI can be profoundly flawed, often making mistakes that seem laughably basic to humans.

ChatGPT was quick to point out, in simpler, paraphrased terms: it will never be sentient and can at times even be considered stupid.

While there's no doubt the technology has at times been indistinguishable from real people - such as the instance where it was used on a live Zoom call to trigger a transfer of $25MM USD (yes, you read that right, $25 million USD) to scammers leveraging it to impersonate a company's CFO - this does not mean it has enough intelligence to automate decisions on its own in an unsupervised capacity. Its limitations are quite real, yet we see companies jumping head-first into the waters without understanding their depths.

Garbage In, Garbage Out.

AI is trained on the data we feed to it, and when it comes to data, there's more than enough [mis]information out there to keep it well-fed. The problem, however, is that the volume of data being fed to it is so large that in many cases nobody can truly censor or monitor what it's ingesting.

AI is heavily dependent on the quality of the data it processes. The "garbage in, garbage out" principle succinctly describes how poor input data leads to poor output results. For instance, an AI trained on biased data can perpetuate and even exacerbate those biases. This issue has been particularly problematic in areas like hiring algorithms, criminal justice, and loan approvals, where biased AI systems have made discriminatory decisions.

As a result, AI will often insist an incorrect answer is, in fact, correct. This causes major problems when its users are either fully convinced of its reliability or simply too lazy to manually vet whatever data it produces.

AI content monitoring...

AI lacks common sense – a type of knowledge that humans acquire through lived experience and cultural learning. This deficiency means that AI can make errors that a human, even a child, would easily avoid. For example, a computer vision system might identify a photo of a banana taped to a wall as a piece of modern art rather than a joke. Such mistakes underline the difference between pattern recognition and true understanding.

I recently tried to upload a video to YouTube of my child swimming in the sea in Hydra, GR. The AI content monitoring repeatedly insisted I had tried to upload explicit material of a minor. For reference, my son has roughly shoulder-length hair, and it appears the content monitoring identified him as a girl; thus the video of him wearing no shirt and a pair of swim trunks was erroneously blocked and removed.

Subsequently, a photograph of his grandmother sitting on that beach in a sun chair was also marked as sexually explicit material by Facebook and removed, resulting in a temporary suspension of my account from making any further posts.

In both cases I submitted a request to have the flagged content manually reviewed. Just the same, in both cases nobody actually reviewed it and the decision was upheld.

In another example, on my own Facebook page, I tried posting a link to an article here on Synephore. That too was flagged by the AI content monitoring - as spamming and promoting my own business - despite it being on my own wall and visible only to friends and acquaintances.

Leveraging AI for content monitoring in a fully automated fashion creates an unnecessary burden in our daily lives and detracts from the user experience. For businesses relying on social media to build their brands, it creates a major risk of having their content removed erroneously, impacting their potential outreach and limiting their ability to survive when the world is so heavily centred around such platforms.

Even AI thinks we're relying on it too much.

Straight from ChatGPT itself: we're simply rushing into things - and doing it rather stupidly.

Despite these limitations, there is a growing trend of overreliance on AI. In sectors ranging from healthcare to finance, there's a temptation to trust AI systems uncritically, assuming they are infallible. This overconfidence can lead to significant issues, as exemplified by cases where AI systems have made critical errors in medical diagnoses or financial forecasting.

Does anyone remember the movie Idiocracy? It sometimes feels like we're rapidly heading in that direction. Several mishaps have occurred because larger companies rushed to leverage the technology, and despite these incidents, we still don't seem to learn from them.

One of the more notable mishaps occurred when McDonald's decided to implement AI for order taking. This seems like a reasonable use: expedite the ordering process and minimize order errors. Except that's not how it worked out. Instead, the AI decided to put bacon on one customer's ice cream, resulting in McDonald's pulling the plug on the project entirely.

Air Canada also felt the burn when its AI assistant gave a customer incorrect information about reduced fares through a bereavement refund claim. The airline does not allow such refund claims - the bereavement discount must be applied before purchase - but a judge ordered it to reimburse the customer because its own AI had given the customer incorrect information.

Even worse, iTutor settled a suit for $365,000 USD, brought by the EEOC for age discrimination in its hiring process. Its AI applicant-screening system decided that any woman over 55 years of age, and any man over 60, was simply too old to bother considering.

Let's also not forget that Zillow's use of AI resulted in an $8BB drop in market cap, a $304MM USD loss, and thousands of job cuts.

These examples barely scratch the surface of the problems we've already had with AI's introduction, yet somehow they aren't being considered by the executives rushing to implement it. The biggest concern I personally have is that those executives are completely oblivious to the associated risks and hold entirely incorrect assumptions about the technology's abilities.

To be clear, I don't hate AI.

I find AI to have some uses, and like anyone else I leverage it from time to time. However, I never take its output verbatim, and I don't use it as a copy-paste utility.

AI needs to be used responsibly, no differently than any other tool in our daily lives. Given the multiple mishaps that have already occurred, there is not only a risk of damaging business reputations but the very real risk of causing actual damage and losses to other people.

I'm a strong advocate that companies who continue down this path should be held completely and financially liable for any issues arising from irresponsible implementations that have any form of direct negative impact on another human being, however small or large.
