Artificial Intelligence is All Hype. Yes, and?
Guess what? If we gave it enough time, AI might finally live up to our expectations.
If you are at all attuned to the goings-on of the technological world or are even remotely active on LinkedIn, you might have come across the strawberry phenomenon. At least, that is what I have decided to call it.
For those of you who might not be up to date, it has recently come to light that otherwise adroit large language models such as Claude and ChatGPT fail to correctly count the number of times the letter ‘r’ appears in the word strawberry.
It has confounded the internet, stumping many artificial intelligence enthusiasts. The critics, however, are having a field day with this finding, as it feeds their confirmation bias that AI is not as great as it seems.
And there is truth to that: all of us are hyping AI way too much, focusing on the great feats it is capable of achieving. However, while the entirety of LinkedIn was tearing into AI and condemning it for all that it lacks, I wondered: why are we using AI to spell out strawberry anyway? I doubt AI was created just to spell fruit names.
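For perspective, counting letters is a job for a single line of ordinary code, and the commonly cited explanation for the models’ failure is that they read text as tokens (multi-character chunks) rather than as individual letters. A minimal Python illustration:

```python
# Counting letters is a one-line, deterministic string operation;
# no language model required.
word = "strawberry"
print(word.count("r"))  # prints 3
```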
Of course, it is great to know where any system or innovation can be improved, but did we miss the point of AI’s existence altogether while trying to poke fun at it and console ourselves that AI, after all, is not yet capable of taking over our jobs or planning world domination?
The whole spectacle has been rather underwhelming. I think we can all agree that no recruiter is looking to hire spelling bee champions, so congratulating ourselves on how AI is certainly not a threat to anyone’s career is quite pointless.
This was just a single example, however. There have been many other instances where artificial intelligence has failed to meet expectations, and we have taken out our rulers like strict parents ready to deliver a beating, or in this case, a public mocking.
All through this, I just wanted to ask: How old is ChatGPT? According to Wikipedia (not the most reliable source, I get it, but we might be safe here), its initial release was in November 2022, and my incredible math skills tell me we are a month short of ChatGPT’s second birthday.
Are we really expecting a generative AI model that is not yet two years old to deliver results that exceed expectations in every possible way? I am being completely subjective as I ask this, but are we simply averse to the idea that AI has shortcomings?
Algorithmic Aversion
Let’s see what research suggests. According to a study by researchers at the University of Pennsylvania, published in the Journal of Experimental Psychology, we consistently prefer human judgment over algorithmic judgment, even though previous research has demonstrated that algorithms regularly surpass human performance. Moreover, algorithms are better predictors of success and typically outperform humans on forecasting tasks such as diagnosing illnesses.
To clarify, here “algorithm” refers to any mechanical or statistical process that predicts outcomes based on carefully analyzing the available information or evidence.
Algorithms and people who rely on algorithmic judgment also receive more criticism and have to be infallible for us to trust them. Anything less than (instant) perfection is unacceptable. Academia has termed this attitude algorithmic aversion.
Research as far back as 1979 demonstrated that a desire for flawless predictions is one of the factors that promote algorithmic aversion. Another study revealed that preference for algorithmic forecasting decreased significantly once the algorithm committed an error, even though it remained more accurate than human predictions.
On the other hand, when a human committed an error, the participants still chose to trust them for the next set of predictions, even though the human error resulted in larger losses for them.
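To see why that preference is costly, here is a toy simulation with entirely made-up error magnitudes: a forecaster whose errors are smaller on average remains the better bet over many predictions, even after one visible mistake.

```python
import random

random.seed(0)

def total_loss(error_scale: float, n_forecasts: int = 1_000) -> float:
    """Sum of absolute forecast errors drawn from a normal distribution."""
    return sum(abs(random.gauss(0, error_scale)) for _ in range(n_forecasts))

# Hypothetical numbers: the algorithm's error scale is 5 units, the human's is 8.
print(f"Algorithm's cumulative loss: {total_loss(5.0):,.0f}")
print(f"Human's cumulative loss:     {total_loss(8.0):,.0f}")
```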
So yes, I feel quite confident when I say that even a small mistake on the part of AI is unacceptable to us.
These findings, and the sheer existence of algorithmic aversion, made me wonder why people behave in ways that go against well-established evidence, so I decided to take a deep dive into the topic.
One way to circumvent this problem is to have the algorithm explain the rationale behind its decisions, allowing it to gain its users’ trust. An explanation alone does not suffice, however; users must actually understand the explanation for it to build trust.
Allowing users to change the information they input and see how it alters the algorithm’s response, thereby enabling a trial-and-error process, reinforces initial trust that might otherwise fade. Further, giving people the option to modify the algorithm itself, even in a limited way, helps build trust by satisfying their desire for control.
The language the algorithm uses matters too: personalized conversation and kind words make it more trustworthy. Adding visual illustrations also helps.
Having human and algorithmic decision-makers work together also reduces algorithmic aversion: people believe that algorithms ignore qualitative data and specific contexts, offering general solutions that do not fit their particular situation, and a human partner eases that concern.
These are just a few of the causes of, and remedies for, algorithmic aversion. If you want to read more about it, check out the hyperlinks.
The Innovative Prowess of AI
But for now, let’s talk about a more important topic: what AI can do instead of what it cannot.
While we all know that you can use ChatGPT to write emails and summarize PDFs or Midjourney for image generation (another pretty interesting process, by the way), there are some lesser-known uses of AI that have been nothing less than breakthroughs. The added advantage is that these applications are far more interesting than discussing how AI cannot count the number of ‘r’s in strawberry.
First, let’s talk about how AI completed two decades’ worth of research work in a measly 80 hours.
Yes, you read that right.
If you are an AI enthusiast, you might already know this, but a collaboration between Microsoft and the Pacific Northwest National Laboratory has resulted in the discovery of 18 new candidate materials for batteries. While that might not seem impressive at first glance, it may intrigue you to learn that these 18 were selected after screening 32 million potential inorganic substances.
Eyebrow-raising?
Yes.
This discovery is anticipated to transform the battery industry, which is, at the moment, heavily dependent on lithium, with about 70% of all batteries manufactured being lithium-ion.
Artificial intelligence has also been instrumental in the development of drugs that cannot be created from naturally occurring molecules. In a development with potentially huge implications for the biotechnology space, AI has proven capable of generating drug molecules from scratch after predicting the 3D structures of the target proteins responsible for specific diseases. These new drugs then bind to the target proteins and stop them from causing harm.
A third area I want to discuss in this article is the use of AI for diagnostic purposes. AI-driven algorithms can be considerably more accurate than human experts at diagnosing life-threatening illnesses such as cancer and cardiovascular conditions. For example, AI can already identify breast cancer by studying mammograms, reaching an accuracy of 94%.
While there are many more breakthrough applications of AI, and their number only grows with time, for the final three lesser-known applications I want to focus on the workplace, as that is the purpose of this newsletter.
AI for People at Work
The first workplace-related AI application worth noting is the use of AI-powered wearables for employee safety. According to a recently published Springer book, we now have wearables that use AI to continuously monitor workers’ health and their environment, immediately notifying the wearer of any concerning changes.
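To make the idea concrete, here is a minimal sketch of the alert logic such a wearable might run. The field names and thresholds are hypothetical; a real product would use learned, per-worker baselines rather than fixed cut-offs.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate_bpm: float
    ambient_temp_c: float

# Hypothetical fixed thresholds, standing in for a learned safety model.
HEART_RATE_MAX = 150.0
AMBIENT_TEMP_MAX = 45.0

def check(reading: Reading) -> list[str]:
    """Return an alert message for each measurement outside its safe bound."""
    alerts = []
    if reading.heart_rate_bpm > HEART_RATE_MAX:
        alerts.append(f"Heart rate {reading.heart_rate_bpm} bpm exceeds safe limit")
    if reading.ambient_temp_c > AMBIENT_TEMP_MAX:
        alerts.append(f"Ambient temperature {reading.ambient_temp_c} C exceeds safe limit")
    return alerts

for alert in check(Reading(heart_rate_bpm=162, ambient_temp_c=48.0)):
    print("ALERT:", alert)
```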
I bet you can imagine the literally life-altering impact such technology could potentially have on employees, especially those required to work in hazardous conditions. The information on AI-driven technology and its applications in workplace health and safety is already so vast that it requires one (or many) articles to do it justice. But that is an endeavor for the future.
The second application worth noting (and one that is also close to my heart) is in the domain of people analytics. AI is quite capable of combining learnings from different disciplines because it does not specialize in a single field. Its use creates a Medici effect, where advances in multiple domains can be combined to come up with novel solutions to problems.
If we look at predictive analytics in the people domain, AI can combine data generated within the organization and make sense of it using the latest developments in statistics, data science, and organizational behavior.
Of course, the applications of such predictive power are vast, with executives using it to predict hardware failure, customer satisfaction, and supply chain issues, to name a few. But we are more interested in using AI and its knowledge base to predict the effectiveness of diversity initiatives, turnover intentions, employee engagement rates, and more; the list is too long to cover here.
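As a toy illustration of what such a prediction might look like in code, here is a sketch that fits a logistic regression to a handful of invented employee records. Every column name and number here is made up for the example, and a real people-analytics pipeline would add far more rigor around data quality, fairness, and privacy.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Entirely made-up HR records, for illustration only.
hr = pd.DataFrame({
    "tenure_years":      [0.5, 3.2, 7.1, 1.0, 4.5, 0.8, 6.0, 2.2],
    "engagement_score":  [2.1, 4.0, 4.5, 1.8, 3.9, 2.5, 4.2, 3.0],
    "salary_percentile": [30, 60, 85, 25, 70, 40, 90, 55],
    "left_within_year":  [1, 0, 0, 1, 0, 1, 0, 0],  # 1 = employee left
})

X = hr.drop(columns="left_within_year")
y = hr["left_within_year"]
model = LogisticRegression().fit(X, y)

# Estimated turnover risk for a hypothetical new employee.
new_employee = pd.DataFrame(
    {"tenure_years": [1.5], "engagement_score": [2.0], "salary_percentile": [35]}
)
print(f"Predicted turnover risk: {model.predict_proba(new_employee)[0, 1]:.0%}")
```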
Lastly, let’s talk about using AI as a tool to avoid procrastination.
Confused?
Allow me to explain.
I was conducting a primary research study earlier this year on how designers make use of AI, and I found that AI, especially generative AI, works incredibly well as a brainstorming assistant. Now, you never have to start from scratch on any project you undertake.
Ask your AI assistant how it would begin and work through the project, and question it as much as you like until you have an outline of what needs to be done and feel ready to begin. Of course, its output can never really be the final draft. Why? That is something I’ll discuss in another article.
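If you would rather script this than type into a chat window, here is a minimal sketch using the openai Python package. It assumes an OPENAI_API_KEY environment variable, and the model name and prompt wording are purely illustrative choices.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a brainstorming partner for project planning."},
        {"role": "user", "content": (
            "How would you begin a redesign of our employee onboarding process? "
            "Give me a rough outline to react to, not a finished plan."
        )},
    ],
)
print(response.choices[0].message.content)
```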
The use cases of AI are pretty much unlimited at this point, and they will only increase from here on. If you're interested, you could research and read about them on your own, or you could hold tight for me to deliver the information right to your inbox.
What are your thoughts about the perceptions surrounding AI? Are we obsessing too much over the extremes: "AI will take my job" vs. "AI cannot spell strawberry without erring"? Let me know your thoughts in the comments!
Until next time,
Enjoyed the read? Consider subscribing!