It is safe to say that humans tend to exaggerate, over-generalize, and let their expectations soar through the roof. The main reason is that people have always longed for a world that is simply "not the same" as the familiar routines of everyday life. That is exactly why the rise of technology has created such a fuss. Decades ago, public attention was fixed on an anticipated revolution, leading people to form exaggerated, unrealistic pictures of what life would be like with modern technology, new inventions, and the greatly anticipated emergence of Artificial Intelligence. This, in turn, invites us to ask how accurate those expectations were and to what degree they have been fulfilled over the years.
In today’s world, there are broadly two categories of individuals with unrealistic expectations of AI:
(1) People with no technical background in AI who are inspired by sci-fi movies and believe in an AI apocalypse.
(2) People with little or beginner-level experience in AI who believe that AI can do all the heavy lifting.
Let’s break down the expectations of these two groups further.
From Homer’s Iliad to the latest Terminator movie, AI has been a recurring theme in literature and film, and these days it is a staple of news reporting. But the way the media presents this technology often differs from reality. These fictions very often end badly: the AI is destroyed, or it drives humans insane, or their own inventions eventually destroy them. No matter how far back you go in history, fast forward to today and you will find the narratives are strikingly similar. Movies such as 2001: A Space Odyssey and The Terminator return again and again to the theme of machines revolting against humans. Another popular AI narrative has machines either conquering humans or offering them their last chance of survival. Finally, fiction rarely portrays the many forms of AI that actually exist, concentrating mainly on the kinds of AI with which humans can converse.
Even though these shows are fiction, they genuinely shape how audiences see AI. Unless you work in the field, or are unusually interested in the subject, you are far more likely to end up watching the sci-fi film with the catchy name or the attractive lead actor than to read about the latest advance in AI research.
The Royal Academy’s report explains that exaggerated expectations and fears about AI, together with an over-emphasis on humanoid representations, can affect public confidence and perceptions. They may contribute to misinformed debate, with potentially significant consequences for AI research, funding, regulation, and reception.
A few years ago, a team of researchers at Facebook published an article on bots they were developing to simulate negotiations. The bots mostly exchanged coherent phrases but occasionally made nonsensical remarks. The researchers soon discovered that the bots were generating phrases outside the boundaries of standard English that had not been programmed into the software, so the bots began interacting with one another in a machine-adapted English. Within the AI research community this was considered an interesting, but by no means revolutionary, finding.
As The Guardian reports, Fast Company picked up the story about a month later under the headline “AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?”. The Sun published an article stating that “experts called the incident exciting but also incredibly scary”. Other pieces, such as “Facebook engineers panic, pull the plug on AI after bots develop their own language” and “Facebook’s Artificial Intelligence robots shut down after they start talking to each other in their own language”, centred on the incorrect claim that the entire research project was shut down because the bots had created a peculiar, inhuman language. In reality, the bots were shut down because the researchers wanted bots that could negotiate with people, and the results were not what they had expected.
The majority of reports on the Facebook experiment framed the story from a fear-inducing angle, echoing fictional narratives about the looming menace of artificially intelligent machines that are out to get us and exterminate all of mankind.
This false narrative of an AI apocalypse redirects public attention to non-existent issues like a robot uprising, diverting it from real ones such as privacy concerns over facial recognition algorithms or the perpetuation of gender bias and discrimination when machines learn from biased data. Beyond that, unfounded fears in society can lead to over-regulation that suffocates innovation in certain sectors and to a lack of funding for work such as research into fairer algorithms.
There is also a popular belief that AI cuts jobs. Automation has already displaced many workers, and economists predict that AI will continue to take jobs in the years to come. What we are not addressing, however, is that for many people AI is not going to remove their work; it will assist them in their efforts, helping with decision-making and moving things forward rather than replacing them. A large number of jobs will remain ones where human judgement is essential, because human intuition cannot be replicated by the algorithms of today’s AI.
A commonly misunderstood aspect of AI projects is the notion that, once an algorithm is deployed, it will keep working at a constant accuracy for a long time. In reality, this is not the case, for several reasons: the properties of the target variable change, and so do the patterns in the input data. This phenomenon is called model degradation, meaning that the model’s accuracy slowly drifts downward over time. As newer data and patterns emerge, AI models require continual monitoring and consistent improvement. Unless these activities are built into the project’s maintenance plan, stakeholders will see the ROI steadily decline to below acceptable levels.
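To make this concrete, here is a minimal Python sketch of what ongoing accuracy monitoring for a deployed classifier might look like. The model object, batch format, and the 0.85 threshold are illustrative assumptions rather than part of any particular toolchain.

```python
# A minimal sketch of drift monitoring, assuming a deployed classifier whose
# recent predictions can be compared against ground-truth labels in batches.
# The threshold and batch source are assumptions for illustration only.
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.85  # assumed minimum acceptable accuracy

def check_for_degradation(model, recent_batches):
    """Score the model on each labelled batch and flag degradation."""
    for features, labels in recent_batches:
        predictions = model.predict(features)
        accuracy = accuracy_score(labels, predictions)
        if accuracy < ACCURACY_THRESHOLD:
            # In practice this would alert the team or trigger retraining.
            print(f"Model degradation detected: accuracy={accuracy:.2f}")
            return True
    return False
```

In a real maintenance plan, a check like this would run on a schedule and feed into retraining or rollback decisions rather than just printing a warning.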
Another misconception is that great ML knowledge alone is sufficient to build great ML solutions. Data science and ML are certainly among the most sought-after skill sets in the market today, and a practitioner who understands the basic pitfalls of an analyst’s life, such as confusing correlation with causation or overlooking bias, and who has sufficient business understanding can bring significantly higher value to business solutions. But a good knowledge of algorithms by itself is not enough. An equally important yet often overlooked factor is relevant data, in both quality and quantity. Google LLC, Facebook Inc., Netflix Inc., and Amazon.com Inc. are powerful not just because of their intelligent algorithms, but because of the data they hold about the people who interact with them.
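As a toy illustration of why data matters, the sketch below trains the same algorithm on a small, noisy synthetic sample and on a larger, cleaner one. The dataset sizes and noise levels are arbitrary assumptions chosen only to make the gap visible, not a benchmark.

```python
# Same algorithm, different data: a toy comparison on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def score(n_samples, label_noise):
    # Generate a synthetic binary classification problem; flip_y controls
    # how many labels are randomly corrupted (a stand-in for data quality).
    X, y = make_classification(n_samples=n_samples, n_features=20,
                               flip_y=label_noise, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

print("small, noisy data:  ", score(n_samples=200, label_noise=0.3))
print("large, cleaner data:", score(n_samples=20000, label_noise=0.05))
```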
Furthermore, another misconception is that ML deployment requires huge computing power and scarce skill sets. The key building blocks of any successful ML deployment are computing power, skills, and data. A few years ago, procuring these building blocks was costly and served as a significant barrier to ML deployment. The situation is markedly different today, with technology vendors offering ready-to-deploy ML solutions and infrastructure in the cloud. Organizations can now access the graphics processing units (GPUs) required for ML workloads through a subscription model. Through a microservices framework, one can simply call pre-trained ML models for image classification, topic detection, or time-series change-point detection over simple Web APIs that are ready for immediate use. Lastly, there are ready-to-use ML business applications that make enterprise customer service, finance, HR, and marketing applications more intelligent.
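As a rough sketch of how such a pre-trained model might be consumed over a Web API, the snippet below posts an image to a hosted classification endpoint. The URL, authentication scheme, and response format are hypothetical placeholders; any real vendor API will differ in its details.

```python
# Calling a hosted, pre-trained image-classification model over a Web API.
# The endpoint, key, and response shape below are hypothetical placeholders.
import requests

API_URL = "https://api.example-ml-vendor.com/v1/image-classification"  # hypothetical
API_KEY = "YOUR_API_KEY"  # supplied by the vendor

def classify_image(image_path):
    """Send an image to the hosted model and return its predicted labels."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
        )
    response.raise_for_status()
    return response.json()  # e.g. [{"label": "cat", "confidence": 0.93}]

print(classify_image("photo.jpg"))
```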
Expectations of what AI can achieve are often too high, and the goals set for AI projects frequently exceed the capabilities of the technology being applied. Companies typically want to achieve multiple objectives with a given technology, which lengthens implementation and deployment timelines. The key to a successful rollout is to set clear objectives and metrics, start small, measure consistently, make adjustments, and then scale the solution over time. Businesses typically succeed by picking one or two use cases to start with and then applying the lessons learned to a larger rollout.
To summarize, ML today is undoubtedly delivering on its hype by creating innovative and valuable solutions for organizations. There are enough success stories out there to justify continued ML efforts. However, failures also abound, and companies can learn from them to improve their traction.
Author: Quamer Nasim
While you are here, check out our 5-week cohort-based Applied AI Essentials course, which teaches state-of-the-art, application-based AI/ML skills while giving you the opportunity to interact, engage, and network with AI researchers and the LiveAI team. Find out more here.