An unconventional idea in the 1950s, artificial intelligence is not the future; it is here now, and it has become an integral part of our daily lives. It does simple tasks like auto-completing our YouTube searches, manages our schedules through Google Assistant, Amazon Alexa, or Siri, and, through machine learning, controls robots, self-driving cars, and much more. As AI becomes omnipresent, it will influence our lives in more ways than we realize. But is AI the one-and-all solution for our lives in the future? Is it the perfect creation of mankind?
Should artificial intelligence be controlled? On one side, we see AIs tuned and tweaked to near-perfect efficiency, only to fail when applied to real-life problems. On the other, we see AI doing wonders where no human or human-made machine can. AI is created to let a machine mimic human information processing, so to understand the basics of an AI's processing system, it is important to analyze how an intelligent processing system, like our brain, handles information.
The three most important aspects of any intelligent system are reception, interpretation, and learning. Reception is the process by which receptors such as the eyes, ears, and skin take in information from the environment and send it, in an interpretable format, to the processing system; in our case, the brain receives electrochemical signals. For examples of receptors in AI devices, consider the microphones of Amazon's Echo for Alexa or an iPhone running Siri.
After the reception phase comes interpretation. The processing agent (i.e., the brain) sorts the incoming data, matches it against the memory bank, and identifies the information. After identification, the interpreter presents the information to the user according to the current state of the system. This is why you will most likely look for a sign that says 'Restroom' when you need to relieve yourself.
AI systems usually handle the interpretation process by digesting large amounts of information with intricate and sophisticated machine learning algorithms, such as neural networks, game-playing algorithms, and so on. Using these methods, an AI system can identify objects or perform a task exceptionally well, which enables remarkable feats such as driving a car.
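The reception-to-interpretation pipeline described above can be sketched as a toy classifier: the "memory bank" is a set of labeled examples, and interpretation matches a new observation against its nearest stored example. This is a minimal illustration under invented data, not how any production AI system works:

```python
import math

# Toy "memory bank": stored feature vectors with labels (invented data).
memory_bank = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

def interpret(observation):
    """Match the observation against the memory bank (1-nearest neighbor)."""
    _, label = min(
        memory_bank,
        key=lambda entry: math.dist(entry[0], observation),
    )
    return label

print(interpret((0.85, 0.15)))  # nearest stored examples are labeled "cat"
```

"Learning", in this toy picture, is simply appending new labeled pairs to `memory_bank` — which foreshadows the problem discussed next: the system can only ever know what its stored data contains.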
This brings us to the third aspect: learning. Early in its development cycle, any intelligent system has only a small database from which to interpret and analyze. To expand this limited library, intelligent systems constantly add new data to their memory banks.
This is where the main problem lies in expanding the library of an AI system. Humans can absorb, process, and record information whenever and from wherever they want, but an artificial intelligence system can only process and optimize within the parameters it is given. It simply cannot experience things the way humans do unless it is taught how. To create AI systems that can truly become parallel to humans, we need to build them so they can process not only what an individual wants but also why they want it.
Up until now, scientists have not been able to make the learning process for AI systems fully reliable, which means several flaws remain in machine learning. Among the key flaws in AI systems are implicit bias, incomplete datasets, and human expectations. Understanding these flaws gives better insight into why they occur and how to circumvent them.
AI bias, or implicit bias, describes the systematic and repeatable mistakes in a computer system that produce unjustifiable results: qualities with a negative connotation, such as racist output, extremist political views, or outcomes that are otherwise biased. Although the name suggests artificial intelligence is to blame, the problem is really about people. Human bias has been well researched in psychology: it deals with implicit associations that reflect bias we hold without consciously knowing it, and with how those associations can affect an event's outcomes. Society has started to grasp just how much these human biases can slither their way into AI systems, so being able to identify these threats preemptively and seeking to minimize them is of utmost priority.
What is the root cause of bias in AI systems, and how can it be prevented?
In numerous cases, bias infiltrates algorithms even when sensitive variables such as gender, ethnicity, or sexual identity are excluded. AI systems learn to make decisions from training data, which may contain skewed human decisions or reflect social prejudices.
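To illustrate how bias can survive the exclusion of a sensitive variable, the toy data below (entirely invented) has already had the sensitive column dropped, yet a correlated proxy feature — here a fictional zip code — still ends up driving the decision, because the historical labels it memorizes were themselves skewed:

```python
from collections import Counter

# Invented toy dataset: each row is (zip_code, past_decision).
# The sensitive attribute was removed, but in this fabricated history
# zip code 1 happened to co-occur with the disadvantaged group.
history = [
    (1, "deny"), (1, "deny"), (1, "deny"), (1, "approve"),
    (2, "approve"), (2, "approve"), (2, "approve"), (2, "deny"),
]

def majority_rule(history):
    """'Train' by memorizing the majority past decision per zip code."""
    by_zip = {}
    for zip_code, label in history:
        by_zip.setdefault(zip_code, Counter())[label] += 1
    return {z: counts.most_common(1)[0][0] for z, counts in by_zip.items()}

model = majority_rule(history)
print(model)  # the proxy alone now decides: {1: 'deny', 2: 'approve'}
```

Nothing in the code mentions the sensitive attribute, yet the learned rule reproduces the skew in the historical decisions — which is exactly how excluded variables leak back in through proxies.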
In 2016, Microsoft released an AI-based conversational chatbot on Twitter that was supposed to interact with people through tweets and direct messages. Within a few hours of its release, it began replying with highly offensive and racist messages. The chatbot was trained on anonymized public data and had a built-in learning feature, which enabled a coordinated group of people to introduce racist bias into the system.
Some users fed the bot misogynistic, racist, and anti-Semitic language. The incident opened a broad audience's eyes to how easily unfair algorithmic bias can be introduced into artificial intelligence systems. Class imbalance is another leading issue, and facial recognition systems in particular are under scrutiny for it. A dataset called "Labeled Faces in the Wild," considered the benchmark for testing facial recognition software, contained data that was roughly 70% male and 80% white. Whether software benchmarked this way is good enough to use on lower-quality pictures "in the wild" remains highly debatable.
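Class imbalance of this kind can hide poor performance behind a high headline accuracy. In the invented example below, a degenerate model that always predicts the majority class of a skewed dataset still scores 80% accuracy while getting every minority-class example wrong:

```python
from collections import Counter

# Invented, skewed dataset: 80% of the labels belong to one class.
labels = ["majority"] * 80 + ["minority"] * 20

# A degenerate "model" that ignores its input entirely and always
# predicts whichever class is most common in the training labels.
majority_class = Counter(labels).most_common(1)[0][0]
predictions = [majority_class for _ in labels]

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
minority_recall = sum(
    p == y for p, y in zip(predictions, labels) if y == "minority"
) / labels.count("minority")

print(f"accuracy={accuracy:.0%}, minority recall={minority_recall:.0%}")
# accuracy=80%, minority recall=0%
```

A benchmark dominated by one demographic rewards exactly this failure mode: the aggregate score looks fine while the under-represented group is systematically misclassified.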
On June 30, 2020, the Association for Computing Machinery (ACM) in New York City called for the termination of private and government use of facial recognition technologies due to "clear bias based on ethnic, racial, gender and other human characteristics," bias that caused "profound injury to livelihoods and fundamental rights of individuals in specific demographic groups." Given the pervasive nature of AI, it is crucial to address algorithmic bias in order to make these systems fairer and more inclusive.
Data is the primary source of knowledge for artificial intelligence; if the data fed to an AI is fragmented or flawed, the AI won't work well. For example, consider COVID-19. Johns Hopkins, The COVID Tracking Project, the U.S. Centers for Disease Control and Prevention (CDC), and the World Health Organization all report different case counts. With such variation among the sources of information, it is hard for an AI to derive significant patterns from the data, let alone locate hidden insights. Moreover, no filter strains out inadequate or wrong data.
Setting up that filter is itself a challenge. Artificial intelligence systems receive information from many sources, some of which may be irrelevant, meaningless, or outright disruptive. Consider when IBM had Watson read the Urban Dictionary: afterward, Watson could not tell when to use normal language and when to use slang and curse words. The issue got so bad that IBM had to wipe the Urban Dictionary from Watson's memory banks. An AI trained on unnecessary data like this can end up wasting time or, much worse, identifying false patterns.
Another flaw artificial intelligence has to face is human expectations. We make mistakes ourselves, yet we sometimes forget that machines are man-made and come with flaws too. Healthcare experts estimate that one out of five patients is misdiagnosed, while AI-assisted diagnosis may have an error rate closer to one in a hundred thousand. Even so, most people still prefer to see only the human doctor. Although AI's misdiagnosis rate is far lower than that of human doctors, people expect AI to be without any flaws, and even worse, they expect the human AI trainers to be perfect too.
On March 23, 2016, Microsoft launched Tay ("thinking about you"), a Twitter bot. This artificial intelligence was engineered to match the language and interaction style of a 19-year-old girl.
In a grand social experiment, Tay was released to the world. Sixteen hours and roughly 100,000 tweets later, Microsoft had to perform an emergency shutdown: the bot had turned sexist and racist and was promoting Nazism, after some individuals deliberately taught it hate speech and inflammatory language to corrupt its learning system.
Microsoft had not implemented any parameters in Tay concerning inappropriate behavior, so the bot had no basis (or reason) to know that inappropriate behavior and malicious intent might exist. The grand social experiment failed and, regrettably, stands as a milestone lesson about human society and the limitations of artificial intelligence.
Implicit bias, poor data, and people's expectations are key flaws that keep AI systems from ever being perfect. AI is not the one solution to every problem that many people would like it to be. Still, AI systems can do amazing things for humans that we have never seen before, such as restoring mobility to a lost limb or creating uniquely efficient solutions to real-world problems.
Until we can perfect the creation process for AI systems, we have to embrace them as they are, remembering that artificial intelligence is perfectly imperfect, just like its creators: us.
In 1936, Alan Turing, a 23-year-old mathematician from Maida Vale, decided to sit down with pencil and paper. Using just the image of a linear tape divided evenly into squares, a list of symbols, and a few basic rules, he sketched the step-by-step process by which a human being can carry out any calculation, from the simplest arithmetic operation to the most complex nonlinear differential equation. Turing's remarkable invention came to be called the Turing machine, and it solved the age-old mathematical question of what an effective calculation is. Not only did Turing show us what it means to compute a number by showing how we humans do it; he also created the idea behind the basic concept of artificial intelligence.
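The tape, symbols, and rules described above can be made concrete with a tiny simulator. The sketch below is a simplified modern illustration, not Turing's original formulation: it runs a two-rule machine that walks right along a tape of 1s, flipping each to 0 until it reaches a blank cell:

```python
def run_turing_machine(tape, rules, state="start", pos=0):
    """Simulate a one-tape Turing machine.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is +1 (right), -1 (left), or 0. Halts on state "halt".
    """
    cells = dict(enumerate(tape))  # sparse tape; missing cells are blank "_"
    while state != "halt":
        symbol = cells.get(pos, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += move
    return "".join(cells[i] for i in sorted(cells))

# Two rules: flip each 1 to 0 while moving right; halt at the first blank.
rules = {
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine("111", rules))  # prints "000_"
```

The point of the exercise is Turing's: any effective calculation, however complex, can in principle be broken down into a finite table of such read-write-move rules.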
Turing's astonishing innovation paved the way for modern computing and opened the door to the endless possibilities that lie in the world of AI. A few decades later, it is difficult to imagine that what started as a thought experiment in a small room at King's College has turned into one of the great human-defining forces. For better or worse, artificial intelligence cannot be classified as a mere general-purpose technology. It is something more essential: a technology that can bring either the manifold advancement of human well-being or the emergence of significant risks that threaten society's future. It is up to humankind to choose which direction AI will take, and up to us to ensure that the development and deployment of artificial intelligence systems are morally and socially justified and responsible.