What started as an unconventional idea in the 1950s is no longer a concept of the future: AI is here and now, and it has become an integral part of our daily lives. It handles simple tasks like auto-completing our YouTube searches, manages our daily schedules through assistants like Google Assistant, Amazon Alexa, and Siri, controls robots, drives cars, and much more. But the possibilities for AI go far beyond these, as it becomes omnipresent and influences our lives in more ways than we realize. But is AI the be-all and end-all solution for our future? Is it the perfect creation of mankind?
These questions can’t be answered with a simple yes or no; rather, they give birth to a long series of discussions and debates on whether AI should be set free to grow or whether leashes and failsafes should be placed to control it. On one side, we see AIs tuned and tweaked to near-perfect efficiency, yet we also see these near-perfect AIs fail when applied to real-life situations. On the other side, we see AI doing wonders where no human or man-made machine can. AIs are created to allow a machine to mimic human information processing. To understand the basics of an AI’s processing system, it is important to analyze how an intelligent processing system like our brain handles information.
The three most important aspects of any intelligent system are reception, interpretation, and learning. As the name implies, reception is the process by which receptors, such as the eyes, ears, and skin in the human body, take in information from the environment and pass it along in interpretable formats to the processing system. In our case, that format is the brain’s electrochemical signals. For examples of receptors in AI devices, we can look at the microphones of the Amazon Echo for Alexa and of the iPhone for Siri.
After the reception phase comes interpretation. This is when the processing agent (i.e., the brain) sorts the data, matches it against the memory bank, and identifies the information. After identification, the interpreter presents the information to the user according to the current state of the system. This is why, for example, you would most likely look for a sign that says ‘Restroom’ when you need to relieve yourself.
AI systems usually handle the interpretation process by processing large amounts of information with intricate and sophisticated machine learning algorithms such as neural networks, game-playing algorithms, and so on. Using these methods, AI systems can identify objects or perform a task exceptionally well. This enables them to do remarkable things such as driving a car.
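The “match incoming data against a memory bank” idea can be sketched with a toy nearest-neighbor lookup. Everything here is hypothetical for illustration: the feature vectors, the labels, and the `memory_bank` itself stand in for what a real trained model would have learned.

```python
import math

# Hypothetical "memory bank": stored feature vectors with known labels,
# standing in for patterns a trained model has learned to recognize.
memory_bank = [
    ((0.9, 0.1), "stop sign"),
    ((0.1, 0.9), "speed limit sign"),
    ((0.5, 0.5), "pedestrian"),
]

def interpret(signal):
    """Match an incoming signal to the closest stored example (1-nearest-neighbor)."""
    def distance(example):
        return math.dist(signal, example[0])
    return min(memory_bank, key=distance)[1]

print(interpret((0.85, 0.2)))  # closest to the "stop sign" example
```

Real systems replace this lookup with learned models, but the interpretive step is the same in spirit: map a received signal onto the nearest thing the system already knows.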
This brings us to the third aspect: learning. Early in its development cycle, any intelligent system has only a small database from which it can interpret and analyze. To expand this limited amount of data, intelligent systems constantly add new data to their memory banks, growing their library. This is where the main problem lies. Humans can absorb, process, and record information whenever and wherever they want, but an AI system can only process and optimize based on the parameters it is given. It simply cannot experience the world the way humans do unless it is taught how to. This means that to create AI systems that truly parallel humans, we need to build them so that they can process not only what an individual wants but also why they want it.
To date, scientists have yet to make the learning process for AI systems perfect, which means several flaws remain in machine learning. Key flaws in AI systems include implicit bias, incomplete datasets, and unrealistic human expectations. Understanding them gives better insight into why these problems occur and how to circumvent them.
AI bias, or implicit bias, refers to the systematic and repeatable errors in a computer system that produce unfair results, for example outputs that carry negative connotations such as racism or political extremism, or that are otherwise prejudiced. Although the name suggests the AI is to blame, it is really about people. Human bias is an issue that has been well researched in psychology for years. It deals with the implicit associations that reflect biases we are not consciously aware of, and with how those associations can affect an event’s outcomes. In the past few years, society has started to grasp just how much these human biases can slither their way into AI systems. Preemptively identifying these threats and seeking to minimize them is of utmost priority.
When dealing with such biases, the important question to ask is: what is the root cause of bias in AI systems, and how can it be prevented? Even when sensitive variables such as gender, ethnicity, or sexual identity are excluded, AI systems learn to make decisions from training data, which may encode skewed human decisions or reflect social prejudices.
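How bias can sneak in even when the sensitive variable is excluded can be shown with a small synthetic sketch. The hiring scenario, the group/zip-code correlation, and all the numbers below are invented for illustration: the model never sees the sensitive attribute, yet its predictions still differ by group because a correlated proxy feature carries the bias.

```python
import random

random.seed(0)

# Hypothetical synthetic hiring data: the sensitive attribute "group"
# is never given to the model, but "zip_code" correlates with it.
def make_applicant():
    group = random.choice(["A", "B"])
    # Proxy: group A mostly lives in zip 1, group B mostly in zip 2.
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 2
    # Historical (biased) hiring decisions favored group A.
    hired = random.random() < (0.7 if group == "A" else 0.3)
    return group, zip_code, hired

data = [make_applicant() for _ in range(10_000)]

# "Train": estimate the historical hire rate per zip code, as a naive model would.
def hire_rate(rows):
    return sum(h for _, _, h in rows) / len(rows)

model = {z: hire_rate([r for r in data if r[1] == z]) for z in (1, 2)}

# The model never saw "group", yet its predicted hire rates differ by group,
# because zip_code acts as a proxy for the sensitive attribute.
for g in ("A", "B"):
    rows = [r for r in data if r[0] == g]
    predicted = sum(model[z] for _, z, _ in rows) / len(rows)
    print(f"group {g}: predicted hire rate {predicted:.2f}")
```

This is why simply deleting the sensitive column is not enough: the skew in the historical labels survives through whatever features correlate with it.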
For example, in 2016, Microsoft released an AI-based conversational chatbot on Twitter that was supposed to interact with people through tweets and direct messages. Within a few hours of its release, it began replying with highly offensive and racist messages. The chatbot was trained on anonymized public data and had a built-in learning feature, which enabled a coordinated attack by a group of people to introduce racist bias into the system: some users fed the bot misogynistic, racist, and anti-Semitic language. This incident opened a broader audience’s eyes to the potential negative implications of unfair algorithmic bias in AI systems. Class imbalance is a leading issue in facial recognition software, and for that reason facial recognition systems are also under scrutiny. A dataset called “Labeled Faces in the Wild,” considered the benchmark for testing facial recognition software, contained data that was 70% male and 80% white. Whether a dataset that skewed can claim to represent faces “in the wild” is highly debatable.
On June 30, 2020, the Association for Computing Machinery (ACM) in New York City called for the termination of private and government use of facial recognition technologies due to “clear bias based on ethnic, racial, gender and other human characteristics.” The ACM also reported that the bias caused “profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups.” Due to the pervasive nature of AI, it is crucial to address the algorithmic bias issues to make the systems more fair and inclusive.
Data is the primary source of knowledge for artificial intelligence. The machine trains on ground truth, going through a great deal of big data to become acquainted with the examples and connections within the information. If the data fed to the AI is fragmented or flawed, the AI cannot learn well. Consider COVID-19: Johns Hopkins, The COVID Tracking Project, the U.S. Centers for Disease Control and Prevention (CDC), and the World Health Organization all report different case numbers. With such variation among the sources, it is hard for an AI to derive significant patterns from the data, let alone locate hidden insights. Also, no filter strains out inadequate or wrong data, and building such a filter is itself a challenge. AI systems can ingest information from many sources, some of it irrelevant, meaningless, or even disruptive. Consider when IBM had Watson read the Urban Dictionary: afterward, it was unable to recognize when to use normal language and when to use slang and curse words. The issue got so bad that IBM had to delete the Urban Dictionary from Watson’s memory banks. Consequently, AI trainers sometimes focus on unnecessary data that can lead the AI to waste effort or, far worse, identify false patterns.
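A first line of defense against skewed data is simply measuring it before training. Below is a minimal sketch of an imbalance check over a dataset’s demographic labels, using the 70/30 male/female split mentioned above; the label list and the 60% flagging threshold are illustrative choices, not a standard.

```python
from collections import Counter

# Hypothetical demographic labels from a face dataset's metadata:
# a 70% / 30% split, like the one described in the text.
labels = ["male"] * 700 + ["female"] * 300

def imbalance_report(labels, threshold=0.6):
    """Return {class: (share, over_represented)} and flag any class
    whose share of the dataset exceeds `threshold`."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: (n / total, n / total > threshold) for cls, n in counts.items()}

for cls, (share, flagged) in imbalance_report(labels).items():
    note = "over-represented" if flagged else "ok"
    print(f"{cls}: {share:.0%} ({note})")
```

A check like this cannot fix bad data, but it makes the skew visible early, when rebalancing or collecting more data is still cheap.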
The third flaw an AI has to face is human expectations. Even though we make mistakes ourselves, we sometimes forget that machines are man-made and can come with flaws too. In healthcare, experts have estimated that potentially one in five patients is misdiagnosed. Yet even in a scenario where an AI-assisted diagnosis has an error rate of one in one hundred thousand, most people still prefer to see only the human doctor. Although the misdiagnosis rate of such an AI would be far lower than that of human doctors, people expect AI to be without any flaws, and even worse, they expect the humans who train the AI to be perfect too.
On March 23, 2016, Microsoft launched TAY (“thinking about you”), a Twitter bot. The AI had been trained to match the language and interaction style of a 19-year-old American girl. In a grand social experiment, TAY was released to the world. Sixteen hours after launch and around 100,000 tweets later, Microsoft had to perform an emergency shutdown because the bot had turned sexist and racist and promoted Nazism. In a sad turn of events, some individuals had decided to teach TAY hate speech and inflammatory language to corrupt its learning system. Microsoft had not given TAY any parameters about inappropriate behavior, so it had no basis (or reason) to know that inappropriate behavior and malicious intent might exist. The grand social experiment failed and, regrettably, stands as a milestone in what it reveals about human society and the limitations of AI.
Implicit bias, poor data, and people’s expectations are key flaws that keep AI systems from ever being perfect. AI is not the one solution to every problem that many people would like it to be. Still, AI systems can do amazing things for humans that we have never seen before, such as restoring mobility to a lost limb or creating unique, highly efficient solutions to real-world problems. People should not throw away the idea of AI, and the value we can get from it, just because we do not know whether it will ever become perfect. Until we can create a perfect creation process for AI systems, we have to embrace them as they are. We should always remember that AI is perfectly imperfect, just like its creators: us.
Alan Turing was a 23-year-old mathematician from Maida Vale. In 1936, he sat down with pencil and paper and, using just the image of a linear tape divided evenly into squares, a list of symbols, and a few basic rules, sketched the step-by-step process by which a human being can carry out any calculation, from the simplest operation of arithmetic to the most complex nonlinear differential equation. Turing’s remarkable invention is now known as the Turing machine, and it settled the age-old mathematical question of what an effective calculation is. Not only did Turing show us what it means to compute a number by showing how we humans do it; he also created the idea behind the basic concept of artificial intelligence. His astonishing innovation paved the way for modern computing and opened the door to the endless possibilities that lie in the world of AI. Just a few decades later, it is difficult to imagine that what started as a thought experiment in a small room at King’s College has turned into one of the greatest human-defining forces. For better or worse, artificial intelligence cannot be classified as merely a general-purpose technology. It is something more essential, evolving into a technology that can bring either manifold advancement of human wellbeing or the emergence of significant risks that threaten society’s future. It is humankind who must ultimately choose which direction AI will move. It is up to us to make sure that the development and deployment of AI systems are morally and socially justified and responsible.
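Turing’s tape-symbols-and-rules picture is concrete enough to sketch in a few lines of code. Below is a minimal, illustrative Turing machine simulator (the tape encoding, state names, and rule table are our own choices, not Turing’s notation), with a transition table that performs his “simplest operation of arithmetic”: adding one to a binary number.

```python
# A minimal Turing machine: a tape, a head, a state, and a transition
# table mapping (state, symbol) -> (symbol to write, move L/R, next state).
def run(tape, rules, state="start", head=0, blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape, indexed by cell position
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Rules to increment a binary number (head starts on the leftmost bit):
# scan right to the end, then add one, propagating carries leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "add"),
    ("add", "1"): ("0", "L", "add"),   # 1 + 1 = 0, carry left
    ("add", "0"): ("1", "L", "halt"),  # flip the 0 and stop
    ("add", "_"): ("1", "L", "halt"),  # all carries used: new leading 1
}

print(run("1011", rules))  # 1011 + 1 = 1100
```

Nothing in the machine “knows” arithmetic; addition emerges entirely from local symbol rewriting, which is exactly the point Turing was making about effective calculation.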