One of the biggest challenges with artificial intelligence and data science is the integrity of our data. Even if we do everything right in our models and our testing, and even if the data conforms to some technical standard of “cleanliness,” there may still be biases in the data as well as “common sense” challenges. With Big Data, it is difficult to reach a useful granularity of data validity without proper real-world testing. By real-world testing, I mean that when data is being used to make decisions, we, as consumers, testers, programmers, and data scientists, look at groups of scenarios to see whether the decisions made conform to a kind of “common sense” standard. This is when we discover the most important biases in our data. It is also when we discover the real impact of the decisions made by our AI systems.
The Impact Of AI Systems
With the rapid proliferation of AI technology, it is not hard to see where the impact of AI systems may lie. China, for example, is aggressively implementing a Social Credit System based on artificial intelligence. The system’s decisions will drive a citizen’s ability to travel, receive government services, take out loans, and receive an education. They will also drive a corporation’s ability to conduct business, obtain capital, and make a profit. Needless to say, in such a system, the impact is immense. In the United States, AI systems are implemented by corporations to gain cost savings, efficiency, and competitive advantage. These AI systems tend to work alongside humans: portfolio management systems that execute automated trading strategies, AI-assisted surgery, AI-assisted medical diagnosis, and so on. When AI systems start to make independent judgments about a person’s quality of life without checks and balances, as with systems that monitor students’ emotions in classrooms to gauge engagement or systems that decide whether someone should be incarcerated, issues such as bias in data and privacy deserve searching questions from our legal and political systems, our media, and society. The reason for such scrutiny is precisely that these AI systems can easily affect someone’s life and infringe on the liberties defined by our Constitution.
Why Is Bias Important For AI?
When we discuss artificial intelligence and machine learning, we are mostly talking about the widely used Deep Learning algorithms that use Neural Networks to learn from human-generated data, often collected from real life through social media. In the process of collecting such data, many biases can creep in: data-collection bias, cognitive bias, social bias, algorithmic bias, and so on. Algorithms replicate our decision-making process by learning from the data, and this data is not inherently objective. An objective decision is one that is not influenced by personal feelings, perspectives, interests, or biases. Historical data can contain biases, human emotions, interests, and perspectives. Hence, AI’s decisions are subjective.
Types Of Data That Impact AI’s Decision Process
The types of data that an AI system learns from drive the subjectivity of its decisions. For instance, an AI system that learns from MRI images of the human body to look for a specific tumor is likely more objective than an AI system that learns from social media tweets to identify trolls. With “common sense,” we can see that image data captured by an MRI machine is more objective than tweets from people reacting to events. The source of the data that the AI system learns from is what introduces the bias.
One of the biggest biases that subjective information can introduce is the social context of the data. In an AI system used to analyze tweets, each tweet carries not only the author’s opinions but also the context in which it was written. For instance, the tweets the author read before writing the tweet in question can alter its meaning. Another example is dark humor. Dark humor can be perceived as negative commentary on social media; identifying humor is highly subjective and depends on the context of the text. Taken out of context, “dark humor” that uses words with negative connotations can be perceived as harassment.
The Ability To Forget Is Central To Solving AI’s Bias
In the land of biased data, we humans have the unique ability to forget. This ability allows us to forget past events that are anomalies in favor of newly established norms. It allows us to shed our perceived biases when new values are learned and internalized. This ability to forget allows us to become “better” humans. AI is not so lucky: it does not possess the ability to forget. AI systems are created to learn, and they will learn as much as they can. This means that the inherent biases introduced by the data will stay inside the machine. Even though newly acquired information can cause the system to place less importance on that data, it is still, nevertheless, there, and it can still affect the outcomes of decisions made under certain conditions. When decisions place a higher emphasis on biased information without adequate checks and balances, those decisions will not be reliable.
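To make “placing less importance on the data” concrete, here is a minimal sketch of one common approach: keeping every sample but down-weighting older ones during training. It assumes scikit-learn; the half-life and synthetic data are purely illustrative, not taken from any real system. Note that the biased samples are never removed, only de-emphasized, which is exactly the limitation described above.

```python
# A minimal sketch of "forgetting" by down-weighting older samples.
# The decay rate and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: 1,000 samples, each with an "age" in days.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
age_days = rng.integers(0, 365, size=1000)

# Exponential decay: a sample one half-life old counts half as much.
half_life = 90.0
weights = 0.5 ** (age_days / half_life)

# The old (potentially biased) samples remain in the training set;
# they are merely given less influence, never truly forgotten.
model = LogisticRegression()
model.fit(X, y, sample_weight=weights)
```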
The Ability To Make “Fair” Judgments Is Central To Solving AI’s Bias
If an AI system is used to decide on the mental health of an individual without checks and balances, the diagnosis can be made with biases. Mental health diagnoses often require multiple professionals to confirm; they are usually made from data that is not only subjective but can also carry a myriad of social contexts. A mental health diagnosis can affect a human’s employment and quality of life, so it needs to be made with caution. The question becomes: how do we establish fairness in an AI system’s treatment of data? “Fairness” is often a line drawn according to established norms. By questioning those norms, we can question the perceived “fairness” we use to judge the effectiveness of AI systems’ decisions.
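As one concrete example of how such a line can be drawn, here is a minimal sketch of a single, contested fairness definition: demographic parity, which asks whether positive decisions are handed out at similar rates across groups. The predictions and group labels below are hypothetical placeholders; many other fairness definitions exist, and they can conflict with one another.

```python
# A minimal sketch of one "fairness" check: demographic parity,
# i.e., whether the positive-decision rate differs across groups.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Return the largest difference in positive-decision rates between groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical decisions for two groups, A and B.
preds = np.array([1, 0, 1, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_gap(preds, groups))  # 0.5: group A is approved far more often
```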
Successful AI Systems Have One Of Two Notable Features
Due to the biases inherent in today’s AI systems, the systems that are highly effective in the marketplace have one of two notable features:
- The system uses observed data that is inherently highly objective.
- The system’s decisions don’t have a critical impact on humans’ lives.
Companies that build AI on highly subjective data are now trying to establish processes and procedures to check and balance the decisions their AI systems make. Researchers, on the other hand, are trying to develop more sophisticated methods for AI to “unlearn” data, to detect “context,” and to internalize norms. These combined efforts will allow us to understand individual cases of bias tied to how particular AI systems are used.
How Can “Common Sense” Help?
For AI systems that use highly “subjective” data, testing with real-world scenarios means injecting “common sense” into decision making. Humans have the unique ability to process data with both cognitive and emotional mechanisms to gain unparalleled understanding. Through the process of understanding, we discard unwanted information, focus on important information, place information into social context, and inject the needed ethical boundaries, simplifying information into “common sense” decisions. AI attempts to replicate the human decision-making process, but it can replicate only part of it. This is where “common sense” testing of real-world scenarios helps. When groups of possible biases related to outcomes are identified for review based on real-world scenarios, it is much easier to see where AI needs improvement before it can function without human checks and balances.
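What might such a test look like in practice? Here is a minimal sketch of one scenario-style check. The `risk_score` function is a toy stand-in for a hypothetical trained model (in practice you would call the real system), and the feature names, weights, and scenario are all invented for illustration.

```python
# A minimal sketch of scenario-based "common sense" testing.
# `risk_score` is a hypothetical placeholder, not a real risk model.

def risk_score(features: dict) -> float:
    """Placeholder model: a toy weighted sum standing in for a trained system."""
    weights = {"neighborhood_crime_rate": 0.4, "family_record": 0.3,
               "school_grades": -0.5, "mentor_in_life": -0.3}
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

def test_positive_factors_lower_risk():
    """Common-sense check: adding strong positive factors should lower the score."""
    base = {"neighborhood_crime_rate": 0.9, "family_record": 0.8,
            "school_grades": 0.0, "mentor_in_life": 0.0}
    enriched = dict(base, school_grades=1.0, mentor_in_life=1.0)
    assert risk_score(enriched) < risk_score(base)

test_positive_factors_lower_risk()
```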
For instance, consider an AI system used to flag possible criminals for scrutiny; “common sense” testing here might involve “observed” data from daily life. Suppose a minority teenager who just turned 18 is flagged by the system as a possible criminal. The system looked at the facts that the teenager lives in a housing project rampant with crime, that his mother has substance abuse issues, that he is of African American descent from a low-income family, that he attends a school with a high crime rate and a low graduation rate, and that his brother has a long criminal record. The only positive factors the system saw are that the teenager maintains all A’s at school and has spent most of his life at his grandmother’s house, where his grandmother taught him to play the violin and enriches his life beyond his current circumstances. Because the negative factors outweigh the positive ones, the teenager was flagged as a possible criminal.

“Common sense” might suggest an additional evaluation of this teenager’s life by an objective bystander. If a bystander simply spent an afternoon talking to the teenager, the bystander would see that he is well-mannered, has aspirations, is focused on his studies, and is actively working toward a better life. The “common sense” judgment can add these positive factors to the teenager’s case, helping the system make a more informed decision. From this testing, we can see that the AI system lacks other critical information that should be considered, such as behavior inside and outside of school, actual living arrangements, and social circles. Even though the “common sense” testing added another layer of complexity to the data, it also allowed a better decision to be made. While it may not be trivial for an AI system to obtain this information, it may be trivial for a human. If the person gathering the information is “objective enough,” then the additional layer of “common sense” checks makes the AI’s ultimate decision far more well-rounded and less biased. In a system assisted by AI, not every case and every criterion should be evaluated by the AI. Certain criteria evaluated by humans do not taint the AI’s decisions; rather, they give the AI a more well-rounded picture.
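One way such a human check could be wired into a pipeline, sketched under heavy assumptions: the model’s flag never triggers action by itself, and a human reviewer’s findings are folded in before anything happens. `Review` and `final_decision` are hypothetical names, and the decision rule is deliberately simplistic.

```python
# A minimal sketch of a human-in-the-loop check and balance.
# `Review` and `final_decision` are hypothetical: the point is that the
# model's flag alone never triggers action without a human second look.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    positive_factors: int  # factors an objective reviewer observed (aspirations, mentors, ...)
    negative_factors: int  # factors the reviewer confirmed as genuine concerns

def final_decision(model_flagged: bool, review: Optional[Review]) -> str:
    if not model_flagged:
        return "no action"
    if review is None:
        return "route to human review"  # never act on the model's flag alone
    if review.positive_factors >= review.negative_factors:
        return "no action"              # human-observed context overrides the flag
    return "escalate with full context"

print(final_decision(True, None))                                             # route to human review
print(final_decision(True, Review(positive_factors=3, negative_factors=1)))   # no action
```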
Responsibility Of Lawmakers
Because AI system implementations can contain many biases and injecting “common sense” testing is not trivial, the resources needed for AI projects can quickly multiply. The lawmaker’s job is not to put limits on the proliferation of AI systems; lawmakers become directors. They can direct the trend of AI proliferation to safeguard an individual’s liberties as defined by our Constitution. By placing specific “regulations,” rather than outright “bans,” that delay certain aspects of AI proliferation in industries that use highly “subjective” data to make high-impact decisions about people’s lives, lawmakers give researchers more time to advance AI technology, and they give corporations time to put real-life scenarios in place to evaluate AI systems thoroughly with “common sense.” In this light, being responsible and safeguarding our liberties in the age of AI means setting industry standards for testing with real-life scenarios that inject “common sense” into the decision process.
Conclusion
AI bias is difficult to overcome, but overcoming it is a joint effort of corporations, researchers, lawmakers, media, and society. When many eyes are on the issues, we may have far more data, opinions about the data, and judgments passed, but with “common sense,” we come closer to equality, to human kindness, and to the protection of our constitutional liberties. That, to me, is an opportunity to exercise our democratic process.