Transcending Technopessimism

The following is a condensed version of "Transcending Technopessimism" by Rachel Lomasky, published at Law & Liberty.

Technopessimism is reaching a fever pitch, fueled by headlines like “Meta’s AI internet chatbot starts spewing fake news,” “Self-driving Uber Car Kills Pedestrian in Arizona,” and “Artificial Intelligence Has a Racial and Gender Bias Problem.” Artificial intelligence can be sexist, racist, or just profoundly stupid. The knee-jerk reaction to these sensational headlines is to call for limits and constraints on AI. But we need to pause and realize that to err is both human and AI. Substitute a human for the AI in those headlines, and they become completely mundane. AI misconduct garners great attention because human transgressions are taken for granted, not because the technology is necessarily worse. In many cases, even the most egregious AI errors can be audited and corrected. In extreme cases, AIs can be shut down. Society generally frowns on “shutting down” humans whose behavior is stupid or insulting.

Consider, for example, new NYC legislation that requires AI hiring tools to be audited for bias before they are used to make hiring decisions. Proponents argue AI can be biased against certain classes of applicants. But if those biases exist in the training set, it is because human agents were biased first. When an algorithm is a jerk, we can fix it, e.g., by changing the training data, and we can confirm that it is fixed before deploying it into the wild. It is very difficult to determine whether human biases have been remediated, especially given how deeply rooted they can be.
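To make the auditing point concrete, here is a minimal sketch of one common screen, the “four-fifths rule” for disparate impact. The applicant data and column names are hypothetical, and a real audit under the NYC law would be far more involved.

```python
# A minimal sketch of a disparate-impact audit using the "four-fifths rule."
# All data and column names here are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Fraction of applicants selected, per group."""
    return df.groupby(group_col)[selected_col].mean()

def four_fifths_check(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Pass if every group's selection rate is at least `threshold`
    times the highest group's rate."""
    return (rates / rates.max()).min() >= threshold

applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   1,   0],
})

rates = selection_rates(applicants, "group", "selected")
print(rates)                     # per-group selection rates
print(four_fifths_check(rates))  # False here: group B's rate lags group A's
```

The same check can be rerun after the training data is changed, which is exactly the confirm-before-deploying step described above.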

Similarly, people worry about a lack of transparency behind AI’s recommendations. Indeed, the best-performing algorithms often offer little clarity in their decision-making. But even black-box algorithms are extremely clear compared to the mushy black boxes inside humans. The introspection illusion, a well-documented bias in psychology, explains why humans are so bad at articulating their decision-making logic. AI results, on the other hand, can be explained by a growing suite of tools, even for nontransparent algorithms. Given a resume, the AI response is deterministic. The same is rarely true of a human, who may not even respond consistently over a single day.
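A toy illustration of the determinism claim, using a generic scikit-learn classifier; the made-up features stand in for resume attributes, and everything here is hypothetical:

```python
# Given identical inputs, a trained model returns identical outputs every time.
# Features are hypothetical stand-ins for resume attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # e.g., years of experience, degree, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

resume = X[:1]                            # one fixed "resume"
first  = model.predict_proba(resume)
second = model.predict_proba(resume)
assert np.array_equal(first, second)      # same resume, same answer, every time

# For non-transparent models, post-hoc explainers (e.g., the SHAP library)
# can attribute a score to individual input features.
```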

Manipulation by social media algorithms is another popular source of apprehension. Social media platforms maximize the time users spend on them by serving content that interests them. Judgmental people argue that we should not want that content. However, these platforms are just the latest iteration of advertising manipulating us; humans have manipulated other humans since time out of mind. Social media manipulates, but perhaps less than forces like family, religion, government, and other media. We have mechanisms in place to minimize and mitigate those effects, and we should have similar remediations for AI.
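Mechanically, engagement-driven ranking reduces to something very simple: score each candidate post with some predicted-engagement model and serve the top ones. This is a deliberately stripped-down sketch; the fields and numbers are invented.

```python
# A stripped-down sketch of engagement-driven feed ranking.
# The scoring model and all fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_seconds: float   # output of some engagement model

def rank_feed(candidates: list[Post], k: int = 10) -> list[Post]:
    """Return the top-k posts by predicted engagement."""
    return sorted(candidates, key=lambda p: p.predicted_watch_seconds, reverse=True)[:k]

feed = rank_feed([
    Post("cat_video", 45.0),
    Post("news_item", 12.0),
    Post("friend_update", 30.0),
])
print([p.post_id for p in feed])  # ['cat_video', 'friend_update', 'news_item']
```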

Many people fear AI can’t understand ethics, regardless of whether it actually acts ethically. A favorite meme is the Trolley Problem applied to self-driving cars, which must decide whom to kill in an accident. But humans are most likely not applying utilitarianism, duty-based ethics, or any other deep ethical reasoning when they’re about to crash. On the contrary, they are thinking, “Holy crap, I’m about to hit that thing! Must swerve!” Or, at best, “Seems like fewer people to the right.”

Sometimes the alternative to an AI isn’t a human, but nothing at all. In many cases, AI provides a service that simply couldn’t scale if provided by humans. For example, human translators couldn’t match the reach and speed of Google Translate. Likewise, humans cannot scan every credit card transaction looking for fraud. Even in areas like psychotherapy, where humans are clearly better equipped to provide the service, AI allows it to scale.
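As a rough sketch of what machine-scale fraud screening looks like, an off-the-shelf anomaly detector can flag suspicious transactions out of a volume no human team could review. The features, values, and threshold below are hypothetical.

```python
# A minimal sketch of machine-scale fraud screening with an anomaly detector.
# Features (amount, hour of day, distance from home) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 14, 5], scale=[20, 4, 3], size=(10_000, 3))
odd    = np.array([[5_000, 3, 900]])        # large amount, 3 a.m., far from home
transactions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.001, random_state=0).fit(transactions)
flags = detector.predict(transactions)      # -1 = anomalous, 1 = normal
print(np.where(flags == -1)[0])             # indices flagged for human review
```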

Often AI technologies are used both for evil and for good. Computer vision is used for surveillance, encroaching on privacy. But it is also employed in wildlife conservation, such as monitoring endangered species and preventing poaching. AI algorithms analyze images, acoustic data, and satellite imagery to identify and track animals, assess population dynamics, and detect illegal activities. We should judge the application, not the tool.
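For a sketch of the conservation use case, a pretrained vision model can classify camera-trap images. A generic ImageNet classifier stands in here for a purpose-built species model, and the file path is hypothetical.

```python
# Classify a camera-trap image with a pretrained vision model.
# A generic ImageNet classifier stands in for a species-specific model;
# the image path is hypothetical.
import torch
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("camera_trap/frame_0001.jpg")    # hypothetical path
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
label = weights.meta["categories"][logits.argmax().item()]
print(label)   # e.g., an ImageNet class such as "African elephant"
```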

It can be difficult to put AI transgressions into perspective because most people lack deep technological understanding, and mysterious, complex systems frighten them. The problem is exacerbated by sensationalized and deliberate misinformation from Hollywood and the media. These factors lead many to believe the problem could be even worse than the headlines suggest. Then there is Frédéric Bastiat’s “seen versus unseen” distinction: a self-driving car hit a pole, but how many accidents by distracted humans would AI cars prevent? A racist AI sent the wrong person to jail, but how many errors are made by judges, some of whom have nefarious motives? Without comparing these AI misdeeds with the human alternatives, the default reaction is to hinder AI. In many cases, this is short-sighted and counterproductive. Our concerns with AI should always be viewed through the lens of comparison to human failings.

Rachel Lomasky is Chief Data Scientist at Flux, a company that helps organizations do responsible AI. Throughout her career, she has helped scientists train and productionize their machine learning algorithms.

Authored by Rachel Lomasky via RealClear Wire, September 19, 2023