Recently, major players in the technology industry have been talking about why the potential applications of artificial intelligence could be something we should be worried about. Their argument comes from two different places. On the one hand, they see AI as one of the most fundamentally transformative technologies in the history of mankind; on the other hand, they see that transformative power as something to be wary of. If AI is transformative, then it has the power to transform things for the worse as well as for the better.
However, fear of the unknown has accompanied every new technology, from the wheel to the internet. So, is AI something we should be intimidated by? The fears of AI seem to stem from a few common causes: general anxiety about machine intelligence, the fear of mass unemployment, concerns about superintelligence, worry about the power of AI falling into the wrong hands, and the general caution that greets any new technology.
General Anxiety About AI
One of the most widespread fears of AI is simply general anxiety about it and what it is potentially capable of. A recurring theme in movies and science fiction is AI systems that go rogue - think HAL from 2001: A Space Odyssey or the Terminator movie series. People don't like machines that get too smart, because we fear we can't control them. This popular representation of AI gone bad is creating a general wariness in the public about the development of intelligent systems. The fear is largely of the unknown: AI systems are becoming more intelligent, and human expertise around these systems is growing as well, yet neither trajectory gives us a clear sense of where things could go.
However, just as we have examples like HAL and the Terminator, we also have examples such as C3PO and the computers from Star Trek: highly intelligent systems that remain well within human control. The future could be as great and benign as Star Trek if we approached the possibilities of intelligent machines with that perspective. A good antidote to general anxiety is the realization that whenever human society has faced a major shift driven by technological advances, humans have developed and adapted right along with it.
Fear of AI: AI is a Job Killer
Another major fear of AI is rooted in the idea of mass unemployment as human workers are replaced by AI. A big concern is that the previous wave of automation mostly affected blue collar, manufacturing-oriented jobs, while this new wave will hit white collar, service-oriented jobs built around knowledge workers, who will bear the brunt of intelligent forms of automation. The need for trained human workers in many areas of the economy will shrink as the use of AI grows and increasingly permeates the business world. AI also affects blue collar workers such as delivery drivers, cab drivers, and many other roles across supply chain, logistics, and manufacturing. The argument is that the technology is already in place for sufficiently smart machines to do 80% of these jobs, so the threat is not hypothetical.
The counterargument is that many of these systems aren't yet at a point where they can reliably replace human jobs. While AI systems provide a lot of capabilities, they simply can't operate in a fully autonomous mode. In fact, most successful AI implementations put the AI in an augmented intelligence role, supporting humans at what they do best rather than fully replacing them. In general, as technology waves disrupt industries and workers, they replace job categories, not overall jobs. In fact, overall employment continues to grow and find new niches while machines simply replace the old ways of doing things. Companies aren't throwing out everything that has been working for them; they are making a gradual transition into the world of new technologies such as AI. As is often said, AI isn't a job killer, it's a job category killer.
The fact is that many industries are already being disrupted by the advancement of technology, and much of that disruption has nothing to do with AI. Rather, it comes from automation and streamlined processes that make it easier and quicker to handle work ourselves instead of relying on businesses and other organizations to act as middlemen.
Fear of AI: Bad People Doing Bad Things
Another common fear is that bad actors will put AI to malicious use. Leaders in Russia have pronounced that whoever leads the advancement of AI is going to be one of the top rulers of the world. It is no surprise that countries are pouring significant investment and research into developing AI systems for everything from military applications to intelligence systems that can influence the news. We can expect governments to continue to use AI in ways that make us increasingly uncomfortable as it is applied to warfare, surveillance, law enforcement, and other purposes.
Yet, while we can expect governments and countries to compete with each other for AI dominance, it's not the governments we have the most to fear. After all, laws and governance exist to keep an eye on government behavior. We have more to fear from bad actors, criminals, and mischief makers bending AI technologies to their own ill-intentioned purposes. Because AI systems tend to learn from their creators, the intentions of those who build and teach these systems, and what they hope to accomplish, come into question. The fear stems from the unknown. In addition, there hasn't really been a strong counterargument about the best way to approach this scenario and what it means for our future.
Fear of AI: The Superintelligence
Probably the biggest fear of AI making media waves is that of superintelligence: the worry that AI will reach a point where it no longer cares about the existence of humanity, as Skynet does in the Terminator series of movies. The technology will get to a point where it can teach itself, improve, and invent on its own, and instead of becoming a force for the betterment of humanity, humanity becomes a servant of technology. The fear is that our brains simply won't be able to keep up with advancement, development, and invention after a certain point, because things will be moving far too fast.
Machines could very well reach a point where they outstrip their human creators, and what will that mean for humanity when we get there? It forces us to question what intelligence actually is, how we measure and define it as a concept for both humans and machines, and how that new definition will fit into the world now and going forward. But all of this assumes that systems can and will achieve AGI (Artificial General Intelligence) and that we as a species or a society will not be able to put safeguards in place to keep computers from reaching that point.
The big counterargument to all of this is that we are still much farther away from achieving AGI than many people think. While the technology is moving quickly toward realizing the goals of narrow AI, there are parts that aren't working particularly well. Data is still the cornerstone of AI, and a lot of it is still messy and dirty - the Achilles' heel of AI.
All of these fears boil down to the fact that we just don't know where AI is going or how long it will take to get there. Technology makes surprising leaps in ways we never expect: some things we assume will take a while arrive quickly, while other advances we expected sooner still aren't here. We simply have to wait and see what comes.