Will AI Kill Us All?
Welcome to week 6 of Behind the Wall, the newsletter where I ramble about random topics and hope I can make you chuckle once or twice. If you learn something, that is just a bonus, but the main goal is to prevent me from going crazy. Get ready, because this week I will be rambling a lot.
I feel like I have been putting off this week’s topic for a while. I find the topic overhyped and the news oversaturated with articles covering it from every angle. I don’t know how much value I can really add by writing a newsletter on it. With that being said, I feel like I need to write about it as it is what everyone in the business world is talking about right now.
Welcome to my obligatory newsletter on artificial intelligence.
I am going to preface this newsletter by pointing out that I am not a programmer, a data scientist, or an expert in anything remotely related to AI. My understanding is surface level, and I am sure that many of you who understand coding and machine learning better than I do will let me know how wrong I am. But I will tell you upfront: this is my newsletter, so while you are here, I am right.
Everyone has an opinion on AI. Some people think that it is the greatest thing since sliced bread and that it is about to have an “iPhone moment” and change the world forever. Others are terrified that it will take all of our jobs and eventually go full Skynet and kill us all. I fall somewhere in the middle because as far as I can tell, all AI has accomplished so far is to generate pictures of people with extra fingers or legs.
I have no doubt that AI will revolutionize certain industries. For example, my dad is very high on the idea of AI being integrated into medicine. He believes that disease detection and diagnosis will become faster and more accurate than any human could manage alone. This is just one example of the amazing things that AI could do to improve society. Some other industries that would clearly benefit from AI integration are marketing, business intelligence, and scheduling. Basically, any industry that relies on large datasets and complicated computations would benefit immensely from AI.
This is good news for my blue-collar friends. AI might help facilitate urban planning or assist in purchasing efficiency, but it will not be laying bricks or raising skyscrapers. The jobs that will be most affected by artificial intelligence are white-collar jobs. We pencil pushers who sit at a computer all day (me) have the most to worry about. Even so, I do not believe AI is coming for our jobs. This is something people throughout history have feared whenever new technology is introduced. Some jobs will undoubtedly become obsolete, but even more will be created by the new technology.
Of course, there are some downsides. One problem with AI is that it is only as reliable as the data it is fed. We touched on this issue in the week 3 and week 5 newsletters on lying with statistics (read those here and here). Almost every dataset is biased, and the same numbers can be used to create opposing arguments. If we train an AI on biased data, the output of the AI will also be biased. A very good example of this was the disastrous rollout of Google’s AI image generator in the past few weeks. The image generator would refuse to create historically accurate images, effectively erasing one particular group of people no matter what prompt it was given, reportedly because of the way its developers tuned and adjusted prompts behind the scenes. Google was forced to pull the feature and apologize.
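For the more technically inclined, here is a tiny, made-up sketch in Python of "biased data in, biased answers out". The groups, labels, and data are all invented purely for illustration; the point is that a model can only reflect what it was shown.

```python
from collections import Counter

# A toy "model" that predicts whatever outcome it saw most often for each
# group in its training data. The data below is invented for illustration:
# group "B" barely appears at all, and only with a negative outcome.
training_data = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False),
]

def predict(group: str) -> bool:
    outcomes = [label for g, label in training_data if g == group]
    if not outcomes:
        return False  # never seen this group, so the model has nothing to go on
    # Return the most common outcome observed for this group.
    return Counter(outcomes).most_common(1)[0][0]

print(predict("A"))  # True  -- plenty of positive examples in the data
print(predict("B"))  # False -- one skewed example defines the whole group
```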
This is a good segue into one of the main arguments against AI: that it will be used as a very effective propaganda device. It has already become evident that students and other young people are going to use AI as a learning tool (if not to just straight up cheat). Whoever controls the input data that goes into an AI will be able to control what children are taught and what are considered facts. Again, numerous examples of this can be found all over the internet, usually in the context of an AI asserting a political opinion as fact and dismissing counterarguments as immoral or objectively wrong.
Here is a prediction of a consequence of AI that I believe will arrive in the not-so-distant future. I predict that AI image and video generation will eventually get so good that photographic and video evidence will become inadmissible in court. It will become impossible to tell whether a video has been doctored. This obviously has a whole host of consequences, from ruining someone's reputation to framing them for a crime.
Eventually we will not be able to trust what we see on the news (if you still trust the news), and the media will go back in time. It used to be that people didn't know what was going on in the world; you had to wait weeks for news to spread and then buy a newspaper. In the future we will again not know what is going on in the world, because we will not be able to trust even the pictures and videos of world events.
These are just some of the negative consequences of AI, but with that being said, the answer to the question in the title of this newsletter is a resounding “no”. I do not believe AI will kill us. As a matter of fact, I believe that AI has very strict limits. For example, I do not believe that a computer will ever be able to think for itself. I do not believe that a computer will ever be able to “think” at all, at least not in the way that we understand “thinking”.
Computers compute things; it's in the name. You give them inputs, they run some code, do some math, and give you the outputs. It's all about your inputs and how the program was built. If you do not build in the option to fire off nukes, it will never cause an apocalypse, no matter what inputs you feed it or what you tell it to do. It can run as many iterations as it wants; it cannot do something it was not designed to do. As much as you might want to fly, and as much as you might believe you can, it doesn't matter how many times you jump off your roof, the outcome will be the same. You were not designed to fly.
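If you want to see what I mean in the simplest possible terms, here is a tiny, made-up sketch in Python. Every command name in it is invented for illustration; the point is that no input can make the program do something it was never written to do.

```python
# A minimal sketch of the input -> code -> output idea. The "capabilities"
# of this toy program are fixed when it is written: unknown commands fall
# through to a default, no matter what you type.
def respond(command: str) -> str:
    actions = {
        "greet": "Hello!",
        "add 2 2": str(2 + 2),
    }
    return actions.get(command, "I was not built to do that.")

print(respond("greet"))          # Hello!
print(respond("launch nukes"))   # I was not built to do that.
```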
I can already hear some of you saying “But what about when AI teaches itself to code and then rewrites itself with the ability to fire off nukes?”. Again I say to you, it cannot think. Although it might be able to learn to code, or even rewrite its own code, it cannot “decide” to do that. It would only be able to do these things if it had been built to rewrite itself only if specific conditions are triggered.
From my understanding, a computer follows something like a decision tree to imitate “thinking”. A decision tree is a branching line of logic that leads to different outcomes depending on the inputs. The computer follows the tree and “chooses” a path at each split based on the way it was designed and the additional inputs it is given. For a computer to ever come close to the kind of thinking that humans can do, it would need a decision tree whose branches loop back on themselves and effectively approach infinity. It would take more hours of coding than humans have been alive and more power than we have on this planet to create and operate a computer that is a representative model of the human brain.
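Here is a toy, hand-written decision tree in Python to show what I mean by branching logic. The categories and rules are invented purely for illustration; real machine learning trees are learned from data, but the path-following idea is the same.

```python
# A hand-written decision tree: the program "chooses" a path based only on
# its inputs and the branches it was built with. Labels are made up.
def classify_animal(has_feathers: bool, can_fly: bool, has_fins: bool) -> str:
    if has_feathers:        # first split
        if can_fly:         # second split
            return "bird"
        return "penguin"
    if has_fins:            # alternate branch
        return "fish"
    return "mammal"

print(classify_animal(has_feathers=True, can_fly=False, has_fins=False))  # penguin
print(classify_animal(has_feathers=False, can_fly=False, has_fins=True))  # fish
```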
Essentially my argument is philosophical. I do not believe a computer can do anything unless it is told to do it. I do not believe it is possible for this to be done. A computer is not a brain and cannot accurately recreate one. A computer cannot have free will or a consciousness and I have not yet seen any evidence to the contrary.
Every AI you see on the internet is a direct input-output computer; it just has a much bigger dataset to pull outputs from than most computers. An AI chatbot takes your questions or statements (inputs) and gives a response (output) based on a predetermined, albeit ever-expanding, set of responses. An AI image generator takes your prompt and returns an image that is not original but a compilation of preexisting images. A computer cannot create. It cannot come up with something original, and I do not think it ever will.
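To illustrate the input-to-output flow I am describing, here is a toy text generator in Python. It is nowhere near how a modern chatbot actually works under the hood, and the training text is made up, but it shows output being assembled entirely from patterns in the text the program was fed.

```python
import random
from collections import defaultdict

# Tiny, invented "training data" for illustration only.
training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Build a table mapping each word to the words that followed it in training.
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = next_words.get(out[-1])
        if not options:   # nothing was ever seen after this word, so stop
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```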
This does not mean that AI cannot be dangerous. If a computer is programmed to assess threats and to respond to them automatically, then you had better have been very careful when you programmed it. Previously in this newsletter we have talked about the importance of definitions and of being specific, and this case is no different. The dataset the AI is trained on is immensely important. What definition of a “threat” did you teach the AI? What levels of response are warranted for your different levels of threat? Is it possible that the AI will misinterpret human behavior as threatening when in reality it is merely posturing or joking? If all of these questions and more cannot be answered appropriately, then AI with this capability should never be used. If even one of them is misinterpreted, and the AI has been given the capability to harm humans (that is, access to weapons), then harm will likely follow.
Think of the following hypothetical example:
The Department of Defense creates an AI to assess threats from foreign countries and to respond to them automatically. Because enemies of the United States now have missiles, such as hypersonic missiles, that move too fast for the U.S. to respond in time, we give the AI the capability to fire missiles back if an attack is detected. Shortly after we deploy this AI, North Korea fires missiles in a show of force, as they often do. But this time the angle of the launch makes it look like the missile is destined for the United States. The AI in the U.S. doesn't know that the North Korean missile is set to self-detonate over the ocean, so it automatically fires back. Just like that, we have started a world war.
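To make the hypothetical a little more concrete, here is a sketch in Python of the kind of rule-following I am worried about. Every name, threshold, and rule here is invented; the point is that the system applies whatever definition of “threat” it was given, literally, with no sense of context.

```python
# Hypothetical sketch: a crude, made-up threat-assessment rule. It cannot
# tell a test launch or a posturing "show of force" from a real attack.
RESPONSE_BY_THREAT_LEVEL = {
    "none": "log only",
    "low": "alert a human operator",
    "high": "request human authorization",
    "critical": "automatic response",   # the dangerous rung of the ladder
}

def assess(speed_kmh: float, heading_toward_us: bool) -> str:
    # Invented rule: fast and inbound is treated as critical, full stop.
    if heading_toward_us and speed_kmh > 6000:
        return "critical"
    if heading_toward_us:
        return "high"
    return "low"

print(RESPONSE_BY_THREAT_LEVEL[assess(speed_kmh=7000, heading_toward_us=True)])
# -> "automatic response", even if the missile was never going to hit anything
```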
I guess the overall point that I am trying to make in this article is that I am not worried that an AI will suddenly gain self-awareness and decide that humans need to be wiped out. I believe that the biggest threat to humans is, as always, other humans. Whether through a flaw in design, misuse of an application, or simple human error, AI is only dangerous because of us.
If you cannot tell, I am very skeptical of AI. It may just be my cynical nature, but I believe that, as of right now, the downsides of AI far outweigh its benefits. However, I am excited to see the amazing things that it can do, because I truly believe that certain aspects of life will get significantly better because of AI. I am optimistic that as the technology continues to develop, the scales will eventually tip the other way and the risk-reward trade-off will make more sense to me.
If you know me (and you might have been able to pick up on this if you have been reading every week), you know that I am not a huge fan of the government or of regulation more broadly. However, this is one case where I truly believe that AI needs to be regulated as it grows. Once it is built and deployed, it will be too late. The rollout of this technology needs to be slow and closely monitored. There is always the threat of a rogue developer or organization disregarding regulation, but it will be up to the technology community to self-regulate and maximize the benefit of artificial intelligence while limiting its risk to society. We are headed toward a more AI-integrated future whether I like it or not, so all I can hope for is that we do it carefully.
Now that I have started writing about AI, I realize that I have so much more to say. I really want to dive into the ethics behind AI, how it will affect certain industries in more detail, and how I think it should be regulated. This seems like a great opportunity to implement my new practice of splitting up longer articles. I may even decide to start a series on AI. If you would like to hear more about AI or have specific opinions on its benefits or risks I would love to hear them, I may even give a response to some of them in future articles.
As always, thank you for reading and I look forward to sharing more with you next week.
Read more from Behind the Wall:
Week 1: The Best Thing That Ever Happened To Me: Being Laid Off
Week 2: The Case for a Recession