You Don’t Know the Power of the Dark Side (of AI)


Since the release of ChatGPT, artificial intelligence (AI) has become more accessible than ever before. However, viewing AI as an “easy win” is a misperception. The allure of AI lies in the promise of modeling the human brain, and perhaps even improving on its capabilities, producing intelligent machines that can perform tasks without human intervention. But there is a dark side to AI, and it’s likely you haven’t heard much about it. This paper explains some of the potential problems with AI so that users can make informed decisions about how best to apply these capabilities.

Like many of us, you’ve probably been hearing about artificial intelligence in many forms, from ChatGPT’s ability to craft a recipe when prompted with the ingredients in your fridge to Midjourney’s image generator on Discord, which can spit out a realistic-looking image based on the description you feed it. Most of us are in awe of the power these applications demonstrate, but with that power comes great responsibility.

As AI continues to permeate various aspects of life, being informed on both the positives and the negatives of leveraging these applications is critical.


In 2023, a survey found that over one-third of AI experts are concerned that the development of AI capabilities could result in a nuclear-level catastrophe. Why are they worried? Large language models (LLMs), or “chatbots,” are improving constantly, as are the ways they communicate with one another.

Some researchers believe this progress could culminate in “artificial general intelligence” (AGI): AI able to improve itself without human intervention. If you’re skeptical, consider that Google’s AlphaZero taught itself to play chess better than any human in existence, and better than any other AI chess player, by playing against itself millions of times. How long did it take to gain the knowledge it needed to become the best chess player in the world? Nine hours. No, this isn’t a sci-fi movie; there is a potential danger that machines could slip beyond human oversight. There’s even a name for this worry: the control problem. If the name sounds ominous, that’s because it is: the control problem is the challenge of keeping a superintelligent AI under human control, since such a system could anticipate the ways humans might try to rein it in and act to prevent being switched off.

Geoffrey Hinton, the “godfather of AI,” left his position at Google in 2023 to speak out on the dangers of AI. Hinton shared the Turing Award (the tech industry’s Nobel Prize) with two other scientists in 2019 for pioneering advances in AI, but until recently he thought we were 50 or more years away from AI becoming smarter than people.

Now, Hinton thinks that AI could threaten human existence as we know it. The co-founders of the Center for Humane Technology have said that “when we are in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that’s not an equation that is going to end well.”

The bottom line is that AI, while extremely powerful and user-friendly, isn’t as foolproof as it might seem at first glance. Even if computers can’t outsmart us, humans can leverage this technology to exploit others in ways that we aren’t prepared to combat. As one example, machine learning carries the risk of data bias, because machines learn by consuming massive amounts of historical data. If the data consumed carries bias, so will the decisions and outcomes selected by AI. Similarly, AI’s proficiency at writing code can be used to propagate malicious software at an alarming rate, since it can produce a program to do almost anything in a matter of seconds.
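To make the data-bias mechanism concrete, here is a minimal, purely illustrative sketch; the groups, records, and numbers are all hypothetical. A model that learns hiring recommendations from biased historical records simply reproduces the old disparity:

```python
# Toy sketch of data bias (all data hypothetical): a model that "learns"
# from historical hiring records in which equally qualified candidates
# from group B were hired less often will reproduce that bias.

from collections import defaultdict

# (group, qualified, hired) -- hypothetical historical records
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(records):
    """Learn the historical hire rate for qualified candidates per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, qualified, hired in records:
        if qualified:
            counts[group][0] += int(hired)
            counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group):
    """Recommend hiring when the learned rate for the group exceeds 0.5."""
    return model[group] > 0.5

model = train(history)
# Two equally qualified candidates receive different recommendations,
# purely because the training data carried the old bias.
print(predict(model, "A"))  # True
print(predict(model, "B"))  # False
```

Nothing in the code is malicious; the skew comes entirely from the records it was fed, which is exactly why auditing training data matters.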


Image generated via Midjourney AI on Discord

Out-of-the-box thinking still needs to come from human decision-makers, in part because AI lacks emotional intelligence. At the end of the day, AI is still an algorithm, and algorithms pursue the path of least resistance toward whatever objective they are given. In one terrifying example, an AI tasked with decelerating planes as they landed recommended crushing the plane, because that would stop it instantly. This is just one instance that makes it obvious we can’t assume AI will think as a human would. It is incapable of factoring in emotional intelligence, and if it is trained on the wrong data, things can easily go awry.
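The plane anecdote reflects a general pattern: an optimizer satisfies the literal objective while violating the intent behind it. A minimal sketch, with entirely hypothetical numbers, of how a naive objective rewards the destructive option unless the safety constraint humans take for granted is spelled out:

```python
# Toy sketch (hypothetical numbers): asked only to minimize stopping
# distance, an optimizer picks the most violent braking force available --
# the software equivalent of "crush the plane" -- unless a safety
# constraint is made explicit.

def stopping_distance(speed, decel):
    # distance = v^2 / (2a) for constant deceleration
    return speed**2 / (2 * decel)

candidate_decels = [2.0, 4.0, 8.0, 1000.0]  # m/s^2; 1000 is absurdly violent
MAX_SAFE_DECEL = 6.0  # the constraint a human would consider obvious

# Naive objective: minimize stopping distance, nothing else.
naive = min(candidate_decels, key=lambda a: stopping_distance(70.0, a))

# Same objective, but only over options a human would accept.
safe = min((a for a in candidate_decels if a <= MAX_SAFE_DECEL),
           key=lambda a: stopping_distance(70.0, a))

print(naive)  # 1000.0 -- the bare objective rewards the destructive choice
print(safe)   # 4.0   -- sensible only because the constraint was stated
```

The point is not that the math is hard; it is that every unstated human assumption must be encoded explicitly, or the algorithm will happily ignore it.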

IBM Watson, the company’s flagship AI platform, was at the center of a $62 million effort to guide oncologists in making treatment decisions for cancer patients. The AI was trained on a database of cancer research, but in the field Watson produced incorrect and unsafe recommendations for patients.

The root cause of the issue was that IBM chose to train Watson on representative but artificial cases, and the sample was not broad enough to teach a machine to reason the way an experienced clinician would. Depending on the task, how the machine was trained can be critically important. It may not matter much if you’re just looking to cook up what’s sitting in your fridge, but using AI to make decisions that affect lives can have unintended consequences.

Here at Vee Healthtek, we employ humans, not machines: people who leverage their unique skill sets to benefit our clients. Rather than take the path of least resistance, we find and implement the optimal approach. We use AI where appropriate and can recommend ways to use it that will benefit your organization. Allow us to help you navigate the world of AI and balance the mix of solutions that’s right for your business. We’ll make sure you avoid the dark side.


Image generated with the prompt “a graphic in the style of an Excel spreadsheet chart showing the rise of AI compared to creation of fake social media accounts, phishing, and global warming.”


Gatzia, D. E. (2020, September 16). Artificial intelligence’s alarming dark side. Retrieved from idose.org: https://idose.org/artificial-intelligences-alarming-dark-side/

Hunt, T. (2023, May 23). Here’s why AI may be extremely dangerous—whether it’s conscious or not. Retrieved from ScientificAmerican.com: https://www.scientificamerican.com/article/heres-why-ai-may-be-extremely-dangerous-whether-its-conscious-or-not/

Kaur, B. (2023, May 1). Artificial intelligence pioneer leaves Google and warns about technology’s future. Retrieved from NBCNews.com: https://www.nbcnews.com/tech/tech-news/artificial-intelligence-pioneer-leaves-google-warns-technologys-future-rcna82242

Oremus, W. (2023, April 3). The AI backlash is here. It’s focused on the wrong things. Retrieved from WashingtonPost.com: https://www.washingtonpost.com/technology/2023/04/04/musk-ai-letter-pause-robots-jobs/