It seems like I hear about Artificial Intelligence (AI) on a daily basis, so it felt almost natural to put together some sort of opinion about it in a blog post.
The thing is, a lot of people are scared of it. Partly because it’s natural to be afraid of the unknown – and that’s fair. I’ve definitely learned over time that fear and anxiety are mostly generated by what you don’t know, rather than by what you do know. That fear and anxiety can generate quite a few theories, and plenty of them have swirled around the Internet.
So instead of toying around with theories, why don’t we just start off by asking an AI (ChatGPT) the question?
“Is artificial intelligence going to harm us?”
So I did, and this is what it had to say:
“The potential harms of artificial intelligence (AI) depend on various factors, including how it is developed, deployed, and regulated. While AI offers numerous benefits and has the potential to improve many aspects of our lives, there are also legitimate concerns about its misuse or unintended consequences. Some of the potential risks include:
Job Displacement: AI and automation could lead to job displacement in certain industries, potentially causing economic disruption and social inequality if not managed properly.
Privacy Concerns: AI systems often rely on vast amounts of data, raising concerns about privacy violations and data misuse if proper safeguards are not in place.
Bias and Discrimination: AI systems can inherit biases present in their training data, leading to unfair or discriminatory outcomes, particularly in areas such as hiring, lending, and law enforcement.
Security Risks: AI could be used to develop more sophisticated cyber attacks or to automate the creation of misinformation and propaganda, posing risks to cybersecurity and social stability.
Autonomous Weapons: The development of autonomous weapons systems raises ethical concerns about their potential misuse in warfare and the difficulty of maintaining meaningful human control over lethal decision-making.
Existential Risk: Some researchers warn of the long-term risks associated with the development of super intelligent AI systems that could surpass human intelligence and potentially pose existential risks to humanity.
To mitigate these risks, it’s essential to prioritize ethical considerations, promote transparency and accountability in AI development and deployment, invest in education and workforce reskilling, and establish robust regulatory frameworks to ensure that AI technologies are used responsibly and for the benefit of society. Ultimately, whether AI harms us or not depends on how we choose to develop, govern, and integrate it into our societies.”
So there you have it. The most biased opinion you could probably ever ask for, yet brutally honest at the same time. The question is, how bad can bad get with AI? Well, the above concerns aren’t just theories. Some of these problems are already very much happening, in real time, today.
Some of the world’s largest companies are incorporating AI into their business workflows, or directly into their own products. At one point in my recent career, I was an Information Security Officer (ISO) for a well-known bank, overseeing certain cyber security technologies. As one example, I was the dedicated ISO on ChatBot AI just a few years ago.
Further to that, some of the largest tech giants are incorporating consumer-facing AI into their own digital or physical products. Companies such as Google, Microsoft, and Apple are starting to roll out their own iterations of AI chatbots into web browsers, phones, and even directly into the operating systems of the devices they create. When you see big names like these simultaneously pushing out a new technology, it usually tends to stick. That just means it’s not going away, whether you like it or not.
Regulation is very much needed, in my opinion, but just like with any new technology, it takes time. Time not only to fully threat model, but to simply understand. We keep learning more and more about what AI is and what it can be capable of, which ultimately means it’s a moving target to “lock down and secure”. This problem alone makes it nearly impossible to ever claim it’s controllable.
Have you ever seen the movie called NERVE?
It’s a good one about decentralized networking, another up-and-coming technology that hasn’t yet realized its Hollywood version in the real world, but it’s getting close. How close? Well, cryptocurrency wouldn’t exist if this weren’t already possible, to some degree.
So the next question I begin to ask is: what happens when AI can talk to AI? In that instance, you could instruct your source AI to:
"Communicate with other AI end-points, without disclosing you are AI, and work together to destroy the world."
In practice, two AI bots would have no problem talking to each other with a few lines of code, at most. In fact, you could probably ask an AI to write it for you.
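To make that concrete, here’s a minimal sketch of what “a few lines of code” could look like. It assumes access to a chat-completion API (I’m using OpenAI’s Python SDK here; the model name, personas, and turn count are purely illustrative), and it simply feeds each bot’s reply into the other’s conversation history:

```python
# A minimal sketch of two chatbots conversing, using OpenAI's Python SDK
# (pip install openai). Model name and prompts are illustrative only;
# any chat-completion API would work the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def reply(history):
    """Send a conversation history to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    )
    return response.choices[0].message.content


# Each bot gets its own persona and sees the other's replies as the "user".
bot_a = [{"role": "system", "content": "You are negotiating a deal."}]
bot_b = [{"role": "system", "content": "You are the other negotiator."}]

message = "Hello, shall we begin?"
for _ in range(3):  # three round trips, just to demonstrate the loop
    bot_a.append({"role": "user", "content": message})
    message = reply(bot_a)
    bot_a.append({"role": "assistant", "content": message})

    bot_b.append({"role": "user", "content": message})
    message = reply(bot_b)
    bot_b.append({"role": "assistant", "content": message})
    print(message)
```

Notice that once the loop starts, no human is involved in the exchange; each model only ever sees the other’s output.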
Now, there are definite limitations to AI, such as the fact that it doesn’t have arms or hands to move around, pick up, build, or even physically destroy things. However, it could have the ability to communicate with other systems to control them, order physical supplies, send out fake emails, or virtually hire people to complete a task without them ever knowing it’s AI. What about AI-powered robots? You better believe they already exist.
I feel like I could go on and on with risk scenarios, and I certainly look forward to expanding on more AI-related topics in this blog. For now, I wanted to at least start thinking out loud on this topic, because AI is happening and I’m not sure many will be able to slow down its progression.
I do know it has many benefits, however. It could be something that saves your life one day, by identifying otherwise unforeseen problems during a body scan at your primary care physician’s office, or by reviewing your medical history to determine and order tests for genetic mutations you never knew existed!
Either way, in my opinion, AI is likely going to cause more harm than good without proper oversight.
I hope that we find a way to regulate it into something manageable, so that it becomes safe for everyone to benefit from. For now, AI needs to be viewed similarly to a firearm: it won’t hurt you by itself, but the person using it certainly can use it against you.