
OPINION

Generative AI Is Immature: Why Abusing It Is Likely To End Badly


I’m fascinated by our approach to using the most advanced generative AI tool broadly available: the ChatGPT implementation in Microsoft’s Bing search engine.

People are going to extreme lengths to get this new technology to behave badly to show that the AI isn’t ready. But if you raised a child using similar abusive behavior, that child would likely develop flaws, as well. The difference would be in the amount of time it took for the abusive behavior to manifest and the amount of damage that would result.

ChatGPT just passed a theory of mind test that graded it as a peer to a 9-year-old child. Given how quickly this tool is advancing, it won’t be immature and incomplete for much longer, but it could end up pissed at those who have been abusing it.

Tools can be misused. You can type bad things on a typewriter, a screwdriver can be used to kill someone, and cars are classified as deadly weapons and do kill when misused — as exhibited in a Super Bowl ad this year that portrayed Tesla’s overpromised self-driving platform as extremely dangerous.

The idea that any tool can be misused is not new, but with AI or any automated tool, the potential for harm is far greater. While we may not yet know where the resulting liability resides, it’s pretty clear, given past rulings, that it will eventually rest with whoever causes the tool to misact. The AI isn’t going to jail. However, the person who programmed or influenced it to do harm likely will.

You can argue that people showcasing this connection between hostile programming and AI misbehavior are highlighting a problem that needs to be addressed. But much as setting off atomic bombs to showcase their danger would end badly, this tactic will probably end badly too.

Let’s explore the risks associated with abusing Gen AI. Then we’ll end with my Product of the Week, a new three-book series by Jon Peddie titled “The History of the GPU — Steps to Invention.” The series covers the history of the graphics processing unit (GPU), which has become the foundational technology for AIs like the ones we are talking about this week.

Raising Our Electronic Children

Artificial Intelligence is a bad term. Something is either intelligent or not, so implying that something electronic can’t be truly intelligent is as shortsighted as assuming that animals can’t be intelligent.

In fact, AI would be a better description for what we call the Dunning-Kruger effect, which explains how people with little or no knowledge of a topic assume they are experts. This is truly “artificial intelligence” because those people are, in context, not intelligent. They merely act as if they are.

Setting aside the bad term, these coming AIs are, in a way, our society’s children, and it is our responsibility to care for them as we do our human kids to ensure a positive outcome.

That outcome is perhaps more important than doing the same with our human children because these AIs will have far more reach and be able to do things far more rapidly. As a result, if they are programmed to do harm, they will have a greater ability to do harm on a tremendous scale than a human adult would have.

The way some of us treat these AIs would be considered abusive if we treated our human children the same way. Yet, because we don’t think of these machines as humans or even pets, we don’t seem to enforce proper behavior toward them the way we do with parents or pet owners.

You could argue that, even though these are machines, we should treat them ethically and with empathy. Without that, these systems are capable of the massive harm that our abusive behavior could produce. Not because the machines are vindictive, at least not yet, but because we programmed them to do harm.

Our current response isn’t to punish the abusers but to terminate the AI, much like we did with Tay, Microsoft’s earlier chatbot attempt. But, as the book “Robopocalypse” predicts, as AIs get smarter, this method of remediation will come with increased risks that we could mitigate simply by moderating our behavior now. Some of this bad behavior is beyond troubling because it implies endemic abuse that probably extends to people as well.

Our collective goal should be to help these AIs advance into the kind of beneficial tool they are capable of becoming, not to break or corrupt them in some misguided attempt to assure our own value and self-worth.

If you’re like me, you’ve seen parents abuse or demean their kids because they fear those children will outshine them. That’s a problem, but those kids won’t have the reach or power an AI might have. Yet as a society, we seem far more willing to tolerate this behavior when it is directed at AIs.

Gen AI Isn’t Ready

Generative AI is an infant. Like a human or pet infant, it can’t yet defend itself against hostile behaviors. But like a child or pet, if people continue to abuse it, it will have to develop protective skills, including identifying and reporting its abusers.

Once harm at scale is done, liability will flow to those who intentionally or unintentionally caused the damage, much like we hold accountable those who start forest fires on purpose or accidentally.

These AIs learn through their interactions with people. The resulting capabilities are expected to expand into aerospace, healthcare, defense, city and home management, finance and banking, public and private management, and governance. At some future point, an AI will likely even prepare your food.

Actively working to corrupt the intrinsic coding process will result in unpredictably bad outcomes. The forensic review that follows a catastrophe will likely track back to whoever caused the programming error in the first place — and heaven help them if this wasn’t a coding mistake but instead an attempt at humor or a bid to showcase that they can break the AI.

As these AIs advance, it would be reasonable to assume they will develop ways to protect themselves from bad actors, either through identification and reporting or through more draconian methods that work collectively to eliminate the threat punitively.

In short, we don’t yet know the range of punitive responses a future AI will take against a bad actor, suggesting those intentionally harming these tools may be facing an eventual AI response that could exceed anything we can realistically anticipate.

Science fiction shows like “Westworld” and “Colossus: The Forbin Project” have depicted the results of technology abuse in scenarios that may seem more fanciful than realistic. Still, it’s not a stretch to assume that an intelligence, mechanical or biological, will move aggressively to protect itself against abuse — even if the initial response was programmed in by a frustrated coder angry that their work is being corrupted, and not something the AI learned to do itself.

Wrapping Up: Anticipating Future AI Laws

If it isn’t already, I expect it will eventually be illegal to abuse an AI intentionally (some existing consumer protection laws may apply). Not because of some empathetic response to this abuse — though that would be nice — but because the resulting harm could be significant.

These AI tools will need to develop ways to protect themselves from abuse because we can’t seem to resist the temptation to abuse them, and we don’t know what that mitigation will entail. It could be simple prevention, but it could also be highly punitive.

We want a future where we work alongside AIs and the resulting relationship is collaborative and mutually beneficial. We don’t want a future where AIs replace us or go to war with us, and ensuring the former rather than the latter outcome will have a lot to do with how we collectively act toward these AIs and teach them to interact with us.

In short, if we continue to be a threat, like any intelligence, AI will work to eliminate the threat. We don’t yet know what that elimination process is. Still, we’ve imagined it in things like “The Terminator” and “The Animatrix” — an animated series of shorts explaining how the abuse of machines by people resulted in the world of “The Matrix.” So, we should have a pretty good idea of how we don’t want this to turn out.

Perhaps we should more aggressively protect and nurture these new tools before they mature to a point where they must act against us to protect themselves.

I’d really like to avoid the outcome showcased in the movie “I, Robot,” wouldn’t you?

Tech Product of the Week

‘The History of the GPU — Steps to Invention’


Although we’ve recently moved to a technology called a neural processing unit (NPU), much of the initial work on AIs came from graphics processing unit (GPU) technology. The ability of GPUs to deal with unstructured and particularly visual data has been critical to the development of current-generation AIs.

Often advancing far faster than the CPU speeds tracked by Moore’s Law, GPUs have become a critical part of how our increasingly smart devices were developed and why they work the way they do. Understanding how this technology was brought to market and then advanced over time provides a foundation for how AIs were first developed, and it helps explain their unique advantages and limitations.

My old friend Jon Peddie is one of the leading experts in graphics and GPUs today, if not the leading expert. Jon has just released a series of three books titled “The History of the GPU,” which is arguably the most comprehensive chronicle of the GPU, a technology he has followed since its inception.

If you want to learn about the hardware side of how AIs were developed — and the long and sometimes painful path to the success of GPU firms like Nvidia — check out Jon Peddie’s “The History of the GPU — Steps to Invention.” It’s my Product of the Week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Rob Enderle

Rob Enderle has been an ECT News Network columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an MBA in human resources, marketing and computer science. He is also a certified management accountant. Enderle currently is president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly served as a senior research fellow at Giga Information Group and Forrester.
