The Dark Dangers of AI: Why Regulation Is Crucial Now
Artificial Intelligence (AI) is rapidly transforming society. Milestones that took the personal computer and the internet years to reach, AI has hit in a matter of weeks and months. And we are only in the first inning.
The pace of AI innovation within just the past year is astounding, and there's little reason to think it will slow anytime soon. From DALL-E and ChatGPT to AlphaCode and Gato, AI has gone from handling narrow tasks to displaying increasingly general capabilities almost overnight.
That pace of innovation is not an unalloyed good for humanity, unfortunately. AI cannot exercise judgment like humans can (for now, at least), and it's only as good as the data it has consumed. For every innovation opportunity there is an associated risk, and in many cases the risk may prove far more threatening than the corresponding benefit.
If we underestimate the dangers of AI now, we will pay dearly for it later. We need regulation today, and some industry insiders are already calling for it. Whether that regulation takes the form of self-regulation, government oversight, or some hybrid model is up for debate; I'll break down each option below.
One thing is certain. As we’ve witnessed with other technology and social media innovations, we cannot sit back and reasonably expect a burgeoning industry like AI to adequately police itself.
The dangers of AI
Most people hear about the wonders of AI. Some of the tools I highlighted above, particularly ChatGPT, have exploded into the mainstream. But as I said, for every benefit, there’s often a corresponding risk.
Take ChaosGPT, for example. This tool's stated goal – in part a demonstration of the extreme power of AI – is “empowering GPT with Internet and Memory to Destroy Humanity.” It is literally designed to cause chaos.
This is just the tip of the AI iceberg. Anything connected to the internet is at risk of being manipulated or hacked by AI. One could imagine – in extreme situations – bad actor AIs bringing down airplanes, causing natural disasters, or even starting pandemics and other mass extinction events.
The dangers of AI abound. There are numerous other risks and threats:
- Bias and discrimination. As mentioned, AI is only as good as what it consumes. If the data or the developers are biased or discriminatory, the AI will be too. There are no universal standards, principles, or mechanisms to reasonably ensure AI systems don't behave this way. Left unchecked, they will perpetuate systemic inequality far faster than humans could alone.
- Privacy and security. Current rates of identity theft and fraud could skyrocket if bad actors harness the power of AI for nefarious purposes. In addition, fragmented public data on you, me, and everyone we know could be used against us in targeted advertising schemes (without our knowledge). Not to mention the power governments could gain from an enhanced surveillance state. The checkers need to be checked too!
- Autonomous weapons. AI could be used to develop lethal autonomous weapons like killer robots and “slaughterbots.” These weapons could cause significant harm without human intervention, which raises numerous ethical and safety concerns.
- Fake news and misinformation. Think social media proliferates too much fake news and misinformation? Imagine what a trained AI could do. Not only could it generate fake news and misinformation, but it could also find the most efficient and effective ways to spread that information to the maximum number of people. It could even determine the best ways to undermine credible news sources.
- Black box algorithms. Many AI systems are so complex that even developers sometimes struggle to explain them. For anyone who recently witnessed the horror show that was Congress trying to understand TikTok’s operations, imagine watching those same lawmakers trying to grasp AI. Unless lawmakers and the general public can understand what AI does in layman’s terms, it will be challenging to hold these systems fully accountable.
Can we rely on self-regulation?
Many companies in the AI space – like most of the technology world – have trust and safety teams. These teams are meant to check behavior internally and provide some degree of quality control before releasing new products into the wild. They also need to monitor how existing public products are being used.
In the case of OpenAI, the company behind DALL-E and ChatGPT, the trust and safety teams restrict certain uses for those products. I’ve been unable to have DALL-E generate certain images of Donald Trump, for example. ChatGPT will refuse to do certain (suggestive) things you ask it.
It’s great that these baseline controls are already in place. But as we saw with Facebook and many other technology companies, voluntary self-enforcement is not enough. I wish it were. The last thing we want is to stifle innovation and bog down the burgeoning AI industry with red tape. It needs the freedom to move fast and break things, to a degree. Otherwise, we leave the door open for another country that cares less about regulation to take the lead.
Relying solely on self-regulation, however, is unrealistic given companies’ incentive structures. Yes, nobody wants another Cambridge Analytica scandal, but sometimes companies weigh the risks and decide it’s worth cracking a few eggs to make the best omelet in town. That calculus could turn into a race to the bottom if competitors decide to take similar risks.
There’s no guarantee all companies will agree on the same self-regulatory model either. Had regulators organized themselves in the early days of the internet and met regularly at a global level, maybe they could have devised principles that helped mitigate at least some of the data, privacy, misinformation, and other risks we’re now struggling to combat. They should not miss their chance with artificial intelligence.
The window may already be closing.
Mitigating the dangers of AI through a collaborative approach
The AI industry should take the lead. They should be given leeway to innovate and shock the world with the potential of AI technology. But at the same time, they should be encouraged to develop and implement principles, guidelines, and standards for AI development and use. A global consortium of regulators should regularly review those operating principles and provide feedback.
Global regulators also need to develop and implement their own minimum standards that ideally are consistent across jurisdictions. The internet is everywhere after all! I know I’m imagining a rosy world of idealism, but it’s important to try. Without clear and consistent minimum standards, it will be next to impossible for AI companies and startups to understand where the lines are in their efforts to innovate.
A key part of this collaborative approach is opening the black box of AI. Regulators and the general public should have a general awareness of AI that operates in their jurisdictions or daily lives. Appropriate disclosure and transparency are crucial. There should be clear penalties and legal liability for failing to meet minimum standards for disclosure. For AI to be fully effective, people need to have the opportunity to learn what they’re getting themselves into.
AI, like many tools, can be used in different ways
I’m very bullish on AI. I predicted an AI art renaissance by year-end. I also think AI will eventually empower humans in almost every industry, making all of us more creative, effective, and efficient. The innovations we’ve already seen in the past year are only a glimpse of what’s to come.
The dangers of AI, however, should not be ignored. The scale and depth of the effects these tools can have are like nothing humans have seen. As with many tools, they are only as safe and ethical as their developers and users. History shows time and again that, given the opportunity and the power to sow extreme chaos and havoc for personal gain, many bad actors will jump at the chance.
We have a unique opportunity at this early stage to get AI innovation and regulation right. By taking a balanced approach that gives flexibility to the industry while setting consistent minimum standards (that are subject to change as the industry evolves), we would at least be making an effort to learn the lessons of technology history.
The dangers of AI are real. But so are the incredible potential benefits to human civilization. We just need to find the ideal balance between the two.