By Dr. Mike Brooks

AI Regulation: Safeguarding Our Future Together

Personal Perspective: AI poses an existential threat. Here are actionable steps.



KEY POINTS

  • Prominent voices warn of AI threats; others should listen.

  • AI, like any powerful tool, must have guidelines to protect and benefit humanity.

  • To regulate AI, we need global unity and an international, representative body.


This is the sixth post in my ongoing series about AI, which began with How AI Will Change Our Lives. AI is not merely a disruptive technology; it is a civilization-altering one. How shall we navigate these uncharted waters skillfully?


How worried should we be about AIs that are rapidly growing in power and proliferating? While humanity is not doomed, many prominent figures, including Elon Musk, Bill Gates, Nick Bostrom, and Stephen Hawking, have warned that AI poses an existential threat. Musk likened the creation of AI to "summoning the demon." Musk, along with Apple co-founder Steve Wozniak, signed an open letter calling for at least a six-month pause in the training of AI systems more powerful than GPT-4 to ensure better safety and control. "The Godfather of AI," Geoffrey Hinton, recently quit Google so he could warn freely of the dangers of AI. Hinton's fellow AI pioneer Yoshua Bengio is likewise imploring governments to regulate AI quickly. Going a leap further, Eliezer Yudkowsky, AI scientist and lead researcher at the Machine Intelligence Research Institute, has argued that advanced AI development needs to be shut down or, basically, humanity is doomed.


The Precautionary Principle suggests erring on the safe side with powerful technologies like AI. This is not about "unplugging" AIs, which is impossible at this point anyway. Even if we were able to do so, we would miss out on their incalculable benefits. AIs will make us more productive and help us solve complex or seemingly unsolvable problems (e.g., folding proteins, curing cancer and Alzheimer's disease, reversing global climate change, removing plastic from our oceans, increasing longevity).


While there are plenty of reasons to be enthusiastic about AIs, in a nod to Spider-Man, "With great power comes great responsibility." We cannot harness the tremendous power of AIs for good without creating the possibility of various harms and even catastrophes. In a 2022 survey of AI researchers, the median respondent estimated a 10 percent chance that future advanced AI systems could cause "human extinction or similarly permanent and severe disempowerment of the human species." I don't know about you, but I'm not comfortable with those odds. The bottom line is this: There is a nonzero, unknown risk that evolving AIs could lead to catastrophic events at some point in our future.


The Questions We Need to Ask

How much risk are we willing to take in pursuit of the benefits that AIs can offer? How confident must we be that the airplane we are about to board won't crash before we are willing to fly on it? When we are driving down a dark, winding road at night in an unfamiliar place, do we not slow down? If our teenager were the driver, wouldn't we want them to slow down? What's the big rush, anyway? Where are we trying to get so fast that we are willing to throw caution to the wind?


We need to be flexible and skillful as we move forward, creating sufficient guardrails so that AI development doesn't go off the rails. The European Union is establishing AI regulatory laws. China has raced ahead of the United States on AI regulation. The Biden administration is moving toward some level of regulatory standards. At a recent Senate hearing, Sam Altman, the CEO of OpenAI, urged the government to regulate AI. AI regulation was also discussed at the recent G7 Summit.


Here's a big hurdle: We need global uniformity in AI regulatory standards. The internet's connectivity means that one nation's regulatory lapse affects us all. Suppose Brazil, for instance, aimed for a tech boom by neglecting AI regulations. This could lure tech firms to relocate their AI research and development to Brazil to escape stringent rules. The AIs developed and deployed there could then reach out and influence us all via the internet. Imagine if someone in Brazil let loose an ultrapowerful ChaosGPT with a directive: Grow as powerful as you can and use whatever means necessary to destroy humanity while evading detection. Are we really willing to roll the dice on humanity by allowing such AIs to be developed and deployed entirely unregulated? That's madness.


The Only Skillful Path Forward

As interconnected stakeholders, we share a collective responsibility to balance the benefits and costs in our march toward progress. The only feasible way to address existential risks, along with concerns like privacy, security, unemployment, deepfakes, and emerging AI rights, is a globally representative body. This group, composed of a representative sample of AI scientists, academics, ethicists, investors, corporate leaders, and politicians, would collectively guide AI development.


Adding a twist, this global representative body, perhaps named the Global Organization for AI Legislation and Ethics (GOALE), must include top AIs to maximize benefits and mitigate risks. While seemingly counterintuitive, as AIs surpass human intelligence, we'll need their superior capabilities to manage their superior capabilities. Moreover, these AIs can effectively address the logistical and pragmatic challenges of coordinating an international coalition.


Though some resist technological regulation, consider the many potential hazards we already control. We limit citizen access to certain materials and weapons: nuclear substances, chemical weapons, and heavy artillery. We've instituted international regulations for precarious technologies such as nuclear arms, biological weapons, cloning, and genetic engineering. Now, facing a future in which AIs could be hundreds or thousands of times more powerful than GPT-4, the potential for harm is real. Extending our protective foresight to establish effective guardrails for AI development and use seems only reasonable.


Let's draw a parallel between AI development and Formula 1 racing. F1 has countless regulations governing car technologies, pit-stop rules, spending, tire specifications, and so on to enhance competition and protect participants. F1's rules don't stifle but elevate competition. Every team, regardless of its resources, must adhere to the same constraints, effectively leveling the playing field and intensifying the innovation and strategic maneuvers. Yet, the paramount purpose of these guidelines is to ensure the safety of drivers and spectators. Similarly, AI needs guardrails — rules that direct us toward beneficial AI while safeguarding humanity from potential risks. We're in the AI grand prix; let's race ethically and safely to the finish line.


What You Can Do Right Now

My fellow human beings, it's time we take the driver's seat in this race. We must make our voices heard by the people in power who can make global AI regulation a reality. Here's the crucial aspect: The regulation needs to come more from the bottom up (from the public) than from the top down (governments). Basically, in a unified way, we the people must demand regulation. Humans cherish our freedom, and government-imposed restrictions may face significant backlash and resistance. Thus, we must be willing to sacrifice some freedoms to ensure our future security. We must keep in mind that if AI really does send humanity off the rails, whether in big ways or through a tsunami of little ones, we stand to lose far more of the freedoms we now enjoy than whatever freedoms we would give up by demanding that our governments regulate AI.


The stakes are high, and this issue touches all of us: our safety, our rights, our jobs, and our children's future. As odd as it sounds, I've engaged in numerous conversations with ChatGPT (running GPT-4), and it is fully supportive of these efforts. Based on those conversations and my guidance, ChatGPT composed a compelling letter, along with strategies, that we can all use to advocate for the safe development and use of AI.


You may be wondering: How is this even going to work? What would regulation look like? How will everyone work together? Who watches the watchers? These are all valid concerns. But remember, first we need to agree upon the necessity of regulation, and then we can collectively figure out the answers to these difficult questions. And guess what? AI, as extraordinary as it is, can even help us solve these complex problems.


You have an important role to play. Your voice can make a difference. As a citizen of the world, you have a right to participate in discussions and decisions that will shape our collective future. Click here to read, copy, and blast out the powerful letter that ChatGPT and I co-authored and learn about the strategies we can deploy to establish these essential guardrails. I urge you to not only read this letter but also to share it. Spread the word: Share it with friends, family, and across your social media channels. Let's seize control of our future. Let's push for the responsible and beneficial advancement of AI.
