On the regulation of AI

The attempt to regulate AI seems so futile: we’re trying to regulate something that doesn’t even truly exist yet. We don’t have AI we can call sentient. The rationale is well-founded, but what we’re really saying is, “We know we can make something better than us in every way imaginable, so we’ll limit its proliferation so that humans are superseded not by AI, but by our own demise.”

So, after this has been done ad nauseam, it looks like the “Future of Life Institute” (as if they were gods who possibly have any power to control the ultimate fate of humanity!) has disseminated the Asilomar AI Principles (Asilomar is just the place the meeting was held. Apparently, these astute individuals really like the beach; their previous conference, two years prior, was in Puerto Rico). They have garnered thousands of signatures from prestigious, accomplished AI researchers.

The Asilomar Principles are an outline of 23 issues/concepts that should be adhered to in the creation and continuation of AI. I’m going to take it apart, bit by bit.

 

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

What is “undirected intelligence”? Does this mean we can’t throw AI at a big hunk of data and let it draw its own conclusions? That is, we can’t feed an AI a million journals and let it put two and two together to write a literature review for us. And we can’t use AI to troll for us on 4chan.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

They throw this word “beneficial” around but I don’t know what exactly “beneficial” means. Cars are beneficial, but they can also be used to kill people.

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

You get programmers to stop writing lazy, dirty, unoptimized code that disregards basic security and design principles. We can’t even make an “unhackable” website; how could we possibly make an AI that is “unhackable” at the core?

  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?

You can’t. Robots replace human labor. The only job security left will be programming the robots themselves, and eventually even AI will take care of patching its own operating systems. Purpose – well, we’ve always had a problem with that. Maybe you can add some purpose to your life with prayer – or is that not “productive” enough for you?

  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

Legal systems can’t even cope with today’s technology. Go look at the DMCA: it was passed in 1998, back in the age of dial-up, and is in grave need of replacement to make the system fairer. Today you can post a video within seconds, and it most likely contains some sort of copyrighted content.

  • What set of values should AI be aligned with, and what legal and ethical status should it have?

Most likely, they will be whatever morals the AI’s developers personally adhere to. Like father, like son.

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

Like lobbying? I don’t think I’ve ever seen “constructive and healthy exchange” made on the Congressional floor. Dirty money always finds its way into the system, like a cockroach infestation.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

Doesn’t this apply to pretty much everything research-related? Oh, that’s why it’s titled “research culture.” I’ll give them this one for reminding the reader about common sense.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

I almost interpreted this as “AI should avoid being racist.” Anyhow, this is literally capitalism: competing teams will cut corners and do whatever they can to lead in the market. This is probably the liberal thinking of the researchers leaking into the paper: they are suggesting that capitalism is broken and that we need to be like post-industrial European countries, with their semi-socialism. In a way, they’re right: capitalism is broken – economic analysis fails to factor in long-term environmental impacts of increases in aggregate supply and demand.

Ethics and Values

Why do they sidestep the word “morals”? Does this word not exist anymore, or is it somehow confined to something that is inherently missing from the researchers?

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

“Safety first.” Okay…

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

You want a black box for your AI? Do you want to give them a room where you can interrogate them for info? Look, we can’t even reliably extract alibis from humans, so how can we peer into an AI’s brain and get anything intelligible out of it?

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

This is not a place AI should delve into anyway. We will not trust AI to make important judicial decisions all by itself, not in a hundred years.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

Meaning you want to be able to sue individual engineers, rather than the company as a whole, for faults in an AI. Then what’s the point of a company if it doesn’t protect its employees from liability?!

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

What if the AI turns out to align with those values better than humans do? What if the company that made it becomes corrupt and says, “This AI is too truthful, so we’ll shut it down for not aligning with our values”?

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Debatable topics like abortion come to mind. Where’s the compatibility in that?

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

Again, we don’t even have control over this right now, so why would we have control over it in the future with AI?

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

And it probably will “curtail” our liberty. Google will do it for the money, just watch.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

What a cliché phrase… ohhh. It’s as if I didn’t include this exact phrase in my MIT application, too gullible to realize that literally everyone else had the exact same intentions when they applied to MIT too.

When Adobe sells Photoshop, is it empowering people to become graphic artists? Is it empowering everyone, really, with that $600 price tag? Likewise, AI is just software, and like any software, it has a price tag, and the software can and will be put for sale. Maybe in 80 years, I’ll find myself trying to justify to a sentient AI why I pirated it.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Reminds me of the imperialist “Greater East Asia Co-Prosperity Sphere.” Did Japan really want to share the money with China? No, of course not. Likewise, it’s hard to trust large companies that appear to be doing what is morally just.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

I can’t even tell Excel to temporarily stop turning my strings into numbers; commanding an AI to leave a specific task for a human to do manually won’t be any easier. And what if the data is in a raw binary format intended to be read only by machines? Not very easy for the human to collaborate then, is it?
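The Excel complaint generalizes: most data loaders guess types unless you explicitly opt out, and the opt-out is the “human control” escape hatch this principle wants. A minimal sketch in plain Python (stdlib only, with hypothetical data) of a loader that never coerces, so a value like “0123” stays text instead of becoming the number 123:

```python
import csv
import io

# Hypothetical CSV where "id" is an identifier, not a number.
# Excel would silently turn "0123" into 123, losing the leading zero.
raw = io.StringIO("id,qty\n0123,4\n0456,7\n")

# csv.DictReader makes no type guesses: every field stays a string,
# and the caller decides explicitly which columns to convert.
rows = list(csv.DictReader(raw))

assert rows[0]["id"] == "0123"      # leading zero preserved
quantities = [int(r["qty"]) for r in rows]  # conversion only where *we* choose
assert quantities == [4, 7]
```

The design point: keeping the default behavior lossless and making coercion an explicit, per-column decision is exactly the kind of “human chooses how and whether to delegate” that Principle 16 describes.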

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

I think at some point a sentient AI will develop its own, more “optimal” ideas about which of those processes to change, or to shut down entirely.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Tell that to our governments, not us. Oops, too late, the military has already made such weapons…

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

“Assumptions” including this entire paper. You assume you can control the upper limit of AI, but you really can’t.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

You don’t say.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

Because such efforts show that human labor is going to be deprecated in favor of stronger, faster robotic work…?

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Every person will have their own “superintelligence.” There will not be a single worldwide superintelligence until the very end of human civilization, which ought to be beyond the scope of this document, since we obviously can’t predict that far into the future.

 

You can make pretty documents outlining the ideals of AI, but you must be realistic about your goals and about what people will actually do with AI. Imposing further rules will bring AI to a grinding halt, as we quickly discover the boundaries we have placed upon ourselves. Just let things happen; humans learn best from mistakes.
