AI Ethics: How Diverging Global Strategies Open a Gaping Regulatory Void
TL;DR
- Europe, China, and the U.S. are developing wildly different ethical strategies to grapple with AI advancements
- Europe could be a leader in AI regulation, but lacks the technology ecosystem and influence to have an outsized impact
- Both the U.S. and China are unlikely to take serious steps to control AI’s rapid advancement in the near term
- The U.S. will be guided by private institutions and self-regulation, while China will opt for state-sponsored control
Alexa awakens and laughs unprovoked. Autonomous vehicles hit and kill pedestrians. Smarter machines, loaded with an ever-increasing array of sensors, drive ever-larger layoffs. Big technology companies hoard private data to feed AI programs.
The advancement of artificial intelligence has rapidly accelerated over the past decade. Engineering breakthroughs and algorithm improvements are pushing the boundaries of innovation. We are building better AI tools faster than ever.
But as AI becomes more powerful, what checks and balances need to emerge? National AI strategies are at the frontline of increasingly important ethical dilemmas.
Today, however, global initiatives on AI are a series of regulatory and ethical gambles—a dangerous, potentially existential game.
Ethical headwinds in the face of an AI onslaught
Across the world, AI policies are diverging along three key dimensions:
- Ethics: Should ethical guidelines be strong or weak?
- Regulation: Who should regulate AI ethics, tech companies or nation states?
- Influence: How influential will a nation state’s policies be on the global stage?
The world’s major AI players have defined their policies based on a unique combination of answers to these questions.
The U.S. and China have the most advanced AI ecosystems and exert the most global influence. But the two AI superpowers diverge on the degree of state-sponsored regulation and the strength of ethical guidelines. Europe, in contrast, is likely to architect strong ethical guidelines but lacks global clout.
So where do these diverging strategies and priorities leave us—on a path to a dystopian future dominated by Terminator-like rogue AI?
China and the U.S. grow their influence, but diverge on state regulation
The world’s two largest AI superpowers, China and the U.S., are only tentatively exploring AI ethics.
The Chinese government appointed Chen Xiaoping to establish the ethics committee for the country’s only state-level AI body, the Chinese Association for Artificial Intelligence.
Chen is the inventor of Jia Jia, a humanoid robot that has ominously been dubbed a "robot goddess."
If China’s work in cutting-edge biomedical research is any example (a Chinese researcher recently brought gene-edited twins into the world), China is poised to become the Wild West of AI.
And with venture capital even more eager to fund AI projects in China, the country seems destined to probe the extremes of AI research.
Whatever state-sponsored regulations China enforces will probably be far different from the consumer-first and privacy-driven laws of the U.S. and Europe.
U.S. policies on AI have largely been delegated to private institutions and universities.
Large technology companies, research institutions, and a handful of nonprofits are grappling with the biggest questions surrounding AI ethics. Harvard, MIT, NYU, and others have tried to fill a noticeable void in public policy discussions.
Silicon Valley is keen to stay out of the regulatory spotlight, but that doesn’t mean big tech companies will self-police in order to avert the disastrous Skynet scenario. The business stakes and dollar opportunities are simply too tempting.
Europe has a promising start, but the burden of global AI policy is too big
As the U.S. and China plunge into the unknown, Europe is actively embracing the challenges surrounding AI ethics. At first glance, Europe offers a glimmer of hope.
The European Commission recently established the High-Level Expert Group on Artificial Intelligence, a group of representatives from academia, civil society, and industry working to set Europe’s AI policy. It is tasked with recommending ethical guidelines.
Angela Merkel argues that "in the U.S., control over personal data is privatised to a large extent. In China the opposite is true: the state has mounted a takeover," adding that it is between these two poles that Europe will find its place.
Many hope that Europe can replicate its GDPR playbook on data privacy to shape global AI policy.
"Europe could become the leader in AI governance," says Kate Crawford, co-founder of the AI Now Institute, a research center at New York University.
But the reality is far less optimistic. What Europe has in leadership, it lacks in a technology ecosystem and global influence. And unlike the privacy rules of GDPR, strict AI policy may simply push AI development toward more open, less restrictive countries.
Jack Clark of OpenAI, a non-profit AI research organization, has little hope for Europe: "it will screw this up, just as it has done with cloud computing."
A leaderless reality
Without a clear ethical beacon and leadership, the future is uncertain.
We are all players in a global game of regulatory brinkmanship among AI superpowers. Reluctant to prematurely restrict AI, the U.S. and China are unlikely to take serious steps to control its rapid advancement.
In the U.S., big tech companies will largely be left to police themselves. In the best-case scenario, they will choose to abide by guidelines developed by third-party organizations. In the worst case, the overwhelming profit motive may lead to extreme and dangerous experiments.
China has far greater state control but seems unlikely to make ethics a priority over advancement.
Europe will prioritize ethics, but may prove ineffectual in shaping global outcomes.
In the end, in a world of diverging policies, strategies, and roadmaps for the future of AI, the world’s superpowers may ultimately prove powerless to prevent a dystopian future for mankind.