Regulating artificial intelligence is a 4D challenge

25 May 2023 Technology & Digitalization

The writer is founder of Sifted, an FT-backed site about European start-ups

The leaders of the G7 nations addressed plenty of global concerns over sake-steamed Nomi oysters in Hiroshima last weekend: war in Ukraine, economic resilience, clean energy and food security among others. But they also threw one extra item into their parting swag bag of good intentions: the promotion of inclusive and trustworthy artificial intelligence. 

While recognising AI’s innovative potential, the leaders worried about the damage it might cause to public safety and human rights. Launching the Hiroshima AI process, the G7 commissioned a working group to analyse the impact of generative AI models, such as ChatGPT, and prime the leaders’ discussions by the end of this year.

The initial challenges will be how best to define AI, categorise its dangers and frame an appropriate response. Is regulation best left to existing national agencies? Or is the technology so consequential that it demands new international institutions? Do we need the modern-day equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and deter its military use?

One can debate how effectively the UN body has fulfilled that mission. Besides, nuclear technology involves radioactive material and massive infrastructure that is physically easy to spot. AI, by contrast, is comparatively cheap, invisible and pervasive, and its use cases are all but limitless. At the very least, it presents a four-dimensional challenge that must be addressed in more flexible ways.

The first dimension is discrimination. Machine learning systems are designed to discriminate, to spot outliers in patterns. That’s good for spotting cancerous cells in radiology scans. But it’s bad if black box systems trained on flawed data sets are used to hire and fire workers or authorise bank loans. Bias in, bias out, as they say. Banning these systems in unacceptably high-risk areas, as the EU’s forthcoming AI Act proposes, is one strict, precautionary approach. Creating independent, expert auditors might be a more adaptable way to go.
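
To see the mechanism behind "bias in, bias out", consider a minimal sketch, assuming a purely hypothetical hiring data set (my illustration, not any real system): a toy model fitted to historically skewed decisions simply learns the skew.

```python
# A toy demonstration of "bias in, bias out" (hypothetical data).
import random

random.seed(0)

# Historical decisions: past managers hired group "A" candidates at a
# lower skill bar (0.4) than group "B" candidates (0.7).
data = []
for _ in range(1000):
    skill = random.random()
    group = random.choice(["A", "B"])
    bar = 0.4 if group == "A" else 0.7  # the flawed human labels
    data.append((skill, group, skill > bar))

# "Train" a naive per-group threshold model on those labels.
def learned_bar(g):
    hired = [s for s, grp, h in data if grp == g and h]
    return min(hired)  # lowest skill ever hired in group g

for g in ("A", "B"):
    print(g, round(learned_bar(g), 2))
# Prints roughly 0.4 for A and 0.7 for B: the model has encoded the
# historical prejudice, not the candidates' actual ability.
```

The gap between those two learned thresholds is exactly what an independent auditor would be looking for.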

Second, disinformation. As the AI researcher Gary Marcus warned the US Congress last week, generative AI might endanger democracy itself. Such models can generate plausible lies and counterfeit humans at lightning speed and industrial scale.

The onus should be on the technology companies themselves to watermark content and minimise disinformation, much as they suppressed email spam. Failure to do so will only amplify calls for more drastic intervention. The precedent may have been set in China, where a draft law places responsibility for misuse of AI models on the producer rather than the user. 
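
As a rough illustration of what watermarking can mean in practice, here is a toy sketch of one published idea, a pseudo-random "green list" of tokens; this is my simplified assumption of the scheme, not any company's actual method. A generator that favours green continuations leaves a statistical signature a detector can count.

```python
# A toy "green list" watermark detector (illustrative scheme only).
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically assign ~half the vocabulary to the green list,
    # seeded by the preceding token.
    h = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text: str) -> float:
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(a, b) for a, b in pairs)
    return hits / max(len(pairs), 1)

# Ordinary human text scores near 0.5 on average; a generator that
# always picks green continuations scores near 1.0, which a detector
# can flag with statistical confidence on long passages.
print(round(green_fraction("the quick brown fox jumps over the lazy dog"), 2))
```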

Third, dislocation. No one can accurately forecast what overall economic impact AI is going to have. But it seems pretty certain that it will lead to the “deprofessionalisation” of swaths of white-collar jobs, as the entrepreneur Vivienne Ming told the FT Weekend festival in Washington, DC.

Computer programmers have broadly embraced generative AI as a productivity-enhancing tool. By contrast, striking Hollywood scriptwriters may be the first of many trades to fear their core skills will be automated. This messy story defies simple solutions. Nations will have to adjust to the societal challenges in their own ways.

Fourth, devastation. Incorporating AI into lethal autonomous weapons systems (LAWS), or killer robots, is a terrifying prospect. The principle that humans should always remain in the decision-making loop can only be established and enforced through international treaties. The same applies to discussions about artificial general intelligence, the (possibly fictional) day when AI surpasses human intelligence in every domain. Some campaigners dismiss this scenario as a distracting fantasy. But it is surely worth heeding those experts who warn of potential existential risks and call for international research collaboration.

Others may argue that trying to regulate AI is as futile as praying for the sun not to set. Laws only ever evolve incrementally whereas AI is developing exponentially. But Marcus says he was heartened by the bipartisan consensus for action in the US Congress. Fearful perhaps that EU regulators might establish global norms for AI, as they did five years ago with data protection, US tech companies are also publicly backing regulation. 

G7 leaders should encourage a competition for good ideas. Their job now is to trigger a regulatory race to the top, rather than preside over a scary slide to the bottom.