A Machiavellian machine raises ethical questions about AI

29 November 2022 Technology & Digitalization

The writer is a science commentator

I remember my daughter’s first fib. She stood with her back to the living room wall, crayon in hand, trying to conceal an expansive scrawl. Her explanation was as creative as her handiwork: “Daddy do it.”

Deception is a milestone in cognitive development because it requires an understanding of how others might think and act. That ability is on display, to a limited extent, in Cicero, an artificial intelligence system designed to play Diplomacy, a game of wartime strategy in which players negotiate, make alliances, bluff, withhold information, and sometimes mislead. Cicero, developed by Meta and named after the famed Roman orator, pitted its artificial wits against human players online — and outperformed most of them.

The arrival of an AI that can play the game as competently as people, revealed last week in the journal Science, opens the door to more sophisticated human-AI interactions, such as better chatbots, and optimal problem-solving where compromise is essential. But, given that Cicero demonstrates AI can, if necessary, use underhand tactics to fulfil certain goals, the creation of a Machiavellian machine also raises the question of how much agency we should outsource to algorithms — and whether a similar technology should ever be employed in real-world diplomacy.

Last year, the EU commissioned a study into the use of AI in diplomacy and its likely impact on geopolitics. “We humans are not always good at conflict resolution,” says Huma Shah, an AI ethicist at Coventry University in the UK. “If AI could complement human negotiation and stop what’s happening in Ukraine, then why not?” 

Like chess, the game of Diplomacy can be played on a board or online. Up to seven players vie to control different European territories. In an initial round of actual diplomacy, players can strike alliances or agreements to hold their positions or move forces around, including to attack or to defend an ally.

The game is regarded as something of a grand challenge in AI because, in addition to strategy, players must be able to understand others’ motivations. There is both co-operation and competition, with betrayal a risk.

That means, unlike in chess or Go, communication with fellow players matters. Cicero, therefore, combines the strategic reasoning of traditional games with natural language processing. During a game, the AI works out how fellow players might behave in negotiations. Then, by generating appropriately worded messages, it persuades, cajoles or coerces other players into making partnerships or concessions to execute its own game plan. Meta scientists trained Cicero using online data from about 40,000 games, including 13mn in-game messages.
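To make that division of labour concrete, the sketch below is a deliberately simplified toy example in Python of how a strategic planner might hand a planned move to a message generator that phrases it as a negotiation proposal. It is purely illustrative: every class and function name is hypothetical, and it does not reflect Meta's actual architecture or code.

```python
# Illustrative sketch only: a toy loop coupling a "planner" with a "message
# generator" for a Diplomacy-style agent. All names here are hypothetical;
# this is not Cicero's code or API.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Intent:
    """A planned move plus the cooperation the agent hopes to secure."""
    power: str      # the player being addressed, e.g. "FRANCE"
    action: str     # the move the agent intends, e.g. "A PAR - BUR"
    support: str    # the support it wants in return, e.g. "A MAR S A PAR - BUR"

def plan_turn(board_state: dict) -> list[Intent]:
    """Stand-in for the strategic-reasoning module: choose moves and decide
    whose cooperation would make them succeed. A real planner would search
    over joint move sets and model how each player is likely to respond;
    here we simply return a fixed example."""
    return [Intent(power="FRANCE",
                   action="A PAR - BUR",
                   support="A MAR S A PAR - BUR")]

def draft_message(intent: Intent) -> str:
    """Stand-in for the language-generation module: turn a planned intent
    into a naturally worded, persuasive negotiation message."""
    return (f"{intent.power.title()}, I'm moving Paris to Burgundy this turn. "
            f"If Marseilles supports me ({intent.support}), we both come out ahead.")

if __name__ == "__main__":
    board = {}  # placeholder for the real game state
    for intent in plan_turn(board):
        print(draft_message(intent))
```

In Cicero itself both halves are far more sophisticated; the point here is only the hand-off the article describes, from a planned move to a persuasive message addressed to a specific player.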

After playing 82 people in 40 games in an anonymous online league, Cicero ranked in the top 10 per cent of participants who played more than one game. There were hiccups: it sometimes spat out contradictory messages about invasion plans, confusing participants. Still, only one opponent suspected Cicero might be a bot (all was revealed afterwards).

Professor David Leslie, an AI ethicist at Queen Mary University and at the Alan Turing Institute, both in London, describes Cicero as a “very technically adept Frankenstein”: an impressive stitching together of multiple technologies, but also a window into a troubling future. A 2018 UK parliamentary committee report had already advised that AI should never be vested with “the autonomous power to hurt, destroy or deceive human beings”.

Leslie’s first worry is anthropomorphic deception: when a person wrongly believes, as one opponent did, that there is another human behind the screen. That can pave the way for people to be manipulated by technology.

His second concern is AI equipped with cunning but lacking a sense of fundamental moral concepts, such as honesty, duty, rights and obligations. “A system is being endowed with the capacity to deceive but it is not operating in the moral life of our community,” Leslie says. “To state the obvious, an AI system is, at the basic level, amoral.” Cicero-like intelligence, he thinks, is best applied to tough scientific problems like weather analysis, not to sensitive geopolitical issues.

Interestingly, Cicero’s creators claim that its messages, filtered for toxic language, ended up being “largely honest and helpful” to other players, speculating that success may have arisen from proposing and explaining mutually beneficial moves. Perhaps, instead of marvelling at how well Cicero plays Diplomacy against humans, we should be despairing at how poorly humans play diplomacy in real life.