But not all AI is easy to police, with some varieties more traceable than others, says Nils Lenke, board member of the German Research Center for Artificial Intelligence (DFKI), the world's largest AI centre.
Unlike traditional rules-based AI, which provides “intelligent” answers based on an ability to crunch mathematical formulae, learning algorithms based on neural networks can be opaque and impossible to reverse engineer, he explains.
As executive director of Seed Vault, a fledgling not-for-profit platform launched to authenticate bots and build trust in AI, Nathan Shedroff thinks transparency is a starting point.
“Science fiction has for millennia anticipated the conversational bot, but what it didn’t foresee were surrounding issues of trust, advertising and privacy,” he says.
More work is needed to create AI accountability, however, says Bertrand Liard, partner at global law firm White & Case, who predicts proving liability will get more difficult as technology advances faster than the law.
At what point, he asks, does a factory worker lose capacity because of a co-bot, a collaborative robot working alongside humans?
A cause for even greater concern might be chatbots fronting AI applications capable of interpreting emotions.
Nathan Shedroff, an academic at the California College of the Arts, warns the conversational user interface can be used to harvest such "affective data", mining facial expressions or voice intonation for emotional insight.
“There are research groups in the US that claim to be able to diagnose mental illness by analysing 45 seconds of video.
Who owns that data and what becomes of it has entered the realm of science fiction,” he says.