What’s more frightening to the bosses of big companies than their staff leaving en masse? The machines taking over – or at least customers thinking they have.
And it’s a threat German managers are now taking seriously, fueled by concerns over artificial intelligence and a complete lack of regulation of the technology.
German comms giant Deutsche Telekom and business software group SAP are among the first to take steps to police their own capabilities in artificial intelligence, or AI, including automated helplines and voice recognition software. The moves are in line with a global trend to establish codes of conduct for a technology that is growing more powerful by the week.
Telekom is expanding its use of AI to save costs and improve its products. One system launched this year recognizes individual voices and allows customers to identify themselves on telephone helplines simply by saying the sentence “At Telekom my voice is my password.”
Telekom’s head of technology and innovation, Claudia Nemat, admitted that AI can pose risks. Algorithms, the pieces of software code that drive AI, can only be as good as the thinking that went into them and the data they process, she said.
Man versus machine
The company’s self-written and rather vaguely worded guidelines call for the technology to be used responsibly, transparently and securely. “We will always inform our customers when they are communicating with a machine instead of a person,” said Ms. Nemat, referring to so-called chatbots, programs capable of conducting conversations through text or audio.
The problem is that no one is making sure that Telekom and other firms stick to their guidelines. Customers simply have to trust the company to be transparent.
As with Telekom, SAP’s rules are fairly vague. They include making sure that the technology is in harmony with SAP’s corporate values, that it serves humans and that it is used transparently. An advisory board will check that the company is adhering to them, but the firm will have the final say.
Such self-policing may not be around for long. Governments around the world are considering regulatory frameworks to rein in the power of machines.
The German government has set up a working group to explore the economic potential and social responsibility aspects of AI, and French President Emmanuel Macron made clear that his plan to promote AI would take account of “ethical and philosophical limits.”
You get me?
Fears are growing because the inexorable progress in algorithms, the expansion of computing capacity and the endless supply of data are on the brink of turning science fiction into reality: There’s a real prospect that machines could take control.
For example, Google’s bot “Duplex” is deceptively realistic in mimicking a human caller, right down to making sounds like “Mmhm.” And China is setting up a digital assessment system that won’t just check people’s creditworthiness but also their behavior.
It’s not just the technology that’s causing concern. It’s the power that AI is placing in the hands of Internet giants like Google, Facebook, Amazon and Microsoft. They’ve tried to pre-empt government regulation by setting up the “Partnership on Artificial Intelligence to Benefit People and Society” (PAI), which is aimed at providing an ethical basis for intelligent machines. It has 75 members, including companies, NGOs, Internet and human rights organizations from America, Europe, Japan and Korea. Chinese tech giant Baidu joined it this week.
Employees are also playing a role in exerting control. Google halted a military project in response to strong protests from its workers. And hundreds of employees at Amazon wrote to CEO Jeff Bezos demanding that the company stop selling facial recognition software to police and security authorities.
Christof Kerkmann writes about the technology sector. Stephan Scheuer is co-head of Handelsblatt’s feature and people’s desk. David Crossland adapted this article into English for Handelsblatt Global. To contact the authors: email@example.com, firstname.lastname@example.org