Artificial intelligence

Better Safe Than Quick

Tay, the chatbot that was given a Twitter account and went off the rails. Teaching a robot is like teaching an impressionable child, users say.
  • Why it matters

    Companies are racing to bring artificial-intelligence products to market, and they are not always putting safety first.

  • Facts


    • Artificial intelligence first boomed in the 1980s and now is expected to transform one industry after the next.
    • More and more companies are buying AI firms to gain talent and technology. Most recently, Apple bought Turi, and Intel bought Nervana.
    • Last year, companies bought 37 AI startups, and this year, there have been 29 takeovers so far.
Some people got a good laugh when a chatbot was given its own Twitter account – and was then transformed into a Holocaust-denying racist in 24 hours and swiftly taken offline.

The Microsoft chatbot, called Tay, is a piece of software that can converse with others without human involvement. The bot was equipped with artificial intelligence but was easily manipulated by Twitter users.

Teaching a robot works much like teaching an impressionable, unknowing child: artificial intelligence learns from the sum of its experiences, much as human beings do. The more often a subject is discussed, opinions are expressed, and certain phrasings are used, the more likely the software is to consider them normal and adopt them.

The case of the racist chatbot from earlier this year showed that artificial intelligence is still in its infancy.

That will come as no surprise to smartphone owners, who can see it in auto-correct technology, a simple form of artificial intelligence: if a word is frequently typed incorrectly and never corrected, the program eventually stops correcting it.
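The frequency effect described above can be illustrated with a toy sketch. This is not how any real auto-correct engine is implemented; the class, the word list, and the repetition threshold are all invented for illustration, assuming only that repeated, uncorrected input eventually shifts what the software treats as normal.

```python
from collections import Counter

class ToyAutocorrect:
    """Toy model of frequency-based learning: a misspelling seen
    often enough, and never corrected, is eventually accepted."""

    def __init__(self, dictionary, threshold=3):
        self.known = set(dictionary)   # words currently considered correct
        self.seen = Counter()          # how often each unknown word appeared
        self.threshold = threshold     # repetitions before a word is accepted

    def observe(self, word):
        """Record one sighting of a word the user typed and kept."""
        if word not in self.known:
            self.seen[word] += 1
            if self.seen[word] >= self.threshold:
                # the "error" has become normal through sheer repetition
                self.known.add(word)

    def would_correct(self, word):
        return word not in self.known

ac = ToyAutocorrect({"hello", "world"})
for _ in range(3):
    ac.observe("helo")               # the same typo, repeated, never fixed
print(ac.would_correct("helo"))      # False: the program no longer flags it
```

The same dynamic, scaled up, is what let Twitter users steer Tay: the system had no notion of which repeated inputs were worth learning from.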

There are already more refined, more intelligent programs than Tay, but none of them is infallible. For software applications such as the evaluation of user data, this is comparatively harmless; the moment artificial intelligence actually has to control something, it becomes critical.

Industrial companies are extremely optimistic that their machinery can now do more than perform just a handful of dull tasks. Machines no longer just adapt products to individual customers' wishes; they are now designed to work hand in hand with workers on the factory floor.

The automobile industry would like its vehicles to be piloted by artificial intelligence.

Other manufacturers hope for a breakthrough in the internet of things, for example when heating or home lighting can be controlled intelligently.

What these ideas have in common is a direct, physical influence on people. A mistake in a car, a careless robot, a cold, dark house in winter: all of that can harm the user, not digitally, but in real life.

So companies would do well to concentrate on offering the best possible product, especially in these areas of application, even if others are quicker to market. The principle of trial and error does not apply here, no matter how often it is emphasized that companies cannot afford to be left behind by competitors.

Here, the best product is the safest product, and that is what will prevail.

