This year’s theme is “Love Out Loud,” but what people are really talking about is hate and fake news. Such was the mood as entrepreneurs, politicians and experts gathered in Berlin this week for the annual Re:Publica conference, one of the world’s leading summits on digital society.
The topic is particularly relevant in Germany, where the government has introduced legislation to penalize social networks like Facebook and Twitter with fines of up to €50 million, or $53 million, for failing to remove hate speech or fake news.
“You can only counter falsehoods with facts,” said Miriam Mogge of Factfox, a start-up whose software automatically suggests responses that social-media managers can use to answer comments spreading falsehoods. Bavarian public broadcaster Bayerischer Rundfunk is already using the software, and Ms. Mogge says the business, founded this year, is already in talks with other editorial teams.
Comments on social-media platforms are just one part of the problem. Fake news encompasses a whole range of issues, according to Jillian York, an activist at the Electronic Frontier Foundation, an organization focused on basic rights in the digital era. She says the “black and white” framing of the fake news debate doesn’t match reality. “Fake news varies from state propaganda to pages that earn their money from mouse clicks,” Ms. York said.
Businesses face a further challenge from machines spreading targeted disinformation. “In confusing news situations, such as a terror attack, disinformation can have a major effect,” explained Tabea Wilke, founder of botswatch, an initiative that takes aim at opinion bots: automated accounts that hide behind profiles in social networks, publish posts and make contact with other users. A single person can manage thousands of these bots and exert substantial influence on discussions, Ms. Wilke warned.
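Botswatch has not published its detection methods, but the general idea behind spotting automated accounts can be illustrated with a simple heuristic: bots tend to post at volumes and with a clockwork regularity that human users rarely sustain. The thresholds, account data and function name below are invented for this sketch.

```python
# Illustrative sketch only: botswatch's actual methods are not public.
# Heuristic: flag accounts that post too often or at machine-like,
# evenly spaced intervals. All numbers here are invented examples.
from statistics import pstdev

def looks_automated(post_times, max_daily=50, min_jitter=5.0):
    """Flag an account whose posts are too frequent or too evenly spaced.

    post_times: posting timestamps in seconds, sorted ascending.
    """
    if len(post_times) < 2:
        return False
    span_days = (post_times[-1] - post_times[0]) / 86400 or 1 / 86400
    rate = len(post_times) / span_days           # posts per day
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    jitter = pstdev(gaps)                        # near-zero = clockwork posting
    return rate > max_daily or jitter < min_jitter

# A bot posting exactly every 60 seconds is flagged; a sporadic human is not.
bot = [i * 60 for i in range(100)]
human = [0, 3600, 5000, 40000, 86400, 90000]
print(looks_automated(bot), looks_automated(human))  # True False
```

Real detection systems combine many more signals, such as account age, network structure and content similarity; a single rate threshold is easy to evade.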
The power these bots have – and who controls them – is what has German politicians particularly worried after the shock votes that delivered Brexit and Donald Trump’s election as US president. In both cases, fake news and propaganda are widely believed to have influenced voters’ decisions.
As Germany heads into a federal election in September, policymakers are working on legislation that would force social networks like Facebook and Twitter to tackle the problem head on. “This law sets out binding standards for the way operators of social networks deal with complaints and obliges them to delete criminal content,” Justice Minister Heiko Maas said when he announced the planned legislation in March. Germany already has some of the world’s toughest hate speech laws, but Chancellor Angela Merkel’s lawmakers are scrambling to combat the issue in the digital landscape before the country goes to the polls.
French news agency AFP reported earlier this year that a Russian disinformation campaign was focusing its attacks on Ms. Merkel, most of them spreading false information about refugees. Her decision to let more than 1 million migrants into Germany has proven a thorn in her side, and the far-right, Russia-friendly Alternative for Germany (AfD) party has hammered the issue in its campaigns for seats in state legislatures.
Germany’s refugee influx of 2015 and 2016 also spurred hate speech in online forums. The European Research Center for Information Systems (ERCIS) took notice when the problem began stifling debate: many German media outlets shut down their comment sections at the height of the refugee debate. “The whole discussion moved into social networks – depriving the media of an opportunity to influence the discussion, not to mention the loss of traffic and resulting revenues on their websites,” said Sebastian Köffer of ERCIS.
The ERCIS researchers turned the refugee crisis into training material for their “Cyberhate Mining” project. Using automated text scanning, Mr. Köffer and his colleagues fed an algorithm 375,000 comments on 20,000 articles from 14 different news sites to teach it to identify hate. While the algorithm works, its human handlers still have plenty to do. “To teach the algorithm, we need people to evaluate the comments – and it’s not easy to get objective algorithms,” said Dennis Riehle, another researcher on the project. ERCIS hopes the algorithm can eventually support the people who have to weed hateful speech out of comment sections.
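ERCIS has not published its algorithm, but the approach the researchers describe – humans label comments, a program learns word statistics from those labels – can be sketched with a toy naive Bayes text classifier. The training comments, labels and class names below are invented for illustration; a real system needs hundreds of thousands of labelled examples, exactly the bottleneck Mr. Riehle describes.

```python
# Toy sketch of supervised comment classification, not ERCIS's pipeline.
# Humans supply labelled comments; the model learns per-class word
# log-probabilities (naive Bayes with add-one smoothing).
import math
from collections import Counter

def train(labelled_comments):
    """Learn word statistics from (text, label) pairs."""
    counts = {"hate": Counter(), "ok": Counter()}
    class_totals = Counter()
    for text, label in labelled_comments:
        counts[label].update(text.lower().split())
        class_totals[label] += 1
    vocab = set(counts["hate"]) | set(counts["ok"])
    total_docs = sum(class_totals.values())
    model = {"priors": {}, "likelihoods": {}}
    for label in counts:
        model["priors"][label] = math.log(class_totals[label] / total_docs)
        denom = sum(counts[label].values()) + len(vocab)  # add-one smoothing
        model["likelihoods"][label] = {
            w: math.log((counts[label][w] + 1) / denom) for w in vocab
        }
    return model

def classify(model, text):
    """Return the label with the highest posterior log-probability."""
    scores = {}
    for label, prior in model["priors"].items():
        scores[label] = prior + sum(
            model["likelihoods"][label].get(w, 0.0)  # unseen words ignored
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

# Invented toy training data; real projects need vastly more labels.
training = [
    ("refugees deserve help and safety", "ok"),
    ("we should welcome people in need", "ok"),
    ("throw them all out of the country", "hate"),
    ("they are criminals get rid of them", "hate"),
]
model = train(training)
print(classify(model, "welcome people who need safety"))  # ok
```

The hard part, as the researchers note, is not the statistics but producing consistent human labels: the model can only be as objective as the annotations it learns from.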
ERCIS is just one of many organizations using algorithms to battle the problem, but Ms. York says this raises another issue. “There is a lot of talk about using artificial intelligence or algorithms to find and evaluate dangerous content, but there is a lack of experts involved in the process,” she said.
That lack of experts is pushing Facebook’s efforts against fake news into human hands. In Germany, users can flag articles as fake; flagged stories are forwarded to Correctiv, a journalist-led initiative that fact-checks them and marks disputed stories so they get less reach. Facebook has also said it plans to penalize distributors of fake news.
Despite the crowds converging on Berlin this week, there is still debate about the scope of the problem. “There are still not enough meaningful studies about the effects and the actual distribution of fake news,” said Astrid Carolus, a media psychologist. At Re:Publica, which ends Wednesday, nobody sees that as a reason for inaction.
Johannes Steger covers companies and markets for Handelsblatt from Düsseldorf. Sabine Devins is an editor with Handelsblatt Global in Berlin. To contact the authors: firstname.lastname@example.org and email@example.com.