Start spreading the news


A social data science expert explains how to tackle the grisly problem of fake news.

Everybody’s discussing the problem, when we should be focusing on the solution instead. The bottom line is: the tools that cause the uncontrolled spread of fake news are also the ones that can solve it.

Alessandro Bozzon, TU Delft

Alessandro Bozzon, social data science expert at TU Delft, has something he wishes to add to the trending fake news discussion. On the morning of our interview, I present him with some of my notes on the topic: simple stuff like bots on Twitter, but also downright daunting developments like biopsychological social profiling. My research almost made me delete all the social media apps on my phone. I read several articles on doom scenarios from renowned (real) news sources, and almost all the articles I could find read like the synopsis of a movie script. Alessandro starts to smile. “Have you seen the movie ‘Citizen Kane’? Fake news is nothing new. It’s only happening at a bigger scale.”

So there's no problem?

Your account of the problem is factually correct. There is a problem, and the good thing is that it has a label people can recognise. There are indeed people out there with malicious intent to reach a given goal, and they use ‘our’ technical tools to achieve it. The problem does not only have a social component. It’s not just word of mouth, like gossip or urban myths. It has a socio-technical label. The good thing about a problem with this sort of label is that you might not have to focus too much on the technical issues in order to validate it. The solution, at least for fake news, is actually right in front of us.

And what's the solution?

The fake news epidemic is happening in an information system in which people are involved, not just machines. But while in the past people were only consumers or producers of data, I believe it is time for them to take an active role in such information systems, to counteract the possible issues with a controlled combination of machine and human intelligence.
Of course, by doing that we need to account for the inefficiencies, biases and every other characteristic that people bring. So we need to develop new classes of computational systems that are able to put machine and human intelligence together. That is my research line. You could apply it in many domains where information systems take centre stage, to a variety of applications: from crowd sensing in smart cities to online news verification.

So a need for human intervention? That's something you don't often hear from a computer scientist.

My personal topic is crowd computing. I advocate that there will always be classes of computer science problems that require human intelligence to have an active role within computational processes. And if you want these problems to be solved at scale, then you need to understand how to involve people at scale. Which is a computational problem in itself. But instead of dealing only with electronic computers, you are also dealing with human computers.

Facebook is getting a lot of criticism for exactly that: putting people (i.e. editors) in the mix who decide what is real news and what isn't.

Facebook applies crowd computing techniques to deal with terrorist content that tries to reach a wider audience on the platform. It employs over 4,000 community operators to review content whenever artificial intelligence is not able to filter out undesirable material. With the problem of fake news, Facebook seems to adopt a different tactic, in which controversial content is delegated to an external editorial panel. That way, it can take up to two weeks for a piece of news to be verified. But it is verified, so in that sense the criticism towards Facebook is invalid. However, Facebook essentially says: “We just distribute news, and if there’s something that our system flags as a potential problem, we ship it to an authoritative third party. It’s not our problem. We don’t have the capability to do it, so we leave it to someone else.” I understand Facebook’s angle. They are handling the problem a little bit, but they prefer not to react themselves and send it to another party instead. That takes some of the pressure off Facebook, but I don’t think it is the best they could do. One could argue that fake news is less of a problem than the distribution of terrorist content. But we know that fake news distributed for weeks can lead to serious issues.

So there is something better than Facebook - or any other social network - can do, while remaining an impartial news distributor?

Yes, absolutely. It’s actually what I’ve been doing since 2011 with my crowd computing research: the idea of using social networks to reach a broader audience of potential contributors and thus to distribute micro-work. The system that allows people with malicious intent to profile users and subject them to fake news is the same system you can use to detect and judge that fake news.

We know that Facebook can be used to actively crowdsource. For instance, when tragic events happen somewhere in the world, it asks people in the area (or connected to people in the area) to report on their status. You can use the same technique to ask pre-selected users to pass judgement on a piece of news. There’s a lot of knowledge latent in the crowd. A lot of people can tell if a news piece smells fishy. Therefore you could rely on the reason of the crowd for verification purposes.
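As a toy illustration of how such crowd judgements might be combined, here is a minimal sketch in Python. The vote labels, thresholds and function name are my own assumptions for the example, not part of Bozzon's actual research or any platform's API:

```python
from collections import Counter

def aggregate_judgements(votes, min_votes=5, min_agreement=0.7):
    """Combine crowd votes ('real' / 'fake' / 'unsure') into a verdict.

    Returns 'real' or 'fake' when enough confident voters agree,
    otherwise 'undecided'.
    """
    # Discard 'unsure' votes; they carry no signal either way.
    confident = [v for v in votes if v != "unsure"]
    if len(confident) < min_votes:
        return "undecided"
    # Majority label among confident voters, plus an agreement threshold.
    label, count = Counter(confident).most_common(1)[0]
    if count / len(confident) >= min_agreement:
        return label
    return "undecided"
```

An 'undecided' verdict would then trigger the next step, such as polling more users or escalating to an editor.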

How could this work in practice?

I’ll give you a simplification of a complex technical solution. Imagine that a social network has a stream of news items coming in. That stream first goes through an automatic system – a set of machine learning algorithms that decide whether items can already be labelled as ‘good’ or not, by exploiting non-ambiguous properties (signals) of the content: for instance, its origin (blacklisted websites are easy to detect) or language characteristics (such as excessive use of uppercase letters and punctuation). If the system gives the news a pass (or a non-pass) with high confidence, it will be published (or, respectively, not published). But the system might not have high confidence in its outcome; it might struggle with complex argumentation, satire or sarcasm. This is when a new class of crowd computing algorithms could be brought into play to smartly select the right people in the online community to have a look at a particular piece of news. The system knows their biases, political beliefs, language skills, educational level, etc., and can thus guarantee that all points of view are represented. They get a message on the respective social network: ‘Please help us to assess whether this is real news: yes/no/unsure.’ Based on this poll, the system can decide, with another set of crowd computing algorithms, whether the news will be published. If there’s still any doubt, it can be sent to an authoritative party, like the one currently in use. But you try to tackle as much of the volume as you can by relying upon a social network’s most important asset: people.
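The triage Bozzon describes could be sketched roughly as follows. Everything here is a hypothetical simplification: the function names, thresholds and plugged-in stages (classifier, reviewer selection, poll, escalation) are illustrative stand-ins, not an actual platform implementation:

```python
def triage(item, classify, select_reviewers, poll, escalate,
           hi=0.95, lo=0.05):
    """Route a news item through machine, crowd and expert stages.

    classify(item)         -> probability the item is genuine (assumed model)
    select_reviewers(item) -> users chosen to cover diverse viewpoints
    poll(users, item)      -> list of 'real' / 'fake' / 'unsure' votes
    escalate(item)         -> verdict from an authoritative third party
    """
    # Stage 1: cheap automatic signals (origin, language style, ...).
    p = classify(item)
    if p >= hi:
        return "publish"
    if p <= lo:
        return "reject"
    # Stage 2: the machine is unsure, so poll a balanced crowd.
    votes = poll(select_reviewers(item), item)
    confident = [v for v in votes if v != "unsure"]
    if confident:
        share_real = confident.count("real") / len(confident)
        if share_real >= 0.7:
            return "publish"
        if share_real <= 0.3:
            return "reject"
    # Stage 3: remaining doubt goes to the authoritative party.
    return escalate(item)
```

The point of this design is that the expensive stages (crowd, then experts) only ever see the items the cheaper stages could not settle.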

We should rely upon a social network’s most important asset: people

But will these people cooperate?

If the right mix of incentives is applied, I think they will. And I’m not referring only to monetary incentives; that might not guarantee a scalable solution. Even Facebook has a budget for “community operation” employees. But other incentives like meaning, empowerment and social influence could play a role. Social media allow people to organise themselves in online communities. But being in a community also brings responsibilities, like in the real world. Facebook already has the tools required to support and promote this sort of responsible behaviour.

Why are you so passionate about crowd computing?

I’d like to see a society where computational thinking and computer systems are at the service of people. I’m a computer scientist, so I like the idea of computers taking over tedious tasks. However, I’d like this to be for the good of humanity. At some point I understood that, for this to happen, you need to involve people within information systems, and not treat them only as producers or consumers of content. And I am not only referring to the ability to program computers. Many of the information systems that run our society nowadays are just a handful of lines of code, and machine learning does the rest. No, if you want to computerise society in a fair way, and you acknowledge that the solution to certain problems must involve human intelligence, then you should study how people and machines can perform computation together.

We have to look at computer science with a positive attitude, to solve the problems that new computer technologies might bring. Because many technologies, like social media, are here to stay.


A week after our interview, Alessandro brought to my attention a new initiative by the creators of Wikipedia: Wikitribune. “The news is broken and we can fix it.” It sounds very much in line with Alessandro’s take on fake news. They’d also like to rely on the reason of the community to help verify news.


Text: Marieke Roggeveen | Photo: Mark Prins