On Friday 3 July 2020 I interviewed José van Dijck, after attending her talk “Connected against COVID-19, how networks spread and combat pandemics”. José van Dijck is a University Professor of Media and Digital Society at Utrecht University. Her research concerns media, social media and media technologies.
On Friday, July 3, I sat at my desk and waited for the clock to read 13:00. José van Dijck and I had agreed to a call for an interview. Since March, almost everything has been done from my living room, including this interview. The Tuesday before, I had attended a KNAW webinar on the topic “Connected against COVID-19, how networks spread and combat pandemics”, at which José van Dijck was one of the four speakers.
What was the purpose of the webinar?
Frank den Hollander asked me because the webinar was about networks, and he said “that includes social networks as well”. In the previous months I had been researching the effect of social media on the public debate just after the Corona outbreak. My take on the subject was very different from that of the other speakers: they focused on the spreading of infections due to human behavior, while I focused on the spreading of information about the virus through people and technology. But that's the nice thing about such a webinar. Although we knew little about each other's research, I realized during the webinar that there is room for cross-contamination of ideas, to stay with the metaphor: ideas about how complex networks work. For us in the social sciences and humanities, it is incredibly difficult to understand how complex systems work in which technology as well as users and ideas play an important role. I find this relationship between technology, users and ideas in the distribution of information about the virus very interesting.
Is a webinar different from a presentation on location?
Yes, somewhat; you can improvise less. Normally I can respond very quickly to the other speakers and the audience, but here you have to pay attention to too many things at once, so you have to prepare very thoroughly.
Your research is about media, social media and media technologies. What are the central questions in your research?
Last Tuesday I discussed the role social media played in the public debate immediately after the Covid-19 outbreak in the Netherlands. Social media were used in the dissemination of urgent medical information, but were also instrumental in the dissemination of misinformation. I see social media as a double-edged sword.
Take as an example how, during the pandemic, the WHO and the RIVM very consciously used social media to convey certain health information to the widest possible audience, for example by means of influencers on YouTube. At the same time, Facebook, Twitter and YouTube in particular have been instrumental in the past months in spreading false information about the virus. The WHO has called this an infodemic.
My further research examines how administrators, politicians and journalists at news organizations provide a counterweight when they observe that social media occupy such a central position in the spread of misinformation. For example, in the NOS program “Corona, Feiten en Fabels”, the presenters took questions from social media channels and presented them to experts. The experts provide that counterweight by calmly saying something sensible about them, and that works well as a correction to the unfiltered flow of information.
Do you think that at some point too much information can have unwanted effects?
Yes, that is exactly why the WHO has chosen the term infodemic. We also call it information obesity. It means that too much information is confusing, especially if the information is not specific enough and the sources contradict each other.
As a user of Twitter and Facebook, I mainly experience two aspects of social media: firstly informing and secondly discussing. My impression is that most users “exchange information locally”, but that there is also a small fraction of superspreaders with a very long reach. Is this impression correct?
Indeed, there are two models of diffusion. The first is the superspreading model: a small number of users with a large number of followers, the superspreaders, whom the social medium has given the power to reach a lot of people. In the second model, by contrast, groups of people have dispersal power only over their own manageable environment. It is not the case that the latter distribution method is less harmful, because it too can have a very large impact. For example, a WhatsApp group can have a maximum of 256 members, and one message can be forwarded by the same person up to five times. Even so, through a chain reaction you can still reach a lot of people.
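The chain reaction van Dijck describes can be illustrated with a back-of-the-envelope calculation. This sketch assumes, purely hypothetically, a worst case in which every message lands in full groups of 256 members and each recipient forwards it to five such groups; real forwarding behavior and group sizes will of course be far lower.

```python
# Back-of-the-envelope reach of the "WhatsApp model" described above.
# Assumptions (hypothetical, for illustration only): every group is at
# the 256-member maximum, and every recipient forwards the message the
# maximum of five times, each time to a full group.
GROUP_SIZE = 256    # maximum members per WhatsApp group
MAX_FORWARDS = 5    # maximum times one person may forward a message

def potential_reach(hops: int) -> int:
    """Upper bound on the number of people reached after `hops`
    rounds of forwarding, starting from one full group."""
    reach = GROUP_SIZE      # members of the original group
    senders = GROUP_SIZE    # people who saw the message this round
    for _ in range(hops):
        new_readers = senders * MAX_FORWARDS * GROUP_SIZE
        reach += new_readers
        senders = new_readers
    return reach

# Even a single round of forwarding already yields an upper bound of
# 256 + 256 * 5 * 256 = 327,936 people.
print(potential_reach(1))
```

The point of the exercise is only that the bound grows geometrically with each hop, which is why even the "small environments" model can have a very large impact.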
It's interesting to look at these two models of distribution, the superspreaders versus the small environments. Platforms such as Facebook, YouTube and Twitter determine for themselves what is and is not allowed within such a context, and they can use both the superspreading model and the WhatsApp model; after all, Instagram and WhatsApp are also owned by Facebook. These platforms thus have the power to adjust the number of distributors and their reach, precisely to counter this effect of rapid spread. In addition, misinformation can be labeled as such and linked directly to correct information from the WHO or the RIVM, which is what Facebook, YouTube and Twitter did during the pandemic. So they have the means to slow down the spread of misinformation and to intervene when necessary.
It is important to mention that, in addition to the power to intervene, these companies also have their own incentive to spread misinformation, which is of course economic: they make money from the mass distribution of information and advertisements. So it is against their own interest to stop all kinds of “viral” messages from spreading, and that makes the reality more complex.
The second aspect of social media is discussion, but unlike the distribution of information, online platforms seem less suitable for this.
I always say: social media are especially useful for one-way information traffic but thoroughly unsuitable for dialogue. This is partly due to their technological affordances, i.e. the way these media have been designed, for example in their interfaces. To begin with, you can communicate on social media with only very few characters; messages are short and concise and rely mainly on images.
Emotions do much better on social media than arguments. Emotions have primacy on social media, while rational discussions, arguments and nuance come into their own much better in other media forms, in newspapers for example. Expressions that you have to think about, that require reflection and that invite dialogue barely feature on social media. That's why we say social media are almost inherently conducive to polarization and flaming, heated discussions.
Social media can turn people against each other, but can also encourage people to take joint action, to form a movement. They can also have a very positive function; see, for example, how during the pandemic nurses, patients and doctors used home-made videos to share an emotional message via YouTube: a message about how they experienced the situation, for example, or about the enormous effort involved in caring for patients. We should not underestimate this positive side of social media; it can offer a lot of comfort and bring people into contact with each other.
During the Corona crisis, was science also dealing with social media as a double-edged sword?
Absolutely. Science is used to dealing with uncertainty within its own circle, especially when you are still researching something unknown, such as a virus. I think a good example is the arXiv platform, where you can find papers that are not yet finished. arXiv aims to facilitate exactly this: you get comments on your research faster because more people have access to it there. An excellent idea, because you reach more scientists faster who can join the discussion. What you see happening now is that non-scientists use arXiv to pluck out unfinished articles and use them to support some weird conspiracy theory. This is another example of how social media can contribute to polarization.
And what can we do about hate speech and conspiracy theories?
Firstly, tech companies can do something about the distribution channels, namely organize and adjust them in such a way that hate speech is much less likely to spread massively and at tremendous speed. But users must also take responsibility themselves. The only opportunity I see here is to teach children a critical attitude from an early age: to learn to weigh information and to look in a scientific way at the sources and the interests behind it. I also think everyone should know a little about how social media work and how they manipulate users to some degree. You must realize that free is never free: you always pay with your own data, and that has consequences for your privacy. This is what I call “media literacy”. In fact, I see education in media literacy as a vaccine against many forms of hate speech and polarization.
Do you also work with people from the exact sciences?
I sometimes work with computer scientists, for example on figuring out how information spreads, how algorithms work and how their use can best fit into institutionalized environments. Algorithms are very important for social media in order to control the circulation of information and filter out erroneous information. And that is incredibly complicated, because billions of messages go around the world via the internet every day.
To conclude, do you also use social media yourself?
I've been involved with social media for about twelve years, but I've never been on Facebook myself. I'm someone who keeps track of everything about social media, especially their political and economic aspects, but I don't want to lose all my time constantly checking the latest feeds. I prefer to deal with arguments and reflection rather than with tweets and likes.