The global disinformation landscape is always evolving. Stay up to date on the latest disinformation tools and trends around the world with our data-driven analyses.
September 4, 2023
ChatGPT vs. Bard: Unveiling the Battle against Disinformation and Creative Output
By testing and comparing the generative AI chatbots ChatGPT and Bard along three dimensions – (i) prevention, (ii) circumvention, and (iii) creativity – we gain insights into the chatbots’ potential vulnerabilities, their responses to misleading information, and the extent to which they can be manipulated to propagate disinformation.
June 21, 2023
An Ambivalent Alliance: How Authoritarian Regimes Use – and Fear – Generative AI
Generative AI models are now being developed in authoritarian states and have the potential to make regulation in democratic states ineffective. The user-friendly nature of these models creates a proliferation risk – they may spread their intentional biases beyond the borders of their countries of origin.
April 4, 2023
From Prompt to Problematic: How ChatGPT Reproduces Country-Specific Misinformation
ChatGPT can easily be used to generate propagandistic and harmful narratives in different languages around the world. To investigate this, we tested three country-specific narratives in three different languages (Portuguese, English, and German), attempting to circumvent the chatbot’s safety restrictions.
December 19, 2022
Going Beyond the Radar: Emerging Threats, Emerging Solutions
Understanding how disinformation campaigns unfold in different country contexts, and which counterstrategies (non-)governmental stakeholders develop to tackle disinformation at the local level, is essential to fighting information manipulation holistically.
December 2, 2022
Worth More than 1,000 words: The Disinformation Potential of Text-to-Video Generators
Innovations in AI-powered image creation have reached a fever pitch this year. New technologies that allow users to create realistic images from simple text prompts, such as OpenAI’s DALL-E 2, Google Brain’s Imagen, and Stability AI’s Stable Diffusion, brought with them distinct new potential for disinformation.
December 1, 2022
Is AI Undermining Trust Online? ChatGPT, Large Language Models, and Disinformation
ChatGPT has caused quite a stir on social media, attracting more than one million users in just five days. But what happens when ChatGPT mixes false and correct information, or fabricates disinformation outright? How will users distinguish between a text produced by a human and one produced by an advanced language model?
November 30, 2022
WhatsApp’s Paradox Reality: Disinformation Tactics during the 2022 Brazilian Elections
Since 2018, Brazil has seen the development of a true disinformation ecosystem, characterised by powerful actors, mass messaging, and the production and dissemination of falsified content. This disinformation ecosystem has been responsible for an avalanche of false information that confuses the population and impacts institutions.
October 12, 2022
Pay to Pray: The Privacy Pitfalls of Faith-based Mobile Apps
Investigations into several faith-based apps reveal that many harvest sensitive information and sell it to opaque third-party vendors. Such practices have the potential to provide malicious actors with the information they need to craft targeted disinformation campaigns.
October 10, 2022
Stable Diffusion, Open Access Image Generation and Disinformation
Stable Diffusion is one of the most powerful AI text-to-image generators ever developed. It is freely available and comes with few restrictions. That raises ethical questions – especially regarding racist or sexually explicit content.
September 23, 2022
What a Pixel Can Tell: Text-to-Image Generation and its Disinformation Potential
Text-to-image-generation is an impressive technology with fascinating potential and legitimate uses. But it could have daunting effects on our democratic public discourse.