As government agencies and tech firms expand their efforts to combat "disinformation," the line between cybersecurity and content moderation blurs, raising alarming questions about free speech and civil liberties.
In recent years, the fight against online misinformation has intensified, with government agencies, tech companies, and cybersecurity firms joining forces to develop new strategies and tools. One such tool, the DISARM framework, has gained significant traction among major global organizations. However, its potential application to target specific individuals, coupled with the controversial "Disinformation Dozen" report, has raised serious concerns about the implications for free speech and civil liberties.
The DISARM Framework
DISARM (Disinformation Analysis and Risk Management) is an open-source framework developed to combat disinformation through coordinated action. Originally launched in 2019, it draws on cybersecurity best practices to help communicators understand disinformation incidents and identify potential countermeasures. The framework has been adopted by several high-profile organizations, including the European Union, United Nations, NATO, and the World Health Organization.
While DISARM's creators emphasize that the framework is descriptive rather than prescriptive, its potential use against specific individuals has raised eyebrows. The mention that the "Disinformation Dozen" could be included as targets in future updates is particularly concerning, given the controversial nature of that designation.
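For readers unfamiliar with frameworks of this kind, the sketch below shows roughly what a descriptive tactic-and-technique taxonomy looks like when written down as data. It is a minimal illustration under stated assumptions only: the technique IDs, names, and countermeasures are invented placeholders and are not taken from the actual DISARM catalog.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Technique:
    # A single tagged behavior. The IDs and names used below are invented
    # placeholders, not entries from the real DISARM framework.
    technique_id: str
    name: str

@dataclass
class Incident:
    # A described (not prescribed) incident record: observed techniques plus
    # candidate countermeasures, in the spirit of a descriptive taxonomy.
    title: str
    techniques: List[Technique] = field(default_factory=list)
    candidate_counters: List[str] = field(default_factory=list)

# Hypothetical example of tagging one incident.
incident = Incident(
    title="Coordinated rumor amplification (illustrative)",
    techniques=[
        Technique("TX0001", "Create inauthentic accounts"),
        Technique("TX0002", "Cross-platform amplification"),
    ],
    candidate_counters=["content labeling", "prebunking messaging"],
)

for t in incident.techniques:
    print(f"{t.technique_id}: {t.name}")
```

The point of the sketch is simply that a taxonomy of this shape is content-neutral on its face; what matters, as discussed below, is whose speech ends up tagged inside it.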
The "Disinformation Dozen" Controversy
In March 2021, the Center for Countering Digital Hate (CCDH) released a report titled "The Disinformation Dozen," claiming that just 12 individuals were responsible for 65% of anti-vaccine content on social media platforms. This report led to widespread condemnation of the named individuals and calls for their censorship, including a statement from President Biden accusing them of "killing people."
However, the report's methodology has been heavily criticized. Meta (formerly Facebook) disputed the findings, stating that the named individuals were responsible for only about 0.05% of all views of vaccine-related content on its platforms. The two figures are not directly comparable: CCDH's 65% was drawn from a sample of anti-vaccine posts it collected, while Meta's 0.05% was measured against all views of vaccine-related content. Despite these flaws, the report's impact was significant, leading to the deplatforming of several individuals and influencing policy discussions.
Government and Intelligence Agency Involvement
The involvement of government agencies in efforts to combat "disinformation" has further complicated the issue. A 2019 paper from the Department of Homeland Security Analyst Exchange Program, titled "Combatting Targeted Disinformation Campaigns: A Whole of Society Issue," used military terminology to describe social media users as "threat actors" and discussed using a "disinformation kill chain" to counter their actions.
This militaristic approach to online speech, combined with the paper's own acknowledgment that constitutional constraints limit government regulation of content, highlights the tension between national security concerns and civil liberties protections.
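The "kill chain" metaphor comes from cybersecurity, where an attack is broken into sequential stages and each stage is paired with a defensive response. The sketch below illustrates that structure in the abstract; the stage names and counter-actions are assumptions chosen for illustration and do not reproduce the DHS paper's actual model.

```python
from enum import Enum, auto

class CampaignStage(Enum):
    # Illustrative stages only; the DHS paper's own "kill chain" terminology
    # is not reproduced here.
    RECONNAISSANCE = auto()
    CONTENT_CREATION = auto()
    AMPLIFICATION = auto()
    PERSISTENCE = auto()

# Hypothetical pairing of each stage with a category of counter-action,
# mirroring how cyber kill chains map attacker steps to defensive responses.
COUNTERMEASURES = {
    CampaignStage.RECONNAISSANCE: "monitor emerging narratives",
    CampaignStage.CONTENT_CREATION: "media provenance checks",
    CampaignStage.AMPLIFICATION: "rate limiting and labeling",
    CampaignStage.PERSISTENCE: "account suspension or deplatforming",
}

for stage, counter in COUNTERMEASURES.items():
    print(f"{stage.name.lower():>20}: {counter}")
```

Seen this way, the design choice is clear: a model built to counter hostile intrusions is being repurposed so that its final stages land on the accounts of individual speakers.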
Implications for Free Speech and Civil Liberties
The convergence of cybersecurity tactics and content moderation raises significant concerns for free speech and civil liberties:
- Treating citizens as "threat actors": The use of military and cybersecurity language to describe individuals expressing opinions online, even if controversial, sets a dangerous precedent.
- Lack of due process: Individuals targeted by these frameworks often have no recourse or ability to defend themselves before facing significant consequences, such as deplatforming.
- Chilling effect on speech: The threat of being labeled as a "disinformation spreader" and potentially targeted by powerful institutions could discourage individuals from expressing dissenting opinions.
- Blurring lines between protected speech and security threats: There's a risk of legitimate debate and protected speech being classified as security threats, justifying more invasive monitoring and intervention.
The Section 230 Debate
The paper from the DHS Analyst Exchange Program noted growing support for amending Section 230 of the Communications Decency Act, which currently shields online platforms from liability for most content posted by their users. Any changes to this law could have far-reaching implications for online speech and the operation of social media platforms.
Conclusion
The development and deployment of frameworks like DISARM, coupled with government involvement in defining and combating "disinformation," represent a significant shift in how online speech is monitored and regulated. While the intention to protect public health and democratic processes is laudable, the potential for overreach and infringement on civil liberties is substantial.
As we move forward, it is crucial to maintain clear distinctions between genuine cybersecurity threats and protected speech, no matter how controversial. Balancing public health concerns with the preservation of free speech and civil liberties will require careful consideration, robust public debate, and strong safeguards against government overreach.
The targeting of specific individuals based on flawed reports, the use of militaristic language to describe online speech, and the blurring of lines between cybersecurity and content moderation all point to a troubling trend. It is essential that we approach these issues with a commitment to protecting both public health and fundamental rights, ensuring that efforts to combat misinformation do not themselves become tools of suppression.
References
1. DISARM Foundation. "DISARM Framework."
2. Center for Countering Digital Hate. "The Disinformation Dozen." March 24, 2021. https://www.counterhate.com/
3. Bickert, Monika. "How We're Taking Action Against Vaccine Misinformation Superspreaders." Meta, August 18, 2021. https://about.fb.com/news/
4. Department of Homeland Security Analyst Exchange Program. "Combatting Targeted Disinformation Campaigns: A Whole of Society Issue." October 2019. https://dp.la/item/