Call For Papers: Special Issue on Fighting Fake: Disinformation and Misinformation

Important Dates

  • Submissions deadline: 20 November 2025
  • Publication: July 2026

IEEE Computer magazine welcomes papers that examine how to detect and counteract disinformation and misinformation. Our goal is to identify and distill patterns and anti-patterns from which to learn, both as practitioners and as researchers. 

The rise of Generative AI has revolutionized the way we work, learn, and communicate. From code synthesis to language translation, generative tools promise efficiency and creative support. Yet the same tools pose a growing threat: they make it easy to generate and spread disinformation and misinformation. For software engineers and computer scientists, this represents not just a technical challenge, but a profound ethical responsibility. A few examples illustrate how disinformation and misinformation are fueled by IT systems, and how the software community can counteract them.

A secretive network of around 3,000 "ghost" accounts on GitHub has been manipulating pages on the code-hosting website to promote malware and phishing links. Cybercriminals created fake forks of legitimate repositories and injected malicious runtime code that, upon import, executed shell commands fetched from external URLs during project initialization. Such a fake repository operates like trusted software but delivers hidden malware: code that looks safe may contain logic bombs, backdoors, or spyware. Countermeasures include verifying repository integrity (for example, through signed commits), running static code analysis on any downloaded code, and executing code in a sandbox before deploying or sharing it.
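
As an illustration of the static-analysis countermeasure, the minimal Python sketch below uses the standard ast module to flag two patterns typical of such attacks: calls to shell-execution functions and hard-coded external URLs. The names in SUSPICIOUS_CALLS and the URL heuristic are illustrative assumptions, not a complete scanner.

```python
# Minimal sketch: flag code that may shell out to commands fetched from
# external URLs at import time. The call list and URL heuristic are
# illustrative assumptions; a real scanner needs far broader coverage.
import ast
import sys

SUSPICIOUS_CALLS = {"system", "popen", "exec", "eval", "run", "check_output"}

def flag_suspicious(source: str, filename: str = "<unknown>") -> list[str]:
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        # Calls such as os.system(...) or subprocess.run(...).
        if isinstance(node, ast.Call):
            name = getattr(node.func, "attr", None) or getattr(node.func, "id", None)
            if name in SUSPICIOUS_CALLS:
                findings.append(f"{filename}:{node.lineno}: call to {name}()")
        # Hard-coded URLs that the code may fetch commands from.
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            if node.value.startswith(("http://", "https://")):
                findings.append(f"{filename}:{node.lineno}: embedded URL {node.value!r}")
    return findings

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        print("\n".join(flag_suspicious(f.read(), sys.argv[1])) or "no findings")
```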

Attackers are increasingly cloning senior managers' voices with deepfake technology to spread disinformation within and across companies. A popular scheme creates a sense of urgency: money must be transferred immediately to a supposed client or partner to secure a contract. A well-phrased hallucination can sound more believable than an awkward but accurate human answer. Countermeasures include biometric voice anomaly detection, multi-channel authentication workflows, and monitoring transaction behavior for signs of impersonation fraud. A broader remedy against many kinds of fraud and cyberattack is to demand secure payment channels and to block cryptocurrency payments.
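
To make the multi-channel idea concrete, here is a hypothetical Python sketch of an authorization workflow in which a voice instruction alone never releases a high-risk payment; an independent, pre-registered channel must confirm it. All names (PaymentRequest, confirm_via_channel) are illustrative.

```python
# Hypothetical multi-channel authorization workflow: a cloned voice can
# place the request, but it cannot also control the second channel.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str   # claimed identity, e.g., from a phone call
    amount_eur: float
    urgent: bool     # "pay now or lose the contract" is a red flag

def confirm_via_channel(requester: str, channel: str) -> bool:
    # Stub: in practice, push an approval request to a pre-registered,
    # independently authenticated channel (app, signed email, callback).
    print(f"Out-of-band confirmation requested from {requester} via {channel}")
    return False  # deny by default until the real person approves

def authorize(req: PaymentRequest, threshold_eur: float = 10_000) -> bool:
    # Routine, low-value, non-urgent requests pass; anything urgent or
    # above the threshold needs confirmation on a second channel.
    if req.amount_eur < threshold_eur and not req.urgent:
        return True
    return confirm_via_channel(req.requester, channel="registered_app")

req = PaymentRequest(requester="CFO (caller ID)", amount_eur=250_000, urgent=True)
print(authorize(req))  # False until the out-of-band confirmation succeeds
```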

Attackers use AI to erode public trust, attack democracies, and launch social-engineering attacks. Facebook was exploited to spread anti-Rohingya hate speech and disinformation, fueling real-world ethnic violence. The platform's lack of local-language moderation let dangerous content spread unchecked. UN and Amnesty International reports show that algorithmic amplification played a "determining role" in incitement to violence, deepening societal divisions in Myanmar. For lack of supervision, both human and algorithmic, the platform had become an echo chamber of anti-Rohingya content. Remedies include developing culturally aware content classifiers, deploying local-language content analysis, providing fast and effective tools for reporting fake content, and banning misinformation networks across platforms.
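
A sketch of the local-language point: the routing function below applies a classifier trained for the detected language instead of an English-only default. Here detect_language and the classifiers are stand-ins for real components (for example, a language-identification model and locally trained hate-speech models); the lambdas in the usage example are toy placeholders only.

```python
# Sketch of language-aware moderation routing. An English-only model
# silently misses harmful content in under-resourced languages.
from typing import Callable, Dict

Classifier = Callable[[str], float]  # returns probability of harmful content

def route_and_score(text: str,
                    detect_language: Callable[[str], str],
                    classifiers: Dict[str, Classifier],
                    fallback: Classifier) -> float:
    lang = detect_language(text)
    return classifiers.get(lang, fallback)(text)

# Toy usage with stand-in components: the locally trained model flags
# content that the English-only fallback would score as harmless.
score = route_and_score(
    "example post",
    detect_language=lambda t: "my",      # pretend: detected as Burmese
    classifiers={"my": lambda t: 0.9},   # locally trained classifier
    fallback=lambda t: 0.0,              # English-only default
)
print(score)  # 0.9
```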

Scientific research is severely affected. Across disciplines, thousands of AI-generated fraudulent papers are produced every year by paper mills, and new "scientific" journals are created with the sole mission of disseminating disinformation. Scientific papers must be trustworthy; without due diligence, the reputation of science itself will erode, with serious consequences for human progress. Countermeasures include using metadata and citation-anomaly detectors, deploying tools such as SciDetect to flag suspicious submissions, enforcing cryptographic authorship verification, and demanding a blockchain-like chain of trust for submissions.
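
As one example of a citation-anomaly heuristic, the following sketch flags submissions whose reference lists are dominated by a single venue, a pattern typical of fabricated journals recycling their own citations. The 50% threshold is an illustrative assumption, not a calibrated value.

```python
# Sketch of a simple citation-anomaly detector: flag a submission when one
# venue accounts for a suspiciously large share of its references.
from collections import Counter

def citation_anomaly_score(ref_venues: list[str]) -> float:
    """Share of references taken by the single most-cited venue."""
    if not ref_venues:
        return 0.0
    return max(Counter(ref_venues).values()) / len(ref_venues)

def flag_submission(ref_venues: list[str], threshold: float = 0.5) -> bool:
    return citation_anomaly_score(ref_venues) >= threshold

# A reference list dominated by one (possibly fabricated) journal is flagged.
refs = ["J. Fictional Adv. Sci."] * 8 + ["IEEE Computer", "CACM"]
print(flag_submission(refs))  # True: 80% of citations point to one venue
```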

Each case suggests specific countermeasures. But the current climate of cost saving and pressure to innovate prioritizes speed over safety, and companies frequently push for rapid deployment. OpenAI's early models, for example, generated convincingly human-like text that could nevertheless be racist, misleading, or plainly wrong. Unlike earlier sources of misinformation, such as rumors, AI tools can create plausible-sounding fraud at scale. Worse, such content is often disseminated with the unwitting help of software experts. The risk for our society is not simply misinformation; it is that people stop trusting anything at all.

This is not just a cybersecurity problem. It is a software engineering problem. If we write code that relies on AI outputs, we must trace every assumption and test every dependency. As with classical verification and validation, developers must think about the right side of the "V": are our systems doing what they should, and nothing more? Are our tools trustworthy? That means greater responsibility for software companies, and a need for liability. Software engineers have a historic role to play. Not just to build. But to protect. To verify. To tell the difference between truth and illusion, and to build systems that help others do the same.

Software engineers must act not only as creators, but as validators. Every line of AI-generated code or text must be critically examined. AI should assist, not replace, human review. Statistical analysis and cross-referencing can help identify fake content. Watermarks, blockchain-based provenance, and embedded cryptographic signatures are emerging responses. Yet all of these are only partially effective if users themselves remain unaware or indifferent.
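
To illustrate the signature idea, here is a minimal sketch of detached-signature provenance using the third-party cryptography package: a publisher signs content with a private key, and anyone holding the matching public key can detect tampering. Key distribution and signer registries, the hard parts in practice, are out of scope here.

```python
# Minimal provenance sketch with Ed25519 detached signatures
# (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

content = b"Official statement, image bytes, or generated code."

# Publisher side: sign the content once at release time.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(content)

# Consumer side: verify against the publisher's known public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, content)
    print("Provenance verified: content matches the publisher's signature.")
except InvalidSignature:
    print("Verification failed: content was altered or signer is unknown.")
```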

As computer professionals, we must build systems that detect disinformation rather than inadvertently enable it, and that help restore trust in digital communications. Each of the cases above demonstrates an evolving threat landscape; computer experts and software engineers must therefore build tools and processes that detect disinformation, enforce provenance, and ensure trust in digital systems.

We invite submissions covering any aspect of fighting disinformation and misinformation, including, but not limited to:

  • AI-based detection of misinformation and disinformation
  • Forensic methods to analyze artifacts, such as neural-network fingerprints, inconsistencies, and metadata anomalies
  • Content provenance tracking, such as watermarking, cryptographic anchoring, and blockchain mechanisms that ensure traceable origin and authenticity
  • Multi-modal validation, such as combined voice, image, and text analysis
  • Detection frameworks that create libraries of shared datasets and detection models, enabling early detection of and fast countermeasures against fake content
  • Education and awareness, such as being prepared to verify messages and voices before acting, especially in high-risk environments
  • Best practices and lessons learned from other engineering disciplines
  • Case studies that show how to harden computer systems against misinformation and disinformation

The special issue will focus on practice and target real cases that can serve as good examples to inspire others. We therefore discourage submissions presenting only theoretical models without substantial evidence and examples of their application in practice.


Submission Guidelines:

For author information and guidelines on submission criteria, visit the Author’s Information Page. Please submit papers through the IEEE Author Portal and be sure to select the special issue or special section name. Manuscripts should not be published or currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts.

Submissions for the theme issue must not exceed 6000 words, including figures and tables, which count as 200 words each. Submissions beyond these limits may be rejected without refereeing. Papers within the theme and scope will be peer-reviewed and are subject to editing for magazine style, clarity, organization, and space. We reserve the right to edit the title of all submissions. Be sure to include the name of the special issue to which you are submitting. 

Papers should be written in a style accessible to practitioners. Overly complex, purely research-oriented, or theoretical treatments are not appropriate. Papers should be innovative. IEEE Computer does not republish material published previously in other venues, including other periodicals and formal conference/workshop proceedings, whether previous publication was in print or in electronic form.

In addition to submitting your paper to Computer, you are also encouraged to upload the data related to your paper to IEEE DataPort. IEEE DataPort is IEEE's data platform that supports the storage and publishing of datasets while also providing access to thousands of research datasets. Uploading your dataset to IEEE DataPort will strengthen your paper and will support research reproducibility. Your paper and the dataset can be linked, providing a good opportunity for you to increase the number of citations you receive. Data can be uploaded to IEEE DataPort prior to submitting your paper or concurrent with the paper submission. Thank you!


Guest Editors

  • Christof Ebert, Vector Consulting Services, Germany
  • Jeffrey M. Voas, U.S. National Institute of Standards and Technology (NIST), USA
  • Priyanka Nawalramka, HouseCanary, USA