IEEE Computer magazine welcomes papers that examine how to detect and counteract disinformation and misinformation. Our goal is to identify and distill patterns and anti-patterns from which to learn, both as practitioners and as researchers.
The rise of generative AI has revolutionized the way we work, learn, and communicate. From code synthesis to language translation, generative tools promise efficiency and creative support. Yet the same tools pose a growing threat: the generation and spread of disinformation and misinformation. For software engineers and computer scientists, this represents not just a technical challenge, but a profound ethical responsibility. A few examples illustrate how disinformation and misinformation are fueled by IT systems, and how the software community can counteract them.
A secretive network of around 3,000 “ghost” accounts on GitHub has been manipulating pages on the code-hosting website to promote malware and phishing links. Cybercriminals created fake forks of legitimate repositories and injected malicious runtime code that, upon import, executed shell commands fetched from external URLs during project initialization. These fake repositories looked like trusted software but delivered hidden malware. Code that looks safe may contain logic bombs, backdoors, or spyware. Countermeasures include verifying repository integrity (for example, through signed commits), running static code analysis on any downloaded code, and executing code in a sandbox before deploying or sharing it.
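As a rough illustration of the static-analysis countermeasure, the sketch below scans a downloaded Python source file for shell-execution and network-fetch calls of the kind used in the fake forks; the call lists are illustrative assumptions, not a substitute for a production scanner or a sandbox.

import ast
import sys

# Names that commonly indicate shell execution or external fetches; a real
# scanner would use a much richer rule set and data-flow analysis.
SHELL_CALLS = {"system", "popen", "Popen", "call", "check_output", "run"}
NETWORK_CALLS = {"urlopen", "urlretrieve", "get"}

def flag_suspicious(path: str) -> list[str]:
    """Return warnings for shell-execution and network-fetch calls in a file."""
    tree = ast.parse(open(path, encoding="utf-8").read(), filename=path)
    warnings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "attr", getattr(node.func, "id", ""))
            if name in SHELL_CALLS:
                warnings.append(f"{path}:{node.lineno}: shell execution via '{name}'")
            elif name in NETWORK_CALLS:
                warnings.append(f"{path}:{node.lineno}: network fetch via '{name}'")
    return warnings

if __name__ == "__main__":
    for finding in flag_suspicious(sys.argv[1]):
        print(finding)

Such a check is cheap enough to run automatically on every dependency before it is imported or packaged.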
Attackers increasingly clone senior managers’ voices with deepfake technology to spread disinformation within and across companies. A popular misuse is creating a sense of urgency that money must be transferred to a supposed client or partner in order to secure a contract. A well-phrased hallucination can sound more believable than an awkward but accurate human answer. Countermeasures include implementing biometric voice anomaly detection, demanding multi-channel authentication workflows, and monitoring transaction behavior for signs of impersonation fraud. A remedy against a broad range of fraud and cyberattacks is to require secure payment channels and to block cryptocurrency transfers.
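To make the multi-channel idea concrete, the following sketch (channel names and the two-confirmation threshold are illustrative assumptions) holds a voice-initiated payment until it has been confirmed over at least two independent channels.

from dataclasses import dataclass, field

# Channels considered independent of a (possibly cloned) voice call.
INDEPENDENT_CHANNELS = {"signed_email", "callback_to_known_number", "in_person"}

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    initiating_channel: str = "voice_call"
    confirmations: set[str] = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        if channel in INDEPENDENT_CHANNELS:
            self.confirmations.add(channel)

    def may_execute(self) -> bool:
        # Require two confirmations that do not reuse the initiating channel.
        return len(self.confirmations - {self.initiating_channel}) >= 2

request = PaymentRequest(amount=250_000, beneficiary="ACME Ltd.")
request.confirm("signed_email")
print(request.may_execute())   # False: only one independent confirmation
request.confirm("callback_to_known_number")
print(request.may_execute())   # True: two independent channels confirmed

The point is procedural, not cryptographic: no single channel, however convincing it sounds, can release money on its own.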
Attackers use AI to erode public trust, attack democracies, and launch social engineering attacks. Facebook was exploited to spread anti-Rohingya hate speech and disinformation, fueling real-world ethnic violence. The platform’s lack of local-language moderation let dangerous content spread unchecked. UN and Amnesty reports show that algorithmic amplification played a "determining role" in incitement to violence, deepening societal divisions in Myanmar. Due to a lack of supervision, both human and algorithmic, the platform became an echo chamber of anti-Rohingya content. Remedies include developing culturally aware content classifiers, deploying local-language content analysis, providing fast and effective tools for reporting fake content, and banning misinformation networks across platforms.
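One lesson from the Myanmar case is that content in languages without a moderation model must not pass silently. A minimal routing sketch, with placeholder detection and classifier functions, could look as follows.

from typing import Callable

def detect_language(text: str) -> str:
    # Placeholder: a real system would use a language-identification model.
    return "my" if any("\u1000" <= ch <= "\u109f" for ch in text) else "en"

def english_hate_speech_score(text: str) -> float:
    return 0.0  # placeholder for a trained classifier

CLASSIFIERS: dict[str, Callable[[str], float]] = {"en": english_hate_speech_score}

def moderate(text: str) -> str:
    lang = detect_language(text)
    classifier = CLASSIFIERS.get(lang)
    if classifier is None:
        # Never let unmoderated languages pass unchecked.
        return "escalate_to_local_reviewers"
    return "remove" if classifier(text) > 0.9 else "allow"

The essential design choice is the fallback: missing local-language coverage triggers human review instead of default approval.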
Scientific research is severely impacted. Across disciplines, thousands of AI-generated fraudulent papers are produced every year by paper mills and automated writing agents. Even new “scientific” journals are created with the sole mission of disseminating disinformation. Scientific papers must be trustworthy; without due diligence, the reputation of every science will diminish, with serious consequences for human progress. Countermeasures include using metadata and citation-anomaly detectors, deploying tools such as SciDetect to flag suspicious submissions, enforcing cryptographic authorship verification, and establishing a chain of trust for submissions similar to blockchain algorithms.
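The chain-of-trust idea can be illustrated with a hash-chained submission ledger (field names are illustrative; a real system would add digital signatures and timestamps): each record embeds the hash of its predecessor, so later tampering breaks the chain.

import hashlib
import json

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_submission(ledger: list[dict], title: str, authors: list[str]) -> None:
    previous = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"title": title, "authors": authors, "previous_hash": previous}
    record["hash"] = record_hash(record)
    ledger.append(record)

def verify_chain(ledger: list[dict]) -> bool:
    for i, record in enumerate(ledger):
        expected_prev = ledger[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["previous_hash"] != expected_prev or record["hash"] != record_hash(body):
            return False
    return True

ledger: list[dict] = []
append_submission(ledger, "Detecting Paper Mills", ["A. Author"])
print(verify_chain(ledger))   # True
ledger[0]["title"] = "Tampered Title"
print(verify_chain(ledger))   # False: the chain no longer verifies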
Each case suggests specific countermeasures. But the current climate of cost saving and pressure to innovate prioritizes speed over safety. Companies frequently push for rapid deployment. OpenAI’s early models, for example, generated convincingly human-like text that could nonetheless be racist, misleading, or completely wrong. Unlike previous sources of misinformation, such as rumors, AI tools can create plausible-sounding fraud at scale. Worse, that fraud spreads with the naïve support of software experts. The risk for our society is not simply misinformation; it is that people stop trusting anything at all.
This is not just a cybersecurity problem. It is a software engineering problem. If we write code that relies on AI outputs, we must trace every assumption and test every dependency. As with classical verification and validation, developers must think about the right side of the "V": Are our systems doing what they should, and nothing more? Are our tools trustworthy? That means more responsibility for software companies, and a need for liability. Software engineers have a historic role to play. Not just to build. But to protect. To verify. To tell the difference between truth and illusion, and to build systems that help others do the same.
Software engineers must act not only as creators, but as validators. Every line of AI-generated code or text must be critically examined. AI should assist, not replace, human review. Statistical analysis and cross-referencing can help identify fake content. Watermarks, blockchain provenance, and even embedding cryptographic signatures are emerging responses. Yet all of these are only partially effective if users themselves remain unaware or indifferent.
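To make the signature-based provenance idea concrete, here is a minimal sketch using the third-party cryptography package (assumed to be installed); key distribution, watermarking, and blockchain anchoring are deliberately out of scope.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Press release: quarterly results ..."
signature = private_key.sign(content)      # publisher signs the content

try:
    public_key.verify(signature, content)  # consumer checks provenance
    print("provenance verified")
except InvalidSignature:
    print("content altered or not from the claimed source")

Signing proves who published a piece of content and that it has not been altered; it says nothing about whether the content is true, which is why human review remains indispensable.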
As computer professionals, we must build systems that detect disinformation rather than inadvertently enable it, and that help restore trust in digital communications. Each of the cases above demonstrates an evolving threat landscape; computer experts and software engineers must therefore build tools and processes that detect disinformation, enforce provenance, and ensure trust in digital systems.
We invite submissions covering any aspect of detecting and counteracting disinformation and misinformation, including, but not limited to:
The special issue will concern practice and will target real cases that can serve as good examples to inspire others. We therefore discourage submissions presenting only theoretical models without substantial evidence and examples of their application in practice.
Submission Guidelines:
For author information and guidelines on submission criteria, visit the Author’s Information Page. Please submit papers through the IEEE Author Portal and be sure to select the special issue or special section name. Manuscripts must not have been published or be currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts.
Submissions for the theme issue must not exceed 6000 words, including figures and tables, which count as 200 words each. Submissions beyond these limits may be rejected without refereeing. Papers within the theme and scope will be peer-reviewed and are subject to editing for magazine style, clarity, organization, and space. We reserve the right to edit the title of all submissions. Be sure to include the name of the special issue to which you are submitting.
Papers should be written in a style accessible to practitioners. Overly complex, purely research-oriented, or theoretical treatments are not appropriate. Papers should be innovative. IEEE Computer does not republish material published previously in other venues, including other periodicals and formal conference/workshop proceedings, whether previous publication was in print or in electronic form.
In addition to submitting your paper to Computer, you are also encouraged to upload the data related to your paper to IEEE DataPort. IEEE DataPort is IEEE's data platform that supports the storage and publishing of datasets while also providing access to thousands of research datasets. Uploading your dataset to IEEE DataPort will strengthen your paper and will support research reproducibility. Your paper and the dataset can be linked, providing a good opportunity for you to increase the number of citations you receive. Data can be uploaded to IEEE DataPort prior to submitting your paper or concurrent with the paper submission. Thank you!