Call For Papers: Special Issue on AI Governance and Compliance

Computer seeks submissions for this upcoming special issue.

Important Dates

  • Submission Deadline: 13 April 2026
  • Publication: December 2026

A governance system can be understood as the full set of institutional arrangements, including rules and agents who create them, that regulate transactions within and across the boundaries of economic systems (Hollingsworth, Schmitter & Streeck, 1994). These arrangements encompass both state and non-state organizations and operate through formal and informal rules, norms, and beliefs. In the context of artificial intelligence (AI), governance systems determine how data is collected, shared, and safeguarded, how algorithms are trained and deployed, and how accountability is ensured (Shin & Ahmad, 2025). Effective AI governance is therefore critical to balancing innovation with ethical, legal, and social considerations.

New AI-related regulative institutions are rapidly expanding to address these concerns. Some focus narrowly on specific activities, such as employment, while others provide comprehensive frameworks covering the full spectrum of AI use. For instance, in April 2023, New York City introduced definitive guidelines governing the use of automated employment decision tools in hiring and promotion (Paretti, Ray, Freedberg & McPike, 2023). At the same time, broader regulatory initiatives are underway in major jurisdictions including China, the EU, Japan, the U.K., and the U.S., each seeking to establish rules that ensure AI systems are safe, transparent, and accountable.

In many cases, normative frameworks and rules have emerged to fill regulatory gaps, especially where formal agencies remain underdeveloped or absent (Kshetri, 2024). These mechanisms are typically prescriptive rather than coercive, guiding behavior without the force of law. Such institutions include voluntary guidelines and codes of conduct, technical standards, and certification programs, all of which provide structure and accountability in the absence of comprehensive regulation.

As AI technologies rapidly expand across sectors such as healthcare, finance, education, defense, and government, the need to safeguard responsible use, transparency, and accountability has become more pressing than ever. Yet, despite growing recognition of these challenges, governance mechanisms remain at an early stage of development. Regulatory and oversight frameworks often lag behind technological advances, leaving ethical, legal, and operational blind spots that can undermine trust and exacerbate risks.

This special issue of Computer will contribute to the global conversation around AI governance and compliance. The issue aims to bring together interdisciplinary voices from policy, academia, industry, and civil society to explore strategies for regulating, auditing, and governing AI systems to ensure alignment with human values, social norms, and legal expectations.

AI governance is not only a matter of technical risk management but also of societal trust and democratic accountability (Floridi et al., 2018; Mittelstadt, 2019; Shin et al., 2024). This issue will spotlight global efforts to develop comprehensive regulatory frameworks, such as the EU AI Act (European Commission, 2021) and the NIST AI Risk Management Framework (NIST, 2023), as well as efforts by organizations such as the OECD, ISO/IEC, and the IEEE Standards Association.

We invite high-quality, original contributions that explore topics including, but not limited to, the following:

  • Comparative regulatory frameworks for AI
    • e.g., EU AI Act, U.S. Blueprint for an AI Bill of Rights, China’s AI governance initiatives, UAE’s AI ethics guidelines.
  • AI risk management and compliance engineering
    • Strategies for operationalizing frameworks like the NIST AI RMF; internal governance audits and lifecycle assurance.
  • Responsible AI implementation in practice
    • Sectoral case studies demonstrating organizational compliance programs and ethical deployment mechanisms.
  • Algorithmic transparency and explainability
    • Technical and policy mechanisms for model interpretability in high-risk domains.
  • Data governance and privacy regulations
    • Navigating compliance with GDPR, CCPA, and other cross-border data protection laws; consent, data sovereignty, and digital identity.
  • Bias detection and mitigation
    • Evaluation methods and interventions to reduce algorithmic harm, especially in high-stakes applications.
  • Human-in-the-loop systems and oversight
    • Design implications of shared accountability between humans and machines.
  • Ethics committees and AI oversight boards
    • Institutional innovations for decision review, stakeholder engagement, and social acceptability of AI tools.
  • Compliance automation and AI governance toolkits
    • Tools, standards, and infrastructures enabling “compliance by design” approaches.
  • Normative frameworks and rules for AI governance
    • Voluntary guidelines and codes of conduct, technical standards, and certification programs.
  • Low- and middle-income countries and governance capacity
    • Inclusion challenges in global regulatory development and digital colonialism risks.
  • Future-forward discussions
    • Anticipatory regulation and scenario analysis for foundation models, generative AI, and autonomous systems.

Submission Guidelines

For author information and guidelines on submission criteria, visit the Author’s Information Page. Please submit papers through the IEEE Author Portal and be sure to select the special issue or special section name. Manuscripts should not be published or currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts.

In addition to submitting your paper to Computer, you are encouraged to upload the data related to your paper to IEEE DataPort, IEEE's data platform for storing and publishing datasets, which also provides access to thousands of research datasets. Uploading your dataset to IEEE DataPort will strengthen your paper and support research reproducibility. Your paper and dataset can be linked, giving you a good opportunity to increase the number of citations you receive. Data can be uploaded to IEEE DataPort before or concurrently with paper submission. Thank you!


Questions?

Contact the guest editors at:

  • Nir Kshetri, University of North Carolina at Greensboro
  • Norita Ahmad, American University of Sharjah, United Arab Emirates