Security & Privacy (SPY): Call for Papers for the Special Issue on AI Model Security and Privacy in AIoT Environments (IF 1.9)









Introduction



In Artificial Intelligence of Things (AIoT) environments, data collection and analysis are essential to normal system operation. This data, however, often contains personal information, trade secrets, and details critical to infrastructure security, making it a prime target for attackers. Attacks on AIoT systems typically take the form of adversarial examples, data poisoning, model backdoors, and other techniques, all of which can lead to system failures, data theft, or manipulation, introduce fairness and bias problems into models, and pose security threats to systems and users.

For example, in financial technology (FinTech), biometric technologies such as fingerprint, facial, and even voiceprint recognition are often used at the perception layer to authenticate users, and IoT devices transmit user information to AI cores in the cloud or at the edge for computation in the data analytics layer. If the AI models are attacked, users may be unable to complete identity verification, or unauthorized individuals may impersonate them and gain access to the system, leading to financial and other losses. Likewise, in open public services such as air pollution and water quality monitoring, AIoT is often the primary enabling technology; malicious actors may infiltrate the perception layer and replace genuine pollution readings with generated false samples, or sway the judgment of AI models by emitting small amounts of gas or liquid, making it difficult for government agencies to prosecute them. Furthermore, models can be stolen or reverse-engineered, enabling attackers to fabricate counterfeit systems that deceive users. In the face of these threats, the lack of interpretability in native AI models is a significant concern for a generation that relies heavily on AI.
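
The perception-layer manipulation described above can be illustrated with a minimal, FGSM-style evasion sketch against a hypothetical linear classifier. Everything below (the weights, the sensor reading, the perturbation budget `eps`) is invented purely for illustration; real AIoT models and attacks are considerably more complex.

```python
import numpy as np

# Hypothetical linear "pollution classifier" standing in for the AI core of
# a monitoring AIoT system; weights and readings are invented for this sketch.
w = np.array([2.0, -1.0, 0.5])  # model weights over 3 sensor features
b = -0.25                       # bias term

def predict(x):
    """Probability that reading x is classified as 'polluted'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.8, 0.1, 0.3])   # genuine polluted reading
assert predict(x) > 0.5          # the model correctly raises the alarm

# FGSM-style evasion: for a linear model, the gradient of the score with
# respect to the input is simply w, so an attacker who can nudge each sensor
# channel by at most eps moves every feature against sign(w) to suppress the
# alarm, mimicking the "small amounts of gas or liquid" scenario above.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(f"clean: {predict(x):.3f}, adversarial: {predict(x_adv):.3f}")
```

Defenses against exactly this kind of bounded perturbation, such as adversarial training and sensor-side input sanitization, are among the contributions this special issue invites.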
The development and proliferation of AIoT technology offer boundless opportunities but also introduce new security challenges. Only through continuous research and innovation can we find effective methods to protect the security, reliability, and privacy of AI in AIoT environments. This special issue therefore focuses on enhancing the security of AI models within AIoT environments. We invite research contributions presenting innovative solutions and strategies tailored to the escalating security challenges associated with AI model vulnerabilities in AIoT environments.






Topics of interest include, but are not limited to:




  • Privacy and security of cyber-physical systems.

  • Mechanisms to prevent AI model failures and incorrect authentication in AIoT scenarios with biometric recognition requirements.

  • Enhancement of model resilience to adversarial samples in AIoT within the public utility sector.

  • Design of novel attacks against models in AIoT.

  • Identification of poisoned samples in AIoT and prevention of data poisoning attacks that cause model learning failures.

  • Solutions to the imbalance in model training caused by non-identically and independently distributed (non-IID) real-world AIoT data.

  • Security challenges of federated learning (FL) in AIoT.

  • Design of, and defense mechanisms against, membership inference attacks on models in AIoT.

  • Explainability of models in AIoT.



Papers recycled from conference publications cannot be considered for this special issue (SI). However, extended versions of papers accepted at conferences may be submitted to SIs of the journal, provided that at least 33% new material is incorporated into the journal version. Such papers must be clearly identified by the authors at the time of submission, and a detailed explanation of the extensions must be provided in the accompanying cover letter.






Guest Editors:



Han-Chieh Chao (Lead Guest Editor)



Distinguished Chair Professor

Department of Artificial Intelligence

Tamkang University, Taiwan



Hsin-Hung Cho



Associate Professor

Department of Computer Science and Information Engineering

National Ilan University, Taiwan



Sherali Zeadally



Professor

University of Kentucky, USA



Reza Malekian



Professor

Department of Computer Science and Media Technology

Malmö University, Sweden



Submission Guidelines/Instructions



We welcome novel, unpublished, state-of-the-art research manuscripts that are not under consideration by any other journal. Submissions to this special issue should be made only through the SPY journal's online manuscript submission portal; during the submission process, authors should select the manuscript type "Emerging Threats to Security and Privacy and Innovative Countermeasures". Submissions must conform to the layout and format guidelines of the SPY journal. Instructions for contributors are available at: https://onlinelibrary.wiley.com/page/journal/24756725/homepage/forauthors.html



Tentative Deadlines:

Submission portal opens: Sep 15, 2024

Deadline for paper submission: Feb 15, 2025

Notification of Decision: June 1, 2025

Tentative Publication Date: Late 2025