Submission Deadline: 30 November 2026
Asst. Prof. Olusola Odeyomi
Email: otodeyomi@ncat.edu
Affiliation: Department of Computer Science, North Carolina Agricultural and Technical State University, North Carolina, United States
Research Interests: federated learning, differential privacy, game theory, multi-agent systems, online optimization
1. Issue Introduction
The success of machine learning algorithms today can largely be attributed to the ease with which they can be optimized. Over the years, first-order optimization techniques have emerged as the de facto workhorse of machine learning. With the rapid growth of big data and artificial intelligence, it is increasingly important to explore new optimization methods that are both scalable and computationally efficient. Moreover, as Internet-of-Things (IoT) devices proliferate amid escalating privacy breaches and evolving cybersecurity threats, it is critical for machine learning algorithms to provide strong privacy guarantees and demonstrate robustness against such attacks.
2. Aim and Scope
This special issue therefore focuses on robust and privacy-preserving optimization techniques for artificial intelligence, big data, and Internet-of-Things devices. We invite original contributions on the theory, algorithms, and applications of robust and privacy-preserving machine learning optimization.
3. Suggested Themes
· Supervised, Unsupervised, and Reinforcement Learning – Robust loss functions, privacy-preserving training, and sample-efficient reinforcement learning under adversarial or safety-critical conditions.
· Deep Learning Algorithms – Scalable and memory-efficient optimizers, adversarial robustness, and generalization theory for overparameterized neural networks.
· Evolutionary Algorithms – Black-box robust optimization, multi-objective evolution under privacy constraints, and hybrid evolutionary-gradient methods.
· Differential Privacy – Private stochastic optimization, convergence guarantees for non-convex objectives, and composition-friendly private optimizers.
· Online Optimization and Multi-Armed Bandits – Adversarially robust online learning, privacy-preserving bandits, and non-stationary environments for IoT applications.
· Federated Learning – Heterogeneity-aware optimization, secure aggregation, Byzantine resilience, and communication-efficient training.
· Convex and Non-Convex Optimization – Scalable convex solvers, saddle-point avoidance, zeroth-order methods, and bilevel optimization with robustness guarantees.
· Multi-Agent Systems – Decentralized optimization with privacy, game-theoretic robust learning, and distributed coordination for swarm robotics and IoT.