Yixin Tang1,2, Minqing Zhang1,2,3,*, Peizheng Lai1,2, Ya Yue1,2, Fuqiang Di1,2,*
CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5733-5750, 2025, DOI:10.32604/cmc.2025.064901 - 30 July 2025
Abstract Traditional steganography conceals information by modifying cover data, but steganalysis tools easily detect such alterations. Deep learning-based steganography, meanwhile, often involves high training costs and complex deployment, and diffusion model-based methods face security vulnerabilities, particularly potential information leakage during generation. We propose a fixed neural network image steganography framework based on secure diffusion models to address these challenges. Unlike conventional approaches, our method minimizes cover modifications through neural network optimization, achieving superior steganographic performance in both human visual perception and computer vision analyses. The cover images are generated in an anime style using state-of-the-art diffusion models.
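To make the abstract's core idea concrete, the sketch below illustrates the general fixed neural network steganography principle it alludes to: a frozen decoder network is treated as a shared key, and a small perturbation of the cover image is optimized so the decoder recovers the secret bits. This is a minimal illustrative example, not the authors' implementation; the decoder architecture, image size, loss weights, and optimizer settings are all assumptions chosen for brevity.

```python
# Hypothetical sketch (not the paper's code): fixed-network steganography via
# cover optimization. A frozen, randomly initialized decoder maps an image to
# secret-bit logits; we optimize a small perturbation of the cover so the
# decoder recovers the secret, while penalizing the perturbation's magnitude.
import torch
import torch.nn as nn

torch.manual_seed(0)

num_bits = 64  # length of the secret bit string (illustrative choice)

# Frozen decoder: image -> logits for num_bits secret bits (architecture is illustrative).
decoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_bits),
)
for p in decoder.parameters():
    p.requires_grad_(False)

cover = torch.rand(1, 3, 128, 128)                     # stand-in for a generated cover image
secret = torch.randint(0, 2, (1, num_bits)).float()    # secret message as bits

delta = torch.zeros_like(cover, requires_grad=True)    # perturbation to optimize
opt = torch.optim.Adam([delta], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(300):
    stego = (cover + delta).clamp(0, 1)
    # Recovery loss plus a distortion penalty that keeps the modification small.
    loss = bce(decoder(stego), secret) + 1e-2 * delta.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

recovered = (decoder((cover + delta).clamp(0, 1)) > 0).float()
print("bit accuracy:", (recovered == secret).float().mean().item())
```

The trade-off between the recovery loss and the distortion penalty mirrors the abstract's goal of minimizing cover modifications while keeping the hidden message extractable by the fixed network.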