This article is cited in 2 scientific papers.
Topical issue
Attacks on machine learning models based on the PyTorch framework
T. M. Bidzhiev, D. E. Namiot (Lomonosov Moscow State University)
Abstract:
This research examines the cybersecurity implications of training neural networks in cloud-based services. Although neural networks are widely used to solve IT problems, the resource-intensive nature of their training has driven increased reliance on cloud services, and this dependence introduces new cybersecurity risks. The study focuses on a novel attack method that exploits neural network weights to covertly distribute hidden malware, exploring seven embedding methods and four trigger types for malware activation. The paper also introduces an open-source framework that automates code injection into neural network weight parameters, allowing researchers to investigate and counteract this emerging attack vector.
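To make the attack vector concrete, the sketch below illustrates the general idea of hiding a payload in weight parameters. It is a minimal illustration, not a reproduction of any of the paper's seven embedding methods: it assumes least-significant-bit substitution in the mantissas of float32 weights, using only the standard library (in PyTorch, the same bit manipulation could be applied to the flattened float32 tensors of a model's `state_dict`). The function names `embed_payload` and `extract_payload` are hypothetical.

```python
import struct

def f32_to_bits(x: float) -> int:
    # reinterpret a float32 value as its 32-bit IEEE 754 pattern
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_f32(b: int) -> float:
    # reinterpret a 32-bit pattern as a float32 value
    return struct.unpack("<f", struct.pack("<I", b))[0]

def embed_payload(weights, payload: bytes):
    # hide one payload byte in the lowest 8 mantissa bits of each weight;
    # the perturbation is at most 255 ulps, so model behavior barely changes
    if len(payload) > len(weights):
        raise ValueError("payload too large for this weight vector")
    stego = list(weights)
    for i, byte in enumerate(payload):
        bits = f32_to_bits(stego[i])
        stego[i] = bits_to_f32((bits & ~0xFF) | byte)
    return stego

def extract_payload(weights, n: int) -> bytes:
    # recover n hidden bytes from the low 8 bits of the first n weights
    return bytes(f32_to_bits(w) & 0xFF for w in weights[:n])
```

A defender scanning for this pattern would look for statistical anomalies in the low-order mantissa bits, which are near-uniform in trained weights but structured when they carry ASCII or machine code.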
Keywords:
neural networks, malware, steganography, triggers.
Citation:
T. M. Bidzhiev, D. E. Namiot, “Attacks on machine learning models based on the PyTorch framework”, Avtomat. i Telemekh., 2024, no. 3, 38–50; Autom. Remote Control, 85:3 (2024), 263–271
Linking options:
https://www.mathnet.ru/eng/at16363
https://www.mathnet.ru/eng/at/y2024/i3/p38