ColMathAI: Color, Mathematics and Artificial Intelligence
September 11, 2025 17:00, Moscow
Robustness of Deep Learning Models
D. S. Korzh (a, b)
a) Skolkovo Institute of Science and Technology
b) Artificial Intelligence Research Institute, Moscow
Abstract:
Deep learning models have recently become widespread. However, their vulnerability to inconspicuous adversarial attacks, sensitivity to natural noise and semantic perturbations, and the growing risk of voice spoofing remain pressing issues, especially in high-risk applications such as medicine and biometrics.
The aim of the research is to develop new certified (provable) and empirical methods that ensure the robustness, trustworthiness, and privacy of models without significantly restricting their applicability. The methodology includes certifying image classifiers against compositions of semantic perturbations and prototypical vector models against additive perturbations, based on randomized smoothing and statistical methods, as well as designing universal adversarial perturbations for speaker privacy (anonymization) and developing voice anti-spoofing models.
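To give a flavor of the randomized-smoothing certification mentioned above, here is a minimal, self-contained sketch in the spirit of Cohen et al. (2019). It is illustrative only: `base_classifier` is a hypothetical toy model, and the confidence bound is a crude normal approximation rather than the exact Clopper-Pearson bound used in practice.

```python
# Hedged sketch of randomized-smoothing certification (not the authors' code).
# The smoothed classifier g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I),
# is provably constant within an L2 ball of radius sigma * Phi^{-1}(p_A).
import math
import random
from collections import Counter
from statistics import NormalDist

def base_classifier(x):
    # Hypothetical base model: a toy threshold rule on the input sum.
    return 1 if sum(x) > 0 else 0

def smoothed_predict_and_certify(x, sigma=0.5, n=1000, alpha=0.001, rng=None):
    """Predict with the smoothed classifier and return a certified L2 radius
    (0.0 means the certificate abstains)."""
    rng = rng or random.Random(0)
    counts = Counter()
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[base_classifier(noisy)] += 1
    top_class, top_count = counts.most_common(1)[0]
    # Crude lower confidence bound on p_A via a normal approximation
    # (the original method uses an exact binomial bound).
    p_hat = top_count / n
    z = NormalDist().inv_cdf(1 - alpha)
    p_lower = p_hat - z * math.sqrt(p_hat * (1 - p_hat) / n) - 1 / (2 * n)
    if p_lower <= 0.5:
        return top_class, 0.0
    return top_class, sigma * NormalDist().inv_cdf(p_lower)
```

The certified radius grows with the noise level `sigma` and with the estimated confidence of the smoothed prediction, which is the basic trade-off behind such guarantees.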
Based on the results of this work, a new computational and analytical method is proposed for certifying the robustness of image classifiers to a wide class of compositional transformations, based on a Lipschitz analysis of the model with respect to the perturbation parameters. Improved certification guarantees have been obtained for prototypical models, including, for the first time, for speaker identification tasks. A speech anonymization method using an exponential overall-dispersion loss function and a new perturbation-injection strategy is also presented. In addition, new synthetic-speech detection architectures based on Kolmogorov-Arnold networks are proposed.
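The Lipschitz-based certification idea can be illustrated with a deliberately simplified toy: if the logit margin at the unperturbed parameter is m, and the margin is L-Lipschitz in the transform parameter theta, the prediction cannot change for |theta| < m / L. All names below (the brightness-shift transform, the linear "model") are hypothetical stand-ins, not the authors' method.

```python
# Hedged toy sketch of Lipschitz certification over a transform parameter.
def transform(x, theta):
    # Illustrative parametric "semantic" transform: a brightness shift.
    return [xi + theta for xi in x]

def logits(x):
    # Toy two-class linear classifier standing in for a deep model.
    w0, w1 = [0.5, 0.5], [-0.2, 0.1]
    return [sum(wi * xi for wi, xi in zip(w, x)) for w in (w0, w1)]

def certify_parameter_range(x, lipschitz_const):
    """If the margin at theta=0 is m and |d margin / d theta| <= L,
    the predicted class is certified constant for |theta| < m / L."""
    z = logits(transform(x, 0.0))
    top = max(range(len(z)), key=z.__getitem__)
    m = z[top] - max(zi for i, zi in enumerate(z) if i != top)
    return top, m / lipschitz_const

# For this toy model, margin(theta) = (w0 - w1)·x + theta * sum(w0 - w1),
# so the exact Lipschitz constant is L = |sum(w0 - w1)| = 1.1.
```

For deep models the hard part is bounding L tightly over the composition of transformations, which is what the proposed method's Lipschitz analysis addresses.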
Website:
https://color.iitp.ru/index.php/s/sBWprij9ARxDiHz