This publication is cited in 2 scientific papers (2 papers in total)
IMAGE PROCESSING, PATTERN RECOGNITION
Mutual modality learning for video action classification
S. A. Komkov (a,b), M. D. Dzabraev (a,b), A. A. Petiushko (a,b)
a) Lomonosov Moscow State University
b) Huawei Moscow Research Center, Smolenskaya ploshchad 7–9, Moscow, 121099, Russia
Abstract:
Models for video action classification are progressing rapidly. However, the performance of such models can still be easily improved by ensembling them with the same models trained on different modalities (e.g., optical flow). Unfortunately, using several modalities during inference is computationally expensive. Recent works examine ways to integrate the advantages of multi-modality into a single RGB model, yet there is still room for improvement. In this paper, we explore various methods to embed the power of an ensemble into a single model. We show that proper initialization, as well as mutual modality learning, enhances single-modality models. As a result, we achieve state-of-the-art results on the Something-Something-v2 benchmark.
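The mutual modality learning idea mentioned above can be illustrated with a minimal sketch: alongside its usual cross-entropy loss, each network is nudged toward the predictive distribution of a peer trained on another modality (here, an RGB model distilling from an optical-flow model). The function names, the weighting factor `alpha`, and the toy logits below are illustrative assumptions, not the authors' actual training code.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q, eps=1e-12):
    # KL(p || q) = sum_i p_i * log(p_i / q_i), with eps for stability.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mutual_loss(logits_rgb, logits_flow, label, alpha=0.5):
    """Supervised cross-entropy on the ground-truth label plus a KL
    mimicry term that pulls the RGB model's prediction toward the
    optical-flow model's prediction (hypothetical loss weighting)."""
    p_rgb = softmax(logits_rgb)
    p_flow = softmax(logits_flow)
    ce = -math.log(p_rgb[label] + 1e-12)   # supervised term
    mimicry = kl_div(p_flow, p_rgb)        # match the peer modality
    return ce + alpha * mimicry

# Toy example: a 3-class clip with ground-truth class 0.
loss = mutual_loss([2.0, 0.5, -1.0], [1.5, 1.0, -0.5], label=0)
```

In a real training loop both networks would receive such a mimicry term symmetrically and be updated jointly, so each modality distills knowledge into the other while only the RGB model is kept for inference.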
Keywords:
video recognition, video action classification, video labeling, mutual learning, optical flow
Received: 13.01.2023 Accepted: 29.03.2023
Citation:
S. A. Komkov, M. D. Dzabraev, A. A. Petiushko, “Mutual modality learning for video action classification”, Компьютерная оптика, 47:4 (2023), 637–649
Links to this page:
https://www.mathnet.ru/rus/co1165 https://www.mathnet.ru/rus/co/v47/i4/p637