Data augmentation techniques for the Video Question Answering task

Video Question Answering (VideoQA) requires a model to analyze and understand both the visual content of the input video and the textual content of the question, as well as the interaction between them, in order to produce a meaningful answer. In our work we focus on Egocentric VideoQA, which uses first-person videos, because of its potential impact on many fields, such as social assistance and industrial training. Recently, an Egocentric VideoQA dataset, called EgoVQA, has been released. Given its small size, models tend to overfit quickly. To alleviate this problem, we propose several augmentation techniques that yield a +5.5% improvement in final accuracy over the considered baseline.
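The abstract does not detail the specific augmentations, so the following is only an illustrative sketch of the kind of clip-level augmentation commonly used when training video models, not the method proposed in the work. The function and parameter names (augment_clip, flip_prob, keep_frames) are hypothetical.

import numpy as np

def augment_clip(clip: np.ndarray, rng: np.random.Generator,
                 flip_prob: float = 0.5, keep_frames: int = 16) -> np.ndarray:
    """Illustrative clip-level augmentation (hypothetical, not the paper's method).

    clip: video tensor of shape (T, H, W, C).
    Returns an augmented clip of shape (keep_frames, H, W, C).
    """
    # Temporal jitter: sample a sorted subset of frame indices.
    t = clip.shape[0]
    idx = np.sort(rng.choice(t, size=min(keep_frames, t), replace=False))
    out = clip[idx]

    # Spatial augmentation: random horizontal flip, applied to every frame
    # so the clip stays temporally consistent.
    if rng.random() < flip_prob:
        out = out[:, :, ::-1, :]

    return out

# Example usage on a dummy 32-frame RGB clip.
rng = np.random.default_rng(0)
dummy = rng.random((32, 112, 112, 3))
aug = augment_clip(dummy, rng)
print(aug.shape)  # (16, 112, 112, 3)

Note that in VideoQA a spatial flip can conflict with questions that refer to left/right, so in practice such augmentations would have to be chosen with the question text in mind.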


Lecture 7th May 2020 – Beyond Hand-Crafted Networks:
Neural Architecture Search

Beyond Hand-Crafted Networks: Neural Architecture Search – Dr. Stefano Alletto. Online lecture – Thursday, 07 May 2020, 08:30 a.m. (GMT+1). With performance on several benchmarks approaching saturation, pushing the state of the art is often a tedious process of hyperparameter tuning and network architecture optimization. Finding the perfect neural network for a given task by hand is often impossible due to time constraints, but what if we could design a system capable of automatically designing architectures, testing their performance, and improving itself by looking at its previous mistakes? This is the goal of neural architecture search (NAS): an automated system that explores search spaces whose size is beyond human capabilities, samples network structures from them, and improves its decision making by using the performance of the architectures it finds as supervision. In this talk, after introducing the task in more detail, I will give an overview of recent NAS approaches, discussing the opportunities and limitations in the field. Finally, […]
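To make the NAS loop described above concrete (sample architectures from a search space, evaluate them, and use the resulting scores as feedback), here is a minimal sketch using plain random search over a toy search space. Everything here is a hypothetical placeholder: the operation list, the evaluate stub, and the scoring are invented for illustration and do not correspond to any specific NAS system discussed in the talk.

import random

# Toy search space: an architecture is a list of (operation, width) choices per layer.
OPERATIONS = ["conv3x3", "conv5x5", "maxpool", "skip"]
WIDTHS = [16, 32, 64]
NUM_LAYERS = 4

def sample_architecture(rng: random.Random) -> list[tuple[str, int]]:
    """Sample one candidate architecture from the search space."""
    return [(rng.choice(OPERATIONS), rng.choice(WIDTHS)) for _ in range(NUM_LAYERS)]

def evaluate(arch: list[tuple[str, int]]) -> float:
    """Placeholder for 'train the sampled network and measure validation accuracy'.
    A real NAS system would train and evaluate the model here; this deterministic
    toy proxy just rewards wider layers so the loop is runnable."""
    return sum(width for _, width in arch) / (NUM_LAYERS * max(WIDTHS))

def random_search(num_trials: int = 20, seed: int = 0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    history = []  # per-architecture scores; smarter controllers learn from these
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        history.append((arch, score))
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score, history

best, score, _ = random_search()
print(score, best)

More advanced approaches replace the blind sampling step with a learned controller (for example an RNN trained with reinforcement learning) or an evolutionary algorithm that exploits the accumulated history as supervision; the outer sample–evaluate–update loop stays the same.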