ADE Extraction at EACL21
Thanks to our continuous research on Adverse Drug Event extraction, we will be at EACL 2021 (19th–23rd April, 2021) with our latest paper: “BERT Prescriptions to Avoid Unwanted Headaches: A Comparison of Transformer Architectures for Adverse Drug Event Detection”. We explore the capabilities of a wide variety of BERT-based architectures on the task of ADE extraction from social media texts. In particular, we focus on the use of in-domain knowledge during pretraining, answering the question of whether (and to what extent) it can actually help in this scenario, and giving some useful “prescriptions” for future research in this field.

Research Group
– Beatrice Portelli (AILAB-Udine)
– Edoardo Lenzi (AILAB-Udine)
– Simone Scaboro (AILAB-Udine)
– Giuseppe Serra (AILAB-Udine)
– Emmanuele Chersoni (Hong Kong Polytechnic University)
– Enrico Santus (Bayer)

Featured pages
– ADE extraction project at AILAB-Udine
– ADE at AILAB-Udine: top results on SMM4H’19
Adverse Drug Event extraction: top results on SMM4H’19 Shared Task
Pharmacovigilance monitors drugs on the market to ensure that unexpected effects (Adverse Drug Events, or ADEs) are immediately identified and actions are taken to minimize their harm. Patients have started reporting such ADEs on social media, health forums and similar outlets instead of using formal reporting methods. Given the need to monitor these sources for pharmacovigilance purposes, systems for the automatic extraction of ADEs are becoming an important research topic in the NLP community, and recent Shared Tasks on ADE extraction have attracted numerous focused contributions. Our research group has been working on an architecture for automatic ADE extraction from social media texts, with a focus on maintaining high performance on different text typologies (from short and noisy tweets to long and more formal medical forum posts). Our latest experiments led us to the top of the leaderboard in one of the most relevant and active Shared Tasks in this field: SMM4H’19 (Social Media Mining […]
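ADE extraction is commonly cast as a sequence-labeling problem, where each token of a post is tagged as part of an ADE mention or not. As a minimal illustration of the data format such systems consume (a generic sketch, not the actual AILAB-Udine pipeline), here is how character-level ADE span annotations can be converted into token-level BIO tags:

```python
# Sketch: convert character-level ADE span annotations into token-level
# BIO tags, the usual input format for sequence-labeling extractors.
# (Illustrative only -- not the specific AILAB-Udine architecture.)

def bio_tags(text, ade_spans):
    """Tokenize on whitespace and assign B-ADE / I-ADE / O tags.

    ade_spans: list of (start, end) character offsets of ADE mentions.
    """
    tokens, tags = [], []
    pos = 0
    for token in text.split():
        start = text.index(token, pos)  # locate token in the raw text
        end = start + len(token)
        pos = end
        tag = "O"
        for s, e in ade_spans:
            if start >= s and end <= e:
                # First token of a span gets B-, subsequent ones get I-
                tag = "B-ADE" if start == s else "I-ADE"
        tokens.append(token)
        tags.append(tag)
    return tokens, tags


tweet = "this med gave me a terrible headache all day"
# "terrible headache" covers characters 19-36
print(bio_tags(tweet, [(19, 36)]))
```

A model such as a BERT-based token classifier is then trained to predict these tags directly from the text.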
Data augmentation techniques for the Video Question Answering task
Video Question Answering (VideoQA) is a task that requires a model to analyze and understand both the visual content of the input video and the textual content of the question, as well as the interaction between them, in order to produce a meaningful answer. In our work we focus on the Egocentric VideoQA task, which exploits first-person videos, given its potential impact on many different fields, such as social assistance and industrial training. Recently, an Egocentric VideoQA dataset, called EgoVQA, has been released. Given its small size, models tend to overfit quickly. To alleviate this problem, we propose several augmentation techniques which give us a +5.5% improvement in final accuracy over the considered baseline.
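The specific augmentation techniques are described in the paper; as a generic illustration of the idea of video-side augmentation against overfitting on small datasets (a hypothetical helper, not the techniques proposed for EgoVQA), one common approach is random temporal cropping of the frame sequence:

```python
import random

def temporal_crop(frames, keep_ratio=0.8, seed=None):
    """Generic video-augmentation sketch: keep a random contiguous window
    covering `keep_ratio` of the clip, preserving frame order.
    (Illustrative only -- not the specific EgoVQA techniques.)
    """
    rng = random.Random(seed)
    n_keep = max(1, int(len(frames) * keep_ratio))
    start = rng.randint(0, len(frames) - n_keep)
    return frames[start:start + n_keep]


clip = list(range(10))                       # stand-in for 10 video frames
augmented = temporal_crop(clip, keep_ratio=0.8, seed=0)
print(augmented)                             # an 8-frame contiguous window
```

Each pass over the training set then sees a slightly different version of every clip, which effectively enlarges a small dataset.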
Marcella Cornia – Giovani ricercatori 2020 Award
Congratulations to Dr. Marcella Cornia for being awarded the Young Researchers Award for Artificial Intelligence and Computer Vision, obtained thanks to her paper “Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model”, written in collaboration with Professor Giuseppe Serra (AILAB-Udine).

Featured articles:
– Young Researchers Award 2020
– Marcella Cornia’s achievement: link 1, link 2
– Marcella’s collaboration with the AILAB-Udine
– The winning paper