cff-version: 1.2.0
abstract: "Many real-world applications, from sports analysis to surveillance, benefit from automatic long-term action recognition. In the current deep learning paradigm for automatic action recognition, it is imperative that models are trained and tested on datasets and tasks that evaluate whether such models actually learn and reason over long-term information. In this work, we propose a method to assess how suitable a video dataset is for evaluating models for long-term action recognition. To this end, we define long-term actions by excluding all videos that can be correctly recognized using solely short-term information. We test this definition on existing long-term classification tasks on three popular real-world datasets, namely Breakfast, CrossTask and LVU, to determine whether these datasets truly evaluate long-term recognition. Our method involves conducting user studies in which we ask humans to annotate videos from these datasets. Our study reveals that these datasets can be effectively solved using shortcuts based on short-term information. This repository provides the code and data: the code includes the HTML files for the user studies and the data analysis, and the data includes the input to the user studies (e.g., video URLs) and the responses collected on Amazon Mechanical Turk."
" authors: - family-names: Strafforello given-names: Ombretta - family-names: van Gemert given-names: Jan orcid: "https://orcid.org/0000-0002-6913-0482" - family-names: Schutte given-names: Klamer orcid: "https://orcid.org/0000-0002-9954-0685" title: "Code underlying the publication: "Are current long-term video understanding datasets long-term?"" keywords: version: 1 identifiers: - type: doi value: 10.4121/6bdd2eda-9f73-430c-9e60-685e810d6333.v1 license: CC0 date-released: 2024-05-24