AIIR Lab Earns Best Resource Paper Award at JCDL 2025

The Artificial Intelligence and Information Retrieval (AIIR) Lab in the Department of Computer Science at the University of Southern Maine (USM) had a strong presence at the 2025 ACM/IEEE Joint Conference on Digital Libraries (JCDL 2025), presenting four research papers. Among them, "MAT-VB: Mathematical Text–Vision Benchmark" was selected as the conference's Best Resource Paper.

These projects were conducted at the AIIR Lab and involved a diverse group of researchers, including junior and senior undergraduate computer science students, graduate students, and high school interns. The recognition highlights the lab’s collaborative research environment and its commitment to mentoring students at all levels.

The four accepted papers are:

  • MAT-VB: Mathematical Text–Vision Benchmark (Best Resource Paper)
    Authored by Behrooz Mansouri, Aidan Bell, Nicholas Largey, and Abigail Pitcairn, this resource paper introduces a new benchmark that evaluates the ability of Multimodal Large Language Models (MLLMs) to caption and interpret mathematical images. The benchmark challenges state-of-the-art models on their understanding of complex mathematical visual content.
  • MathMex-V2: A Large Language Model–Enabled Math Search Engine
    Authored by Clayton Durepos, Ian McLaughlin, Connor Lund, Anthony Siebenmorgen, Nicholas Largey, Abigail Pitcairn, and Behrooz Mansouri, this demo paper presents the latest version of the MathMex search engine (www.mathmex.com). The system incorporates new AI-driven features, including retrieval-augmented generation (RAG) models and advanced PDF reader tools.
  • From Speech to LaTeX: Large Language Models for Mathematical Accessibility in Digital Libraries
    Developed by Abigail Pitcairn, Clayton Durepos, Nicholas Largey, and Behrooz Mansouri, this work introduces a new dataset and models for studying how spoken mathematical expressions can be converted into LaTeX and leveraged in math-focused search engines, improving accessibility in digital libraries.
  • Multimodal Emotion Classification in Artwork: A Comparative Study Across Modalities
    In this study, Clayton Durepos, Abigail Pitcairn, and Behrooz Mansouri investigate the effectiveness of unimodal and multimodal AI models for emotion analysis in artwork. The results demonstrate that multimodal approaches can outperform unimodal models for this task.

These research efforts were supported by the National Science Foundation (NSF) and the Undergraduate Research Opportunities Program (UROP) at the University of Southern Maine.