DIME: An Online Tool for the Visual Comparison of Cross-modal Retrieval Models

Title: DIME: An Online Tool for the Visual Comparison of Cross-modal Retrieval Models
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Zhao, T., Choi, J., & Friedland, G.
Published in: International Conference on Multimedia Modeling
Page(s): 729-733
Date Published: 01/2020
Publisher: Springer, Cham
Abstract

Cross-modal retrieval relies on accurate models to retrieve relevant results for queries across modalities such as image, text, and video. In this paper, we build upon previous work by tackling the difficulty of quickly evaluating models both quantitatively and qualitatively. We present DIME (Dataset, Index, Model, Embedding), a modality-agnostic tool that handles multimodal datasets, trained models, and data preprocessors to support straightforward model comparison through a web-browser graphical user interface. DIME inherently supports building modality-agnostic queryable indexes and extracting relevant feature embeddings, and thus effectively doubles as an efficient cross-modal tool for exploring and searching through datasets.
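The "modality-agnostic queryable index" idea can be sketched as follows: once every model embeds its modality into a shared vector space, one index can serve image, text, and video queries alike. This is an illustrative sketch only, not DIME's actual API; the class name, method names, and example items are assumptions.

```python
import numpy as np

class EmbeddingIndex:
    """Hypothetical index over fixed-size embeddings from any modality;
    queries rank stored items by cosine similarity (not DIME's real API)."""

    def __init__(self, dim):
        self.dim = dim
        self.vectors = []  # unit-normalized embeddings
        self.items = []    # associated dataset items (paths, captions, ...)

    def add(self, embedding, item):
        v = np.asarray(embedding, dtype=np.float64)
        self.vectors.append(v / np.linalg.norm(v))
        self.items.append(item)

    def query(self, embedding, k=3):
        q = np.asarray(embedding, dtype=np.float64)
        q = q / np.linalg.norm(q)
        sims = np.stack(self.vectors) @ q  # cosine similarity vs. all items
        top = np.argsort(-sims)[:k]
        return [(self.items[i], float(sims[i])) for i in top]

# Because the index only ever sees vectors, models for different modalities
# that embed into the same space can share one index (items are illustrative).
index = EmbeddingIndex(dim=4)
index.add([1.0, 0.0, 0.0, 0.0], "image_001.jpg")
index.add([0.9, 0.1, 0.0, 0.0], "caption: a dog on grass")
index.add([0.0, 0.0, 1.0, 0.0], "video_042.mp4")
results = index.query([1.0, 0.05, 0.0, 0.0], k=2)
```

A production system would swap the brute-force similarity scan for an approximate nearest-neighbor structure, but the interface — add embeddings from any modality, query with an embedding from any modality — stays the same.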

URL: https://link.springer.com/chapter/10.1007/978-3-030-37734-2_61