ELSA: Acoustic Event-Level Semantic Alignment for Fine-Grained Reference-Free Text-to-Audio Evaluation

Under Review
Anonymous Interspeech submission
The paper is currently under review; links will be made available after publication.
[Teaser figure: illustration of the ELSA concept]

Abstract

Text-to-audio (TTA) generation, which synthesizes audio from natural language descriptions, has been widely studied for its ability to capture precise user intent. To effectively advance TTA models, it is essential to evaluate generated audio reliably without relying on costly human subjective ratings, motivating automatic evaluation metrics that correlate well with human judgments. While recent CLAP-based metrics provide practical reference-free solutions, their coarse-grained text–audio similarity matching often correlates poorly with human ratings.

To address this, we propose ELSA, a reference-free evaluation metric for fine-grained text–audio alignment. ELSA decomposes the generated audio according to the distinct acoustic events derived from the text query and assesses alignment at the event level. Experiments across four TTA benchmarks show that ELSA achieves higher correlation with human subjective ratings than prior metrics, highlighting its effectiveness for reliable TTA evaluation.

Method

[Figure: ELSA model architecture]

ELSA hierarchically evaluates global text–audio matching and fine-grained acoustic-event alignment by combining shared text–audio embeddings with event-level representations extracted via a text parser and a language-queried audio source separation model.
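The hierarchical scoring described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes CLAP-style text and audio embeddings are already computed (for the full query and full audio, and for each parsed event paired with its separated track), and that the global and mean event-level similarities are blended by a hypothetical weight `alpha`.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def elsa_style_score(text_emb, audio_emb,
                     event_text_embs, event_audio_embs,
                     alpha: float = 0.5) -> float:
    """Blend global text-audio similarity with the mean of per-event
    similarities (one separated audio track per parsed acoustic event).
    `alpha` is a hypothetical mixing weight, not taken from the paper."""
    global_sim = cosine(text_emb, audio_emb)
    event_sims = [cosine(t, a)
                  for t, a in zip(event_text_embs, event_audio_embs)]
    # Fall back to the global term when the parser finds no events.
    event_term = float(np.mean(event_sims)) if event_sims else global_sim
    return alpha * global_sim + (1.0 - alpha) * event_term
```

In practice the embeddings would come from a shared text–audio encoder, the event phrases from the text parser, and the per-event tracks from the language-queried source separation model; only the combination step is shown here.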

Qualitative Results

Input text: "Strong blizzard wind in the background with a bell ringing followed by animal footsteps."

Text branch (event parsing with OpenAI GPT-5.2) extracts three acoustic events from the query:
- Event 01: Animal footsteps
- Event 02: A bell ringing
- Event 03: Blizzard wind blowing

Audio branch (language-queried source separation with Meta SAMaudio) separates the input audio into one track per event.

ELSA predicted score: 0.54

Quantitative Results

Analysis

Citation


To appear.