Voice Models
Native → Text + Speech as Input and Text + Speech as Output;
Still a WIP; catching up with current open/commercial options and references for DIY
Current Cascading Approach: VAD → ASR → LLM → TTS
Has problems:
Interruptions
First-word latency
Cascading of errors from VAD/ASR
Emotion, tone, and other speech features are lost
State of the Art:
Full Duplex Models (end-to-end continuous speech in/out):
Moshi.chat by Kyutai (pending open source/API release)
LSLM (Language Model Can Listen While Speaking) by ByteDance
Turn-Based Models with Speech as Input:
Ultravox 0.4 by Fixie
Qwen2-Audio-Chat by Qwen Team, Alibaba (Whisper encoder embeds the mel-spectrogram, which is then used as a prefix to the LLM; usage sketch after this list)
GPT-4o voice by OpenAI
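For the turn-based speech-input models above, a minimal usage sketch for Qwen2-Audio-Chat through HF transformers; this assumes the transformers integration, and the exact processor keywords (e.g. `audios=`) and the chat-template format follow the model card, so they may differ between versions:

```python
# Hedged usage sketch for Qwen2-Audio-Chat via HF transformers.
# File name, question text, and generation settings are placeholders.
import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-Audio-7B-Instruct", device_map="auto"
)

# One user turn containing an audio clip plus a text question
conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "speech.wav"},   # local file, placeholder name
        {"type": "text", "text": "What is the speaker asking for?"},
    ]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audio, _ = librosa.load("speech.wav", sr=processor.feature_extractor.sampling_rate)

inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
reply = processor.batch_decode(out[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0]
print(reply)
```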
Yet to read papers folder
People to follow (X.com list)
Notes
Cascading Approach
Minimum latency: 500 ms on cloud via:
VAD: silero-vad on edge/onnx
ASR: whisper via Groq
LLM: Llama via Groq
TTS: Sonic by Cartesia
Or locally via a modular HF pipeline (sketch below)
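A minimal local sketch of that cascade (VAD → ASR → LLM, with TTS left as a comment). The model names are placeholders, not the exact Groq/Cartesia stack listed above:

```python
# Cascaded pipeline sketch: VAD -> ASR -> LLM (-> TTS), all local placeholders.
import torch
from transformers import pipeline

# VAD: silero-vad via torch.hub (an ONNX export exists for edge deployment)
vad_model, vad_utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, *_ = vad_utils

# ASR: Whisper via the HF pipeline (Groq-hosted Whisper replaces this in the cloud path)
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# LLM: small instruct model as a stand-in (Llama via Groq in the cloud path)
llm = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def respond(wav_path: str) -> str:
    audio = read_audio(wav_path, sampling_rate=16000)
    # 1) VAD: skip the rest of the cascade if no speech is present
    if not get_speech_timestamps(audio, vad_model, sampling_rate=16000):
        return ""
    # 2) ASR: transcribe the utterance
    text = asr(wav_path)["text"]
    # 3) LLM: generate a reply to the transcript
    reply = llm(text, max_new_tokens=128, return_full_text=False)[0]["generated_text"]
    # 4) TTS: hand `reply` to a TTS engine (Cartesia Sonic in the cloud path,
    #    or a local model such as Parler-TTS, sketched further below)
    return reply
```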
Ultravox → Llama 3 + Whisper encoder; plans for full duplex; API in beta
Gazelle → Mistral 7B Instruct + wav2vec2 + DPO; API waitlist
The speech signals are encoded into discrete tokens, and the LLM's vocabulary is then expanded with these discrete speech tokens (sketch below)
LSLM architecture
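A minimal sketch of that vocabulary-expansion step, assuming a codebook of 1024 discrete speech units and hypothetical `<spch_i>` token names on a small placeholder base model:

```python
# Sketch: extend an LLM's vocabulary with discrete speech tokens
# (e.g. codec or HuBERT k-means unit IDs); token names and codebook size are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # placeholder base LLM
model = AutoModelForCausalLM.from_pretrained("gpt2")

num_units = 1024                                         # size of the discrete speech codebook
tokenizer.add_tokens([f"<spch_{i}>" for i in range(num_units)])
model.resize_token_embeddings(len(tokenizer))            # new embedding rows get trained on speech-text data

# Text and speech unit tokens can now be mixed in one token stream:
ids = tokenizer("User said: <spch_12><spch_847><spch_3> Assistant:", return_tensors="pt").input_ids
print(ids.shape)
```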
Speech/Text to Speech/Text:
Non-dialogue tasks: LauraGPT, VioLA, VoxtLM
Speech/Text to Text:
Encoder + Adapter style models: following LLaVA, LLaSM reuses a well-trained speech encoder and an LLM, which keeps it resource-friendly. Whisper encodes the speech signal into embeddings, a modal adapter learns to align them with the LLM's text embeddings, and the speech and text embeddings are concatenated into interleaved sequences that are fed to the LLM for supervised fine-tuning. Training has two stages:
Stage 1, modality adaptation pre-training: public ASR datasets; the speech encoder and the LLM are frozen and only the modal adapter is trained to align speech and text embeddings, so very few parameters are updated and this stage is cheap.
Stage 2, cross-modal instruction fine-tuning: cross-modal instruction data gives the model the capacity to handle cross-modal conversations and multi-modal instructions; the speech encoder stays frozen while the adapter and the LLM are updated.
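A minimal sketch of that encoder + adapter pattern, assuming a Whisper encoder, a simple linear adapter, and a small placeholder LLM (this is not LLaSM's actual code; class and model names are illustrative):

```python
# Sketch of the frozen-encoder + trainable-adapter pattern described above.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, WhisperModel

class SpeechAdapterLM(nn.Module):
    def __init__(self, speech_encoder="openai/whisper-small", llm="gpt2"):
        super().__init__()
        self.encoder = WhisperModel.from_pretrained(speech_encoder).encoder
        self.llm = AutoModelForCausalLM.from_pretrained(llm)
        # Modal adapter: maps speech embeddings into the LLM's input embedding space.
        self.adapter = nn.Linear(self.encoder.config.d_model, self.llm.config.hidden_size)
        # Stage 1: freeze encoder and LLM, train only the adapter.
        for p in self.encoder.parameters():
            p.requires_grad = False
        for p in self.llm.parameters():
            p.requires_grad = False  # unfreeze these for stage 2 instruction fine-tuning

    def forward(self, input_features, input_ids):
        # input_features: log-mel features for the Whisper encoder; input_ids: text tokens.
        speech_emb = self.adapter(self.encoder(input_features).last_hidden_state)
        text_emb = self.llm.get_input_embeddings()(input_ids)
        # Concatenate speech and text embeddings into one interleaved sequence for the LLM.
        inputs_embeds = torch.cat([speech_emb, text_emb], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```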
Qwen2-Audio-Chat (latest)
WavLLM by Microsoft; SenseVoice (90 ms); Audio Flamingo (dialogue-trained, tiny base model though)
GAMA, LTU-AS, SALMONN, LLaSM, COSMIC
Vision Language Models
PaLM-E: 540B PaLM + 22B Vision Transformer
LLaVA: pre-trained CLIP visual encoder + LLaMA, instruction-tuned on GPT-4-assisted visual instruction data.
BLIP-2: Flan-T5 with a Q-Former to align visual features with the LM
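A quick BLIP-2 usage sketch through HF transformers, showing the Q-Former-aligned visual features feeding the Flan-T5 LM; the checkpoint name, image path, and prompt are assumptions, not taken from the notes above:

```python
# Hedged BLIP-2 usage sketch (Flan-T5 variant).
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("frame.jpg")                       # placeholder image path
prompt = "Question: what is happening in the image? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)

out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```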
Cascading Models:
Parler-TTS: high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch, and reverberation) and consistent voices.
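A usage sketch following the Parler-TTS README pattern; the checkpoint name and description text are illustrative, and the API may have moved on since:

```python
# Hedged Parler-TTS usage sketch: the description prompt controls voice attributes
# (gender, pace, pitch, reverberation); the prompt is the text to be spoken.
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler_tts_mini_v0.1")
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1")

prompt = "Hey, how are you doing today?"
description = ("A female speaker delivers her words expressively, at a moderate pace, "
               "in a quiet room with very clear audio.")

input_ids = tokenizer(description, return_tensors="pt").input_ids          # voice/style control
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids        # text to speak

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```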