Voice Models
Native → Text + Speech as input and Text + Speech as output
Current Cascading Approach: VAD → ASR → LLM → TTS
Problems:
Interruptions (barge-in) are hard to handle
First-word latency
Cascading of errors from VAD/ASR
Emotion, tone, and other speech features are lost
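A minimal sketch of the cascade, with hypothetical stub functions standing in for real VAD/ASR/LLM/TTS components, to show where first-word latency and error accumulation enter:

```python
# Hypothetical cascading voice pipeline: each stage waits on the previous one,
# so first-word latency is the sum of all stage latencies, and any VAD/ASR
# mistake is passed downstream to the LLM and TTS unchanged.

def detect_speech(audio_chunk: bytes) -> bool:
    """VAD stub: decide whether the chunk contains speech."""
    return len(audio_chunk) > 0  # placeholder logic

def transcribe(audio: bytes) -> str:
    """ASR stub (e.g. a Whisper call in a real system)."""
    return "hello there"  # placeholder transcript

def generate_reply(text: str) -> str:
    """LLM stub: emotion/tone from the audio is already lost at this point."""
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    """TTS stub: renders the reply with a voice unrelated to the user's prosody."""
    return text.encode()

def handle_turn(audio: bytes) -> bytes:
    if not detect_speech(audio):          # VAD error -> the turn is dropped entirely
        return b""
    transcript = transcribe(audio)        # ASR error -> wrong words reach the LLM
    reply = generate_reply(transcript)
    return synthesize(reply)              # user hears audio only after all stages finish

if __name__ == "__main__":
    print(handle_turn(b"\x00\x01fake-pcm-bytes"))
```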
Moshi by Kyutai (pending open-source/API release)
LSLM (Language Model Can Listen While Speaking) by ByteDance
Ultravox 0.4 by Fixie
Qwen-Audio / Qwen2-Audio by Qwen Team, Alibaba (Whisper encoder to embed the mel-spectrogram, then use it as a prefix to the LLM)
GPT-4o voice by OpenAI
Cascading Approach
Minimum latency: 500 ms on cloud (a code sketch follows this list) via:
ASR: Whisper via Groq
LLM: Llama via Groq
Ultravox → Llama 3 + Whisper encoder; plans full duplex, API in beta
Gazelle → Mistral 7B Instruct + wav2vec2 + DPO; API waitlist
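A sketch of the cloud cascade using Groq for both ASR (Whisper) and the LLM (Llama). It assumes the official `groq` Python client and example model IDs ("whisper-large-v3", "llama-3.1-8b-instant"); check Groq's docs for the current names.

```python
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def transcribe(path: str) -> str:
    """ASR stage: Whisper hosted on Groq."""
    with open(path, "rb") as f:
        result = client.audio.transcriptions.create(
            file=(path, f.read()),
            model="whisper-large-v3",   # example model ID
        )
    return result.text

def reply(transcript: str) -> str:
    """LLM stage: Llama hosted on Groq."""
    completion = client.chat.completions.create(
        model="llama-3.1-8b-instant",   # example model ID
        messages=[
            {"role": "system", "content": "You are a concise voice assistant."},
            {"role": "user", "content": transcript},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    text = transcribe("turn.wav")   # ASR
    print(reply(text))              # LLM; a TTS call (e.g. Cartesia) would follow
```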
The speech signals are encoded into discrete tokens, and these discrete speech tokens are then added to (expand) the vocabulary of the LLM; a sketch of this follows the model list below.
Speech/Text to Speech/Text:
Non-dialogue tasks: LauraGPT, VioLA, VoxtLM
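A minimal sketch of the vocabulary expansion mentioned above, using Hugging Face transformers. The base model ("gpt2" as a small stand-in), the "<audio_i>" token format, and the codebook size of 1024 are illustrative assumptions, not any particular paper's setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"  # small stand-in; any causal LM is handled the same way
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# One new token per entry in the speech codec's codebook.
speech_tokens = [f"<audio_{i}>" for i in range(1024)]
num_added = tokenizer.add_tokens(speech_tokens)

# Grow the embedding (and tied LM-head) matrix to cover the new tokens;
# the new rows are randomly initialized and learned during fine-tuning.
model.resize_token_embeddings(len(tokenizer))

# A mixed speech/text sequence can now be tokenized like ordinary text.
ids = tokenizer("transcribe: <audio_12><audio_845><audio_3>", return_tensors="pt")
print(num_added, ids.input_ids.shape)
```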
Speech/Text to Text:
Encoder + Adapter style models: Following LLaVA, LLaSM pairs a well-trained speech encoder with an LLM, which keeps training resource-friendly. Whisper encodes the speech signal into embeddings, and a modal adaptor learns to align those speech embeddings with the LLM's input text embeddings. The speech and text embeddings are concatenated into interleaved sequences, which are fed to the LLM for supervised fine-tuning. Training has two stages. Stage 1: modality-adaptation pre-training on public ASR datasets; the speech encoder and the LLM are frozen, and only the modal adaptor is trained to align speech and text embeddings. Because only the adaptor's small parameter set is updated, this stage is cheap. Stage 2: cross-modal instruction fine-tuning on cross-modal instruction data, which gives the model the ability to handle cross-modal conversations and multi-modal instructions; the speech encoder stays frozen while the modal adaptor and the language model are updated.
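A minimal PyTorch sketch of this encoder + adaptor pattern: a frozen speech encoder (standing in for Whisper) produces embeddings, a small trainable adaptor projects them into the LLM's embedding space, and the projected speech embeddings are concatenated with text embeddings before the frozen LLM. All modules and dimensions here are illustrative placeholders, not the actual LLaSM code.

```python
import torch
import torch.nn as nn

SPEECH_DIM, LLM_DIM, VOCAB = 512, 1024, 32000

speech_encoder = nn.GRU(80, SPEECH_DIM, batch_first=True)    # placeholder for Whisper
llm_embed = nn.Embedding(VOCAB, LLM_DIM)                     # LLM input embeddings
llm_body = nn.TransformerEncoder(                            # placeholder for the LLM
    nn.TransformerEncoderLayer(LLM_DIM, nhead=8, batch_first=True), num_layers=2
)

# Stage 1: only the adaptor is trainable; encoder and LLM stay frozen.
# (Stage 2 would additionally unfreeze the LLM for instruction fine-tuning.)
adaptor = nn.Linear(SPEECH_DIM, LLM_DIM)
for module in (speech_encoder, llm_embed, llm_body):
    for p in module.parameters():
        p.requires_grad = False

def forward(mel: torch.Tensor, text_ids: torch.Tensor) -> torch.Tensor:
    speech_feats, _ = speech_encoder(mel)      # (B, T_speech, SPEECH_DIM)
    speech_embeds = adaptor(speech_feats)      # align to the LLM embedding space
    text_embeds = llm_embed(text_ids)          # (B, T_text, LLM_DIM)
    interleaved = torch.cat([speech_embeds, text_embeds], dim=1)
    return llm_body(interleaved)

out = forward(torch.randn(2, 100, 80), torch.randint(0, VOCAB, (2, 16)))
print(out.shape)  # torch.Size([2, 116, 1024])
```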
Vision Language Models
PaLM-E: 540B PaLM + 22B Vision Transformer (ViT)
LLaVA: pre-trained CLIP visual encoder + LLaMA, instruction-tuned on GPT-4-assisted visual instruction data.
BLIP-2: Flan-T5 with a Q-Former to align visual features with the LM
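A simplified illustration of the Q-Former idea in BLIP-2: a small set of learned query vectors cross-attends to frozen image features, and the attended outputs are projected into the language model's embedding space as a short visual prefix. Dimensions and layer counts are illustrative, not BLIP-2's real configuration.

```python
import torch
import torch.nn as nn

IMG_DIM, Q_DIM, LM_DIM, NUM_QUERIES = 768, 512, 1024, 32

queries = nn.Parameter(torch.randn(1, NUM_QUERIES, Q_DIM))   # learned query tokens
cross_attn = nn.MultiheadAttention(embed_dim=Q_DIM, num_heads=8,
                                    kdim=IMG_DIM, vdim=IMG_DIM, batch_first=True)
to_lm = nn.Linear(Q_DIM, LM_DIM)   # projection into the language model's space

def visual_prefix(image_feats: torch.Tensor) -> torch.Tensor:
    """image_feats: (B, num_patches, IMG_DIM) from a frozen vision encoder."""
    q = queries.expand(image_feats.size(0), -1, -1)
    attended, _ = cross_attn(q, image_feats, image_feats)
    return to_lm(attended)   # (B, NUM_QUERIES, LM_DIM), fed to the LM as a prefix

prefix = visual_prefix(torch.randn(2, 257, IMG_DIM))
print(prefix.shape)  # torch.Size([2, 32, 1024])
```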
Cascading Models:
by Tincans
papers folder
(X.com list)
VAD: on edge/onnx
TTS: Sonic by Cartesia
Or locally via a modular architecture (latest)
by Microsoft (90 ms; dialogue-trained, though a tiny base model)
, SALMONN, COSMIC
Parler-TTS: high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
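A sketch of text-prompt-controlled synthesis in the Parler-TTS style, following the usage pattern from its README; the package name, model ID, and method signatures are quoted from memory here, so check the repo for the current interface.

```python
import torch
import soundfile as sf
from transformers import AutoTokenizer
from parler_tts import ParlerTTSForConditionalGeneration  # assumed package/class names

model_id = "parler-tts/parler-tts-mini-v1"  # example checkpoint
model = ParlerTTSForConditionalGeneration.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The description controls voice attributes; the prompt is the text to be spoken.
description = "A female speaker with a calm voice, slow speaking rate, no background noise."
prompt = "Native speech-to-speech models are starting to replace cascaded pipelines."

input_ids = tokenizer(description, return_tensors="pt").input_ids
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)

sf.write("out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```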