Voice Models

Native → text + speech as input and text + speech as output.


Still WIP: catching up on current open-source/commercial options and references for DIY.

Current cascading approach: VAD → ASR → LLM → TTS (a minimal sketch follows the problem list below).

Known problems:

  • Interruptions (barge-in) are hard to handle

  • First-word latency

  • Errors cascade from the VAD/ASR stages into everything downstream

  • Emotion, tone, and other speech features are lost at the transcription step
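A minimal sketch of the cascade, using hypothetical stage functions (detect_speech, transcribe, generate_reply, synthesize are placeholders, not a real library API) just to show where latency accumulates and where errors and paralinguistic information get dropped:

```python
# Hypothetical stage functions standing in for real VAD / ASR / LLM / TTS components.
def detect_speech(audio: bytes) -> bool: ...   # VAD: is the user actually speaking?
def transcribe(audio: bytes) -> str: ...       # ASR: audio -> text (tone/emotion are dropped here)
def generate_reply(text: str) -> str: ...      # LLM: text -> text
def synthesize(text: str) -> bytes: ...        # TTS: text -> audio

def voice_turn(mic_audio: bytes) -> bytes | None:
    if not detect_speech(mic_audio):           # a VAD miss loses the whole turn
        return None
    transcript = transcribe(mic_audio)         # an ASR error propagates to every later stage
    reply_text = generate_reply(transcript)    # generation starts only after the full transcript
    return synthesize(reply_text)              # the user hears nothing until TTS returns audio
```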

State of the Art:

Full Duplex Models (end-to-end continuous speech in/out):

  • Moshi.chat by Kyutai (pending open source/API release)

  • LSLM (Language Model Can Listen While Speaking) by ByteDance

Turn-Based Models with Speech as Input:

  • Ultravox 0.4 by Fixie

  • Qwen2-Audio-Chat by Qwen Team, Alibaba (Whisper encoder to embed the mel spectrogram, then used as a prefix to the LLM)

  • AnyGPT, SpeechGPT/2, llama-3-s

  • Gazelle 0.2

  • Shuka v1 by Sarvam (Indic)

  • GPT-4o voice by OpenAI


Notes

Cascading Approach

Minimum latency: ~500 ms on cloud via

  • ASR: Whisper via Groq

  • LLM: Llama via Groq
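A rough timing sketch of that cloud cascade, assuming the groq Python SDK's OpenAI-style interface; the model ids and file handling here are assumptions to check against Groq's current catalog:

```python
import time
from groq import Groq  # assumes the official groq SDK; GROQ_API_KEY read from the environment

client = Groq()

t0 = time.time()
with open("turn.wav", "rb") as f:                      # one recorded user turn (placeholder file)
    asr = client.audio.transcriptions.create(
        file=("turn.wav", f.read()),
        model="whisper-large-v3",                      # assumed model id
    )
t_asr = time.time()

chat = client.chat.completions.create(
    model="llama-3.1-8b-instant",                      # assumed model id
    messages=[{"role": "user", "content": asr.text}],
    max_tokens=64,
)
t_llm = time.time()

print(f"ASR {1000 * (t_asr - t0):.0f} ms | LLM {1000 * (t_llm - t_asr):.0f} ms")
# TTS adds another network hop before the caller hears the first word.
```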

Ultravox → Llama 3 + Whisper encoder, plans for full duplex, API in beta

Gazelle → Mistral 7B Instruct + wav2vec2 + DPO, API waitlisted

Discrete-token approach: speech signals are encoded into discrete tokens, and those speech tokens are then added to (expand) the vocabulary of the LLM (sketch below).
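A minimal sketch of that vocabulary expansion with Hugging Face transformers; GPT-2 and 1024 units are stand-ins for whatever LLM and speech tokenizer a given paper actually uses:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # stand-in LLM
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One new token per discrete speech unit (e.g. codec codes or HuBERT cluster ids).
num_units = 1024
tokenizer.add_tokens([f"<speech_{i}>" for i in range(num_units)], special_tokens=True)

# Grow the embedding matrix (and tied LM head) so the new ids get trainable rows.
model.resize_token_embeddings(len(tokenizer))

# A quantized utterance such as [12, 7, 998] is then serialized as text-like tokens
# "<speech_12><speech_7><speech_998>" and interleaved with ordinary text in training data.
ids = tokenizer("".join(f"<speech_{u}>" for u in [12, 7, 998]), return_tensors="pt").input_ids
print(ids.shape)  # (1, 3): each speech unit maps to a single new token id
```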

Speech/Text to Speech/Text:

Non-dialogue tasks: LauraGPT, VioLA, VoxtLM

Speech/Text to Text:

  • Encoder + adapter style models (e.g. LLaSM): following LLaVA's recipe, a well-trained speech encoder is paired with an existing LLM, which keeps training resource-friendly. Whisper encodes the speech signal into embeddings, a modal adaptor learns to align those speech embeddings with the LLM's input text embeddings, and the speech and text embeddings are concatenated into interleaved sequences fed to the LLM for supervised fine-tuning. Training runs in two stages: (1) modality-adaptation pre-training on public ASR datasets, with the speech encoder and LLM frozen and only the small modal adaptor trained to align speech and text embeddings (cheap, since few parameters update); (2) cross-modal instruction fine-tuning on cross-modal instruction data, with the speech encoder frozen while the modal adaptor and the LLM are updated, giving the model the capacity for cross-modal conversations and multi-modal instructions. A minimal adapter sketch follows.
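A minimal PyTorch sketch of such an adapter; the dimensions (Whisper-large encoder width 1280, a 4096-wide LLM) and the two-layer MLP are assumptions, not LLaSM's exact design:

```python
import torch
import torch.nn as nn

class SpeechAdapter(nn.Module):
    """Projects frozen speech-encoder features into the LLM's input embedding space."""
    def __init__(self, speech_dim: int = 1280, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(speech_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, speech_frames, speech_dim) from the frozen encoder
        return self.proj(speech_feats)            # (batch, speech_frames, llm_dim)

# Stage 1, conceptually: encoder and LLM frozen, only the adapter updates.
#   speech_embeds = adapter(whisper_encoder(mel))                  # trainable path
#   text_embeds   = llm.get_input_embeddings()(text_ids)           # frozen
#   inputs_embeds = torch.cat([speech_embeds, text_embeds], dim=1)
#   loss = llm(inputs_embeds=inputs_embeds, labels=labels).loss    # loss on text targets only
# Stage 2 unfreezes the LLM (encoder stays frozen) and trains on cross-modal instruction data.
```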

Vision Language Models

  • PaLM-E: 540B PaLM + 22B Vision Transformer

  • LLaVA: pre-trained CLIP visual encoder + LLaMA, instruction-tuned on GPT-4-assisted visual instruction data

  • BLIP-2: Flan-T5 with a Q-Former to align visual features with the LM

Cascading Models:

by Tincans

Yet to read: papers folder

People to follow (X.com list)

VAD: silero-vad on edge/ONNX
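Silero VAD's documented torch.hub entry point, for reference; the input file name is a placeholder, and onnx=True switches to the ONNX runtime variant:

```python
import torch

# Downloads the Silero VAD model on first use; pass onnx=True to run via onnxruntime instead.
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, save_audio, read_audio, VADIterator, collect_chunks = utils

wav = read_audio("mic_capture.wav", sampling_rate=16000)   # placeholder recording
timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(timestamps)  # [{'start': ..., 'end': ...}, ...] offsets in samples
```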

TTS: Sonic by Cartesia

Or locally via a modular HF pipeline

LSLM architecture

Qwen2-Audio-Chat (latest)

by Msft (90 ms) (dialogue trained, tiny base model though)

WavLLM, SenseVoice, Audio Flamingo, GAMA Audio, LTU-AS, SALMONN, COSMIC

Parler-TTS: high-quality, natural-sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation), plus consistent voices.
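Roughly the README-style Parler-TTS usage; the checkpoint id and the description/prompt text are assumptions:

```python
import torch
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

repo = "parler-tts/parler-tts-mini-v1"                  # assumed checkpoint id
device = "cuda" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained(repo).to(device)
tokenizer = AutoTokenizer.from_pretrained(repo)

# The description controls voice attributes; the prompt is the text that gets spoken.
description = "A calm female speaker, close-up recording, no background noise, moderate pace."
prompt = "Hey, thanks for calling. How can I help you today?"

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_ids)
sf.write("parler_out.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```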
