@@ -6,13 +6,17 @@ on:
- main
paths:
- 'notebooks/**'
- '.github/workflows/check-notebookes.yaml'
- 'scripts/**'
- 'snippets/quickstart/**'
- '.github/workflows/check-notebooks.yaml'
pull_request:
branches:
- main
paths:
- 'notebooks/**'
- '.github/workflows/check-notebookes.yaml'
- 'scripts/**'
- 'snippets/quickstart/**'
- '.github/workflows/check-notebooks.yaml'
workflow_dispatch:

jobs:
@@ -51,6 +55,13 @@ jobs:
uv run nbqa mypy .
echo "✅ Mypy type checks passed!"

- name: Check snippet freshness
working-directory: .
run: |
echo "🔍 Checking that generated snippets match notebook source..."
python3 scripts/generate_snippets.py --check
echo "✅ Snippets are up to date!"

- name: Summary
if: success()
run: |
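
A rough sketch of what `scripts/generate_snippets.py` might look like, based only on how the workflow above invokes it (`--check`) and how the new `quickstart_snippets.ipynb` notebook describes it (tagged code cells rendered into `snippets/quickstart/*.mdx`). The cell-metadata key, output paths, and MDX framing below are assumptions, and the model-name templating mentioned in the notebook is omitted:

```python
import json
import sys
from pathlib import Path

NOTEBOOK = Path("notebooks/quickstart_snippets.ipynb")
SNIPPET_DIR = Path("snippets/quickstart")  # assumed output location
FENCE = "`" * 3  # avoid literal triple backticks inside this example


def render_snippets() -> dict[str, str]:
    """Collect code cells tagged with a snippet name and wrap each as an MDX code block."""
    cells = json.loads(NOTEBOOK.read_text())["cells"]
    snippets = {}
    for cell in cells:
        name = cell.get("metadata", {}).get("snippet")
        if cell["cell_type"] == "code" and name:
            code = "".join(cell["source"])
            snippets[name] = f"{FENCE}python\n{code}\n{FENCE}\n"
    return snippets


def main() -> int:
    check = "--check" in sys.argv
    stale = []
    for name, body in render_snippets().items():
        target = SNIPPET_DIR / f"{name}.mdx"
        if check:
            # --check mode: fail if any generated file is missing or differs from the notebook.
            if not target.exists() or target.read_text() != body:
                stale.append(str(target))
        else:
            target.write_text(body)
    if stale:
        print("Out-of-date snippets:")
        print("\n".join(f"  {t}" for t in stale))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```
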
2 changes: 1 addition & 1 deletion docs.json
@@ -36,7 +36,7 @@
"logo": {
"light": "/logo/light.svg",
"dark": "/logo/dark.svg",
"href": "https://liquid.ai"
"href": "/docs/getting-started/welcome"
},
"navbar": {
"links": [
15 changes: 2 additions & 13 deletions docs/fine-tuning/trl.mdx
@@ -25,7 +25,7 @@

## Supervised Fine-Tuning (SFT)[​](#supervised-fine-tuning-sft "Direct link to Supervised Fine-Tuning (SFT)")

[![Colab link](/images/lfm/fine-tuning/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png)](https://colab.research.google.com/github/Liquid4All/docs/blob/main/notebooks/💧_LFM2_5_SFT_with_TRL.ipynb)

The `SFTTrainer` makes it easy to fine-tune LFM models on instruction-following or conversational datasets. It handles chat templates, packing, and dataset formatting automatically. SFT training requires [Instruction datasets](/docs/fine-tuning/datasets#instruction-datasets-sft).

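The shape of a minimal `SFTTrainer` run is sketched below for orientation; the dataset name, output path, and hyperparameter values are placeholders rather than the values used in the notebook:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder instruction dataset: any conversational dataset with a "messages" column works.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-Instruct",  # loaded via AutoModelForCausalLM when passed as a string
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="lfm-sft",             # placeholder output path
        learning_rate=2e-5,               # SFT typically tolerates higher rates than DPO
        per_device_train_batch_size=4,
    ),
)
trainer.train()
```
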
@@ -130,18 +130,18 @@

## Vision Language Model Fine-Tuning (VLM-SFT)[​](#vision-language-model-fine-tuning-vlm-sft "Direct link to Vision Language Model Fine-Tuning (VLM-SFT)")

[![Colab link](/images/lfm/fine-tuning/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png)](https://colab.research.google.com/github/Liquid4All/docs/blob/main/notebooks/💧_LFM2_5_VL_SFT_with_TRL.ipynb)

The `SFTTrainer` also supports fine-tuning Vision Language Models like `LFM2.5-VL-1.6B` on image-text datasets. VLM fine-tuning requires [Vision datasets](/docs/fine-tuning/datasets#vision-datasets-vlm-sft) and a few key differences from text-only SFT:

* Uses `AutoModelForImageTextToText` instead of `AutoModelForCausalLM`
* Uses `AutoProcessor` instead of just a tokenizer
* Requires dataset formatting with image content types
* Needs a custom `collate_fn` for multimodal batching
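
One possible shape for that collate function, sketched under the assumption that each training example carries `messages` and `images` fields (the notebook's actual implementation may differ):

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("LiquidAI/LFM2.5-VL-1.6B")


def collate_fn(examples):
    # Render each conversation with the chat template, keeping the image placeholders in the text.
    texts = [
        processor.apply_chat_template(example["messages"], tokenize=False)
        for example in examples
    ]
    images = [example["images"] for example in examples]

    # Tokenize text and preprocess images into a single padded batch.
    batch = processor(text=texts, images=images, return_tensors="pt", padding=True)

    # Causal-LM labels: copy input_ids and mask padding so it does not contribute to the loss.
    labels = batch["input_ids"].clone()
    labels[labels == processor.tokenizer.pad_token_id] = -100
    batch["labels"] = labels
    return batch
```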

### VLM LoRA Fine-Tuning (Recommended)[​](#vlm-lora-fine-tuning-recommended "Direct link to VLM LoRA Fine-Tuning (Recommended)")

LoRA is recommended for VLM fine-tuning due to the larger model size and multimodal complexity:

```python
from transformers import AutoModelForImageTextToText, AutoProcessor
@@ -288,7 +288,7 @@

## Direct Preference Optimization (DPO)[​](#direct-preference-optimization-dpo "Direct link to Direct Preference Optimization (DPO)")

[![Colab link](/images/lfm/fine-tuning/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png)](https://colab.research.google.com/github/Liquid4All/docs/blob/main/notebooks/💧_LFM2_DPO_with_TRL.ipynb)

The `DPOTrainer` implements Direct Preference Optimization, a method to align models with human preferences without requiring a separate reward model. DPO training requires [Preference datasets](/docs/fine-tuning/datasets#preference-datasets-dpo) with chosen and rejected response pairs.

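The corresponding minimal `DPOTrainer` setup follows the same pattern as SFT; the dataset and hyperparameters below are placeholders chosen for illustration:

```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Placeholder preference dataset with "chosen" and "rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model="LiquidAI/LFM2.5-1.2B-Instruct",
    train_dataset=dataset,
    args=DPOConfig(
        output_dir="lfm-dpo",              # placeholder output path
        beta=0.1,                          # controls deviation from the reference model
        learning_rate=1e-6,                # DPO typically uses lower rates than SFT
        gradient_accumulation_steps=8,     # DPO benefits from larger effective batches
    ),
)
trainer.train()
```
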
@@ -385,23 +385,12 @@
```
</Accordion>

## Other Training Methods[​](#other-training-methods "Direct link to Other Training Methods")

TRL also provides additional trainers that work seamlessly with LFM models:

* **RewardTrainer**: Train reward models for RLHF
* **PPOTrainer**: Proximal Policy Optimization for reinforcement learning from human feedback
* **ORPOTrainer**: Odds Ratio Preference Optimization, an alternative to DPO
* **KTOTrainer**: Kahneman-Tversky Optimization for alignment

Refer to the [TRL documentation](https://huggingface.co/docs/trl) for detailed guides on these methods.

## Tips[​](#tips "Direct link to Tips")

* **Learning Rates**: SFT typically uses higher learning rates (1e-5 to 5e-5) than DPO (1e-7 to 1e-6)
* **Batch Size**: DPO requires larger effective batch sizes; increase `gradient_accumulation_steps` if GPU memory is limited
* **LoRA Ranks**: Start with `r=16` for experimentation; increase to `r=64` or higher for better quality
* **DPO Beta**: The `beta` parameter controls the deviation from the reference model; typical values range from 0.1 to 0.5
* **LoRA Ranks**: Start with `r=16`. Higher ranks increase adapter memory and parameter count. Set `lora_alpha` to `2 * r` (see the sketch after this list)
* **DPO Beta**: The `beta` parameter controls the deviation from the reference model. Start with `0.1`
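
The LoRA-rank tip above translates into a `peft` configuration roughly like the following; the target module names are an assumption and depend on the model architecture:

```python
from peft import LoraConfig

rank = 16  # start small; raise to 64 or higher if quality is insufficient
lora_config = LoraConfig(
    r=rank,
    lora_alpha=2 * rank,  # alpha set to twice the rank, per the tip above
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
# Pass peft_config=lora_config to SFTTrainer or DPOTrainer to train only the adapter weights.
```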

***

46 changes: 0 additions & 46 deletions docs/getting-started/quickstart.mdx

This file was deleted.

6 changes: 3 additions & 3 deletions docs/models/complete-library.mdx
@@ -1,6 +1,6 @@
---
title: "Model Library"
description: "Liquid Foundation Models (LFMs) are a new class of multimodal architectures built for fast inference and on-device deployment. Browse all available models and formats here."
---

<div className="capabilities">
@@ -43,15 +43,15 @@

- **GGUF** — Best for local CPU/GPU inference on any platform. Use with [llama.cpp](/docs/inference/llama-cpp), [LM Studio](/docs/inference/lm-studio), or [Ollama](/docs/inference/ollama). Append `-GGUF` to any model name.
- **MLX** — Best for Mac users with Apple Silicon. Leverages unified memory for fast inference via [MLX](/docs/inference/mlx). Browse at [mlx-community](https://huggingface.co/mlx-community/collections?search=LFM).
- **ONNX** — Best for production deployments and edge devices. Cross-platform with ONNX Runtime across CPUs, GPUs, and accelerators. Append `-ONNX` to any model name.

### Quantization

Quantization reduces model size and speeds up inference with minimal quality loss. Available options by format:

- **GGUF** — Supports `Q2_K`, `Q3_K_M`, `Q4_K_M`, `Q5_K_M`, `Q6_K`, and `Q8_0` quantization levels. `Q4_K_M` offers the best balance of size and quality. `Q8_0` preserves near-full precision.
- **MLX** — Available in `4bit` and `8bit` variants. `8bit` is the default for most models.
- **ONNX** — Supports `FP16` and `INT8` quantization. `INT8` is best for CPU inference; `FP16` for GPU acceleration.
- **GGUF** — Supports `Q4_0`, `Q4_K_M`, `Q5_K_M`, `Q6_K`, `Q8_0`, `BF16`, and `F16`. `Q4_K_M` offers the best balance of size and quality.
- **MLX** — Available in `3bit`, `4bit`, `5bit`, `6bit`, `8bit`, and `BF16`. `8bit` is recommended.
- **ONNX** — Supports `FP32`, `FP16`, `Q4`, and `Q8` (MoE models also support `Q4F16`). `Q4` is recommended for most deployments.
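
As a concrete illustration of the recommended `Q4_K_M` GGUF variant, `llama-cpp-python` can pull a quantized file directly from the Hub; the repository name below follows the `-GGUF` naming convention described above and is an assumption, not a verified link:

```python
from llama_cpp import Llama

# Assumed repo id following the "-GGUF" suffix convention; the glob selects the Q4_K_M file.
llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2.5-1.2B-Instruct-GGUF",
    filename="*Q4_K_M*",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is machine learning?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```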

## Model Chart

@@ -82,7 +82,7 @@
| [LFM2.5-Audio-1.5B](/docs/models/lfm25-audio-1.5b) | [✓](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B) | [✓](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF) | ✗ | [✓](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-ONNX) | Yes (TRL) |
| LFM2 Models | | | | | |
| [LFM2-Audio-1.5B](/docs/models/lfm2-audio-1.5b) | [✓](https://huggingface.co/LiquidAI/LFM2-Audio-1.5B) | [✓](https://huggingface.co/LiquidAI/LFM2-Audio-1.5B-GGUF) | ✗ | ✗ | No |
| **Liquid Nanos** | | | | | |
| [LFM2-1.2B-Extract](/docs/models/lfm2-1.2b-extract) | [✓](https://huggingface.co/LiquidAI/LFM2-1.2B-Extract) | [✓](https://huggingface.co/LiquidAI/LFM2-1.2B-Extract-GGUF) | ✗ | [✓](https://huggingface.co/onnx-community/LFM2-1.2B-Extract-ONNX) | Yes (TRL) |
| [LFM2-350M-Extract](/docs/models/lfm2-350m-extract) | [✓](https://huggingface.co/LiquidAI/LFM2-350M-Extract) | [✓](https://huggingface.co/LiquidAI/LFM2-350M-Extract-GGUF) | ✗ | [✓](https://huggingface.co/onnx-community/LFM2-350M-Extract-ONNX) | Yes (TRL) |
| [LFM2-350M-ENJP-MT](/docs/models/lfm2-350m-enjp-mt) | [✓](https://huggingface.co/LiquidAI/LFM2-350M-ENJP-MT) | [✓](https://huggingface.co/LiquidAI/LFM2-350M-ENJP-MT-GGUF) | [✓](https://huggingface.co/mlx-community/LFM2-350M-ENJP-MT-8bit) | [✓](https://huggingface.co/onnx-community/LFM2-350M-ENJP-MT-ONNX) | Yes (TRL) |
4 changes: 0 additions & 4 deletions notebooks/pyproject.toml
@@ -12,10 +12,6 @@ dev = [
"mypy>=1.14.1",
]

[tool.ruff]
# Exclude certain directories
extend-exclude = ["runnable-examples"]

[tool.ruff.lint]
# Select rules to check - focus on correctness, not style
select = [
179 changes: 179 additions & 0 deletions notebooks/quickstart_snippets.ipynb
@@ -0,0 +1,179 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Quickstart Snippet Sources\n",
"\n",
"This notebook is the **source of truth** for the Python code in `snippets/quickstart/*.mdx`.\n",
"\n",
"Each code cell is tagged with `\"snippet\": \"<name>\"` in its cell metadata.\n",
"The generation script (`scripts/generate_snippets.py`) reads these cells, replaces\n",
"default model names with template variables, and generates the MDX snippet files.\n",
"\n",
"**Do not edit the MDX files directly.** Edit the code cells here, then run:\n",
"```bash\n",
"python3 scripts/generate_snippets.py\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Text Model Snippets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"snippet": "text-transformers"
},
"outputs": [],
"source": [
"from transformers import AutoModelForCausalLM, AutoTokenizer\n",
"\n",
"model_id = \"LiquidAI/LFM2.5-1.2B-Instruct\"\n",
"model = AutoModelForCausalLM.from_pretrained(\n",
" model_id,\n",
" device_map=\"auto\",\n",
" dtype=\"bfloat16\",\n",
")\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"\n",
"input_ids = tokenizer.apply_chat_template(\n",
" [{\"role\": \"user\", \"content\": \"What is machine learning?\"}],\n",
" add_generation_prompt=True,\n",
" return_tensors=\"pt\",\n",
" tokenize=True,\n",
").to(model.device)\n",
"\n",
"output = model.generate(input_ids, max_new_tokens=512)\n",
"response = tokenizer.decode(output[0][len(input_ids[0]):], skip_special_tokens=True)\n",
"print(response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"snippet": "text-vllm"
},
"outputs": [],
"source": [
"from vllm import LLM, SamplingParams\n",
"\n",
"llm = LLM(model=\"LiquidAI/LFM2.5-1.2B-Instruct\")\n",
"\n",
"sampling_params = SamplingParams(\n",
" temperature=0.3,\n",
" min_p=0.15,\n",
" repetition_penalty=1.05,\n",
" max_tokens=512,\n",
")\n",
"\n",
"output = llm.chat(\"What is machine learning?\", sampling_params)\n",
"print(output[0].outputs[0].text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Vision Model Snippets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"snippet": "vl-transformers"
},
"outputs": [],
"source": [
"from transformers import AutoProcessor, AutoModelForImageTextToText\n",
"from transformers.image_utils import load_image\n",
"\n",
"model_id = \"LiquidAI/LFM2.5-VL-1.6B\"\n",
"model = AutoModelForImageTextToText.from_pretrained(\n",
" model_id,\n",
" device_map=\"auto\",\n",
" dtype=\"bfloat16\",\n",
")\n",
"processor = AutoProcessor.from_pretrained(model_id)\n",
"\n",
"url = \"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\"\n",
"image = load_image(url)\n",
"\n",
"conversation = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"image\", \"image\": image},\n",
" {\"type\": \"text\", \"text\": \"What is in this image?\"},\n",
" ],\n",
" },\n",
"]\n",
"\n",
"inputs = processor.apply_chat_template(\n",
" conversation,\n",
" add_generation_prompt=True,\n",
" return_tensors=\"pt\",\n",
" return_dict=True,\n",
" tokenize=True,\n",
").to(model.device)\n",
"\n",
"outputs = model.generate(**inputs, max_new_tokens=256)\n",
"response = processor.batch_decode(outputs, skip_special_tokens=True)[0]\n",
"print(response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"snippet": "vl-vllm"
},
"outputs": [],
"source": [
"from vllm import LLM, SamplingParams\n",
"\n",
"IMAGE_URL = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\n",
"\n",
"llm = LLM(\n",
" model=\"LiquidAI/LFM2.5-VL-1.6B\",\n",
" max_model_len=1024,\n",
")\n",
"\n",
"sampling_params = SamplingParams(\n",
" temperature=0.0,\n",
" max_tokens=256,\n",
")\n",
"\n",
"messages = [{\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": IMAGE_URL}},\n",
" {\"type\": \"text\", \"text\": \"Describe what you see in this image.\"},\n",
" ],\n",
"}]\n",
"\n",
"outputs = llm.chat(messages, sampling_params)\n",
"print(outputs[0].outputs[0].text)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
77 changes: 0 additions & 77 deletions quickstarts/LFM2-1.2B__ollama.md

This file was deleted.
