Commit 0392fd5

Add gpt-oss-20b to llm-chatbot notebooks
1 parent f0183e3 commit 0392fd5

File tree: 4 files changed (+10, -2 lines changed)

notebooks/llm-chatbot/README.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -81,6 +81,7 @@ For more details, please refer to [model_card](https://huggingface.co/Qwen/Qwen2
 * **GLM-Z1-32B-0414** - GLM-Z1-32B-0414 is a reasoning model with deep thinking capabilities. This was developed based on GLM-4-32B-0414 through cold start, extended reinforcement learning, and further training on tasks including mathematics, code, and logic. Compared to the base model, GLM-Z1-32B-0414 significantly improves mathematical abilities and the capability to solve complex tasks. You can find more info in [model card](https://huggingface.co/THUDM/GLM-Z1-9B-0414).
 * **Qwen3-1.7/4B/8B/14B** - Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5. You can find more info in [model card](https://huggingface.co/Qwen/Qwen3-8B).
 * **AFM-4.5B** - AFM-4.5B is a 4.5 billion parameter instruction-tuned model developed by Arcee.ai, designed for enterprise-grade performance across diverse deployment environments from cloud to edge. The base model was trained on a dataset of 8 trillion tokens, comprising 6.5 trillion tokens of general pretraining data followed by 1.5 trillion tokens of midtraining data with enhanced focus on mathematical reasoning and code generation. Following pretraining, the model underwent supervised fine-tuning on high-quality instruction datasets. The instruction-tuned model was further refined through reinforcement learning on verifiable rewards as well as for human preference. You can find more info in [model card](https://huggingface.co/arcee-ai/AFM-4.5B).
+* **gpt-oss-20b** - gpt-oss-20b is a 20 billion parameter open-weight model designed for powerful reasoning, agentic tasks, and versatile developer use cases. You can find more info in [model card](https://huggingface.co/openai/gpt-oss-20b).
 
 The image below illustrates the provided user instruction and model answer examples.
```
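For context on how a model listed in this README is typically consumed: the llm-chatbot notebooks export Hugging Face checkpoints to OpenVINO through optimum-intel. The sketch below is illustrative only and not code from this commit; the model id comes from the model card linked above, while the export and generation details are assumptions.

```python
# Illustrative sketch (not from this commit): loading gpt-oss-20b the way the
# llm-chatbot notebooks generally load models, via optimum-intel's OpenVINO
# wrapper. export=True converts the checkpoint to OpenVINO IR on the fly.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "openai/gpt-oss-20b"  # from the model card linked above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```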

notebooks/llm-chatbot/llm-chatbot-generate-api.ipynb

Lines changed: 2 additions & 1 deletion

```diff
@@ -87,7 +87,7 @@
 "\"datasets<4.0.0\" \\\n",
 "\"accelerate\" \\\n",
 "\"gradio>=4.19\" \\\n",
-"\"transformers==4.53.3\" \\\n",
+"\"transformers==4.55.4\" \\\n",
 "\"huggingface-hub===0.35.3\" \\\n",
 "\"einops\" \"transformers_stream_generator\" \"tiktoken\" \"bitsandbytes\"\n",
 "\n",
@@ -446,6 +446,7 @@
 " * scale_estimation: **True**\n",
 " * dataset: **wikitext2**\n",
 "* **AFM-4.5B** - AFM-4.5B is a 4.5 billion parameter instruction-tuned model developed by Arcee.ai, designed for enterprise-grade performance across diverse deployment environments from cloud to edge. The base model was trained on a dataset of 8 trillion tokens, comprising 6.5 trillion tokens of general pretraining data followed by 1.5 trillion tokens of midtraining data with enhanced focus on mathematical reasoning and code generation. Following pretraining, the model underwent supervised fine-tuning on high-quality instruction datasets. The instruction-tuned model was further refined through reinforcement learning on verifiable rewards as well as for human preference. You can find more info in [model card](https://huggingface.co/arcee-ai/AFM-4.5B).\n",
+"* **gpt-oss-20b** - gpt-oss-20b is a 20 billion parameter open-weight model designed for powerful reasoning, agentic tasks, and versatile developer use cases. You can find more info in [model card](https://huggingface.co/openai/gpt-oss-20b).\n",
 "</details>"
 ]
 },
```
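The transformers pin bump above (4.53.3 to 4.55.4) is what actually enables the new model: gpt-oss architectures are only recognized by recent transformers releases. A hedged sanity check along these lines could precede loading; the minimum-version constant is an assumption, not something stated in the commit.

```python
# Hypothetical guard (not part of the commit): fail early if the installed
# transformers is too old to recognize the gpt-oss architecture.
import transformers
from packaging import version

MIN_VERSION = "4.55.0"  # assumption: gpt-oss support landed in the 4.55 line

if version.parse(transformers.__version__) < version.parse(MIN_VERSION):
    raise RuntimeError(
        f"transformers {transformers.__version__} is older than {MIN_VERSION}; "
        "upgrade (the notebooks pin 4.55.4) before loading gpt-oss-20b."
    )
```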

notebooks/llm-chatbot/llm-chatbot.ipynb

Lines changed: 2 additions & 1 deletion

```diff
@@ -88,7 +88,7 @@
 "\"accelerate\" \\\n",
 "\"gradio>=4.19\" \\\n",
 "\"huggingface-hub==0.35.3\" \\\n",
-" \"einops\" \"transformers==4.53.3\" \"transformers_stream_generator\" \"tiktoken\" \"bitsandbytes\"\n",
+" \"einops\" \"transformers==4.55.4\" \"transformers_stream_generator\" \"tiktoken\" \"bitsandbytes\"\n",
 "\n",
 "if platform.system() == \"Darwin\":\n",
 " %pip install -q \"numpy<2.0.0\""
@@ -341,6 +341,7 @@
 " * scale_estimation: **True**\n",
 " * dataset: **wikitext2**\n",
 "* **AFM-4.5B** - AFM-4.5B is a 4.5 billion parameter instruction-tuned model developed by Arcee.ai, designed for enterprise-grade performance across diverse deployment environments from cloud to edge. The base model was trained on a dataset of 8 trillion tokens, comprising 6.5 trillion tokens of general pretraining data followed by 1.5 trillion tokens of midtraining data with enhanced focus on mathematical reasoning and code generation. Following pretraining, the model underwent supervised fine-tuning on high-quality instruction datasets. The instruction-tuned model was further refined through reinforcement learning on verifiable rewards as well as for human preference. You can find more info in [model card](https://huggingface.co/arcee-ai/AFM-4.5B).\n",
+"* **gpt-oss-20b** - gpt-oss-20b is a 20 billion parameter open-weight model designed for powerful reasoning, agentic tasks, and versatile developer use cases. You can find more info in [model card](https://huggingface.co/openai/gpt-oss-20b).\n",
 " </details>\n"
 ]
 },
```

utils/llm_config.py

Lines changed: 5 additions & 0 deletions

```diff
@@ -499,6 +499,11 @@ def qwen_completion_to_prompt(completion):
             "remote_code": False,
             "start_message": DEFAULT_SYSTEM_PROMPT,
         },
+        "gpt-oss-20b": {
+            "model_id": "openai/gpt-oss-20b",
+            "remote_code": False,
+            "start_message": DEFAULT_SYSTEM_PROMPT
+        }
     },
     "Chinese": {
         "minicpm4-8b": {"model_id": "openbmb/MiniCPM4-8B", "remote_code": True, "start_message": DEFAULT_SYSTEM_PROMPT_CHINESE},
```
