
Commit f69a87f

Update README.md

1 parent 15c506c commit f69a87f

1 file changed: +1 −1 lines changed


README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -14,7 +14,7 @@ Fundamental research to develop new architectures for foundation models and A(G)
 - Efficiency - [**X-MoE**](https://arxiv.org/abs/2204.09179): scalable & finetunable sparse Mixture-of-Experts (MoE)

 ### Revolutionizing Transformers for (M)LLMs and AI
-> [**BitNet**](https://arxiv.org/abs/2310.11453): 1-bit Transformers for Large Language Models
+- [**BitNet**](https://arxiv.org/abs/2310.11453): 1-bit Transformers for Large Language Models
 - [**RetNet**](https://arxiv.org/abs/2307.08621): Retentive Network: A Successor to Transformer for Large Language Models
 - [**LongNet**](https://arxiv.org/abs/2307.02486): Scaling Transformers to 1,000,000,000 Tokens
```
