We have hosted the application Chinese-LLaMA-Alpaca-2 so that it can be run in our online workstations, either with Wine or natively.
Quick description of Chinese-LLaMA-Alpaca-2:
This project is based on the commercially usable Llama-2 model released by Meta and is the second phase of the Chinese LLaMA & Alpaca large model project. It open-sources the Chinese LLaMA-2 base models and the Chinese Alpaca-2 instruction fine-tuned models. These models extend and optimize the Chinese vocabulary of the original Llama-2 and use large-scale Chinese data for incremental pre-training, further improving Chinese semantic understanding and instruction-following performance. The related models support FlashAttention-2 training and a 4K context, which can be extended to 18K+ with the NTK method.
Features:
- Expanded Chinese vocabulary for the Llama-2 model; open-sourced the Chinese LLaMA-2 and Alpaca-2 large models
- Open-sourced pre-training and instruction fine-tuning scripts, so users can further train the models as needed
- Quickly quantize and deploy the large models locally on a personal computer's CPU/GPU
- Currently open-sourced models: Chinese-LLaMA-2 (7B/13B) and Chinese-Alpaca-2 (7B/13B) (for larger models, refer to the first phase of the project)
- Optimized Chinese vocabulary
- Efficient attention based on FlashAttention-2
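The context extension mentioned above relies on the NTK method, which stretches the rotary position embedding (RoPE) frequencies so a model trained at 4K context can attend over longer sequences without fine-tuning. The sketch below illustrates the commonly used NTK-aware base rescaling; the function names and the exact scaling formula are illustrative assumptions, not this repository's API.

```python
# Sketch of NTK-aware RoPE scaling (assumed formula, not the project's code).
def ntk_scaled_rope_base(base: float = 10000.0, dim: int = 128,
                         scale: float = 4.0) -> float:
    """Raise the RoPE frequency base so positions beyond the trained
    context interpolate smoothly; `scale` ~ target_len / trained_len."""
    return base * scale ** (dim / (dim - 2))

def rope_inv_freq(base: float, dim: int) -> list[float]:
    # Per-dimension-pair inverse frequencies used by rotary embeddings.
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

# Extending a 4K-context model to ~16K (scale=4) raises the base,
# which lowers the frequencies and stretches their wavelengths.
orig = rope_inv_freq(10000.0, 128)
scaled = rope_inv_freq(ntk_scaled_rope_base(10000.0, 128, 4.0), 128)
```

The highest-index (lowest-frequency) components are stretched the most, which is why this trick preserves short-range behavior while extending long-range addressing.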
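The local CPU/GPU deployment bullet refers to running quantized versions of the models (in practice via tools such as llama.cpp). As a conceptual illustration only, here is a minimal symmetric int8 quantize/dequantize round trip showing the idea of trading a little precision for a ~4x smaller memory footprint; it is not the project's deployment code.

```python
# Minimal symmetric int8 quantization sketch (conceptual, not the repo's code).
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    # One scale for the whole tensor: map the largest magnitude to 127.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    # Recover approximate float weights from the int8 codes.
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Real deployments quantize per block or per channel and use 4-bit formats as well, but the accuracy/size trade-off works the same way.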
Programming Language: Python.
©2024. Winfy. All Rights Reserved.
By OD Group OU – Registry code: 1609791 -VAT number: EE102345621.