intel-analytics/ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc.
Language: Python
Total stars: 5166
Stars trend:
3 Apr 2024
2am ▏ +1
3am ▍ +3
4am ▋ +5
5am ▏ +1
6am +0
7am +0
8am ▏ +1
9am +0
10am ▌ +4
11am █▊ +14
12pm ██▉ +23
1pm ███▏ +25
#python
#analyticszoo, #bigdl, #distributeddeeplearning, #keras, #llm, #python, #pytorch, #scala, #spark, #tensorflow, #transformers