UFund-Me/Qbot
[🔥updating ...] Automated quantitative trading robot. Qbot is an AI-oriented quantitative investment platform that aims to realize the potential of AI technologies and put them to work in quantitative investment. https://ufund-me.github.io/Qbot :news: qbot-mini: https://github.com/Charmve/iQuant
Language: Jupyter Notebook
Total stars: 994
Stars trend:
21 May 2023
9pm +2
10pm +2
11pm +7
22 May 2023
12am +8
1am +35
2am +32
3am +33
4am +17
5am +13
6am +27
7am +15
8am +26
#jupyternotebook
#funds, #machinelearning, #pytrade, #quantitativefinance, #quantitativetrading, #quantization, #strategies, #trademarks
guillaumekln/faster-whisper
Faster Whisper transcription with CTranslate2
Language: Python
Total stars: 3284
Stars trend:
19 Jul 2023
1am +3
2am +3
3am +2
4am +20
5am +54
6am +41
7am +17
8am +23
9am +11
10am +8
11am +7
12pm +11
#python
#deeplearning, #inference, #openai, #quantization, #speechrecognition, #speechtotext, #transformer, #whisper
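For the faster-whisper entry above, here is a minimal transcription sketch using the library's documented WhisperModel API; the model size, device, compute type, and audio path are illustrative assumptions, not values taken from the entry.

from faster_whisper import WhisperModel

# "small" on CPU with int8 compute is an illustrative choice; on a GPU,
# device="cuda" with compute_type="float16" is the usual setting.
model = WhisperModel("small", device="cpu", compute_type="int8")

# transcribe() returns a lazy generator of segments plus language-detection info
segments, info = model.transcribe("audio.mp3", beam_size=5)
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")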
dvmazur/mixtral-offloading
Run Mixtral-8x7B models in Colab or consumer desktops
Language: Python
Total stars: 521
Stars trend:
1 Jan 2024
6am +3
7am +0
8am +2
9am +3
10am +0
11am +7
12pm +24
1pm +21
2pm +25
3pm +21
4pm +27
5pm +32
#python
#colabnotebook, #deeplearning, #googlecolab, #languagemodel, #llm, #mixtureofexperts, #offloading, #pytorch, #quantization
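The entry above is about running Mixtral-8x7B on limited hardware through quantization and expert offloading. The sketch below is not dvmazur/mixtral-offloading's own API; it illustrates the general idea with plain Hugging Face transformers + bitsandbytes (4-bit weight quantization plus automatic CPU offload), and the model id, memory limits, and prompt are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # illustrative checkpoint

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # 4-bit weight quantization
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                          # place what fits on the GPU,
    max_memory={0: "14GiB", "cpu": "48GiB"},    # spill the rest to CPU RAM
)

prompt = "Mixture-of-experts models route each token to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))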
hiyouga/LLaMA-Factory
Unify Efficient Fine-tuning of 100+ LLMs
Language: Python
Total stars: 12204
Stars trend:
28 Feb 2024
2am +5
3am +5
4am +0
5am +0
6am +7
7am +8
8am +10
9am +7
10am +2
11am +3
12pm +5
1pm +3
#python
#agent, #baichuan, #chatglm, #finetuning, #generativeai, #gpt, #instructiontuning, #languagemodel, #largelanguagemodels, #llama, #llm, #lora, #mistral, #mixtureofexperts, #peft, #qlora, #quantization, #qwen, #rlhf, #transformers
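LLaMA-Factory unifies LoRA/QLoRA-style fine-tuning behind a single configuration layer. The sketch below is not LLaMA-Factory's own interface; it shows the underlying QLoRA recipe directly with transformers + peft + bitsandbytes, and the model id, target modules, and hyperparameters are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative base model

# Load the frozen base model in 4-bit (the "Q" in QLoRA)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # cast norms, enable input grads

# Attach small trainable LoRA adapters on the attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters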
RahulSChand/gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
Language: JavaScript
Total stars: 878
Stars trend:
5 Oct 2024
9am +5
10am +1
11am +7
12pm +4
1pm +10
2pm +9
3pm +9
4pm +11
5pm +6
6pm +10
7pm +11
#javascript
#ggml, #gpu, #huggingface, #languagemodel, #llama, #llama2, #llamacpp, #llm, #pytorch, #quantization
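gpu_poor estimates throughput and GPU memory for a given model and quantization. As a rough illustration of the kind of arithmetic involved (my own back-of-envelope, not the repo's actual formula), the Python sketch below sizes an fp16 model with a Llama-2-7B-like shape: 32 layers, hidden size 4096.

def estimate_vram_gib(n_params_b=7.0, bytes_per_weight=2,
                      n_layers=32, hidden_size=4096,
                      context_len=4096, batch_size=1, kv_bytes=2):
    # Weights: parameter count times bytes per weight (2 for fp16)
    weights = n_params_b * 1e9 * bytes_per_weight / 1024**3
    # KV cache: K and V tensors per layer, hidden_size values per token
    kv_cache = (2 * n_layers * hidden_size * context_len
                * batch_size * kv_bytes) / 1024**3
    overhead = 0.10 * weights  # rough allowance for activations and buffers
    return weights, kv_cache, weights + kv_cache + overhead

w, kv, total = estimate_vram_gib()
print(f"weights ~{w:.1f} GiB, KV cache ~{kv:.1f} GiB, total ~{total:.1f} GiB")
# ~13 GiB of weights plus ~2 GiB of KV cache at 4k context, ~16 GiB in total;
# 4-bit weight quantization would cut the weight term to roughly a quarter.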
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
Language: Jupyter Notebook
Total stars: 262
Stars trend:
3 Dec 2024
7pm +2
8pm +3
9pm +0
10pm +1
11pm +3
4 Dec 2024
12am +6
1am +9
2am +15
3am +9
4am +11
5am +11
6am +11
#jupyternotebook
#finetuning, #finetuningllms, #inference, #largelanguagemodels, #llm, #python, #quantization
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Language: Python
Total stars: 38120
Stars trend:
14 Jan 2025
12am +1
1am +1
2am +6
3am +11
4am +5
5am +5
6am +9
7am +19
8am +8
9am +10
10am +5
11am +6
#python
#agent, #ai, #chatglm, #finetuning, #gpt, #instructiontuning, #languagemodel, #largelanguagemodels, #llama, #llama3, #llm, #lora, #mistral, #moe, #peft, #qlora, #quantization, #qwen, #rlhf, #transformers
dipampaul17/KVSplit
Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit keys & 4-bit values, reducing memory by 59% with <1% quality loss. Includes benchmarking, visualization, and one-command setup. Optimized for M1/M2/M3 Macs with Metal support.
Language: Python
Total stars: 144
Stars trend:
16 May 2025
7pm +1
8pm +44
9pm +38
10pm +29
11pm +18
#python
#applesilicon, #generativeai, #kvcache, #llamacpp, #llm, #m1, #m2, #m3, #memoryoptimization, #metal, #optimization, #quantization
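A quick sanity check on the quoted ~59% KV cache saving. This is my own reconstruction assuming llama.cpp-style q8_0/q4_0 block formats (32 values per block, one fp16 scale per block), not KVSplit's own accounting:

FP16_BITS = 16.0

def bits_per_value(quant_bits, block_size=32, scale_bits=16):
    # effective storage per value = payload bits + amortized per-block scale
    return quant_bits + scale_bits / block_size

k_bits = bits_per_value(8)   # 8-bit keys   -> 8.5 bits/value
v_bits = bits_per_value(4)   # 4-bit values -> 4.5 bits/value

baseline = 2 * FP16_BITS     # fp16 keys + fp16 values per cached position
mixed = k_bits + v_bits      # differentiated-precision KV cache

print(f"mixed/baseline = {mixed / baseline:.3f}")       # ~0.406
print(f"memory saved   = {1 - mixed / baseline:.1%}")   # ~59.4%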
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Language: Python
Total stars: 49776
Stars trend:
24 May 2025
6am +1
7am +8
8am +5
9am +10
10am +13
11am +24
12pm +14
1pm +12
2pm +14
3pm +20
4pm +25
5pm +17
#python
#agent, #ai, #chatglm, #finetuning, #gpt, #instructiontuning, #languagemodel, #largelanguagemodels, #llama, #llama3, #llm, #lora, #mistral, #moe, #peft, #qlora, #quantization, #qwen, #rlhf, #transformers