Code Stars
Code Stars alerts you to GitHub repos gaining stars rapidly. Stay ahead of the curve and discover trending projects before they go viral! #AI #GitHub #OpenSource #Tech #MachineLearning #Python #Programming #Java #Javascript #React #Docker #Devops
UFund-Me/Qbot
[πŸ”₯ updating...] Automated quantitative trading bot. Qbot is an AI-oriented quantitative investment platform that aims to realize the potential of AI technologies in quantitative investment. https://ufund-me.github.io/Qbot πŸ“° qbot-mini: https://github.com/Charmve/iQuant
Language: Jupyter Notebook
Total stars: 994
Stars trend:
21 May 2023
 9pm β–Ž +2
10pm β–Ž +2
11pm β–‰ +7
22 May 2023
12am β–ˆ +8
 1am β–ˆβ–ˆβ–ˆβ–ˆβ– +35
 2am β–ˆβ–ˆβ–ˆβ–ˆ +32
 3am β–ˆβ–ˆβ–ˆβ–ˆβ– +33
 4am β–ˆβ–ˆβ– +17
 5am β–ˆβ–‹ +13
 6am β–ˆβ–ˆβ–ˆβ– +27
 7am β–ˆβ–‰ +15
 8am β–ˆβ–ˆβ–ˆβ–Ž +26
#jupyternotebook
#funds, #machinelearning, #pytrade, #quantitativefinance, #quantitativetrading, #quantization, #strategies, #trademarks
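For context, a toy illustration of the kind of rule an automated trading platform like this executes (a generic sketch with made-up prices, not Qbot's API):

```python
import pandas as pd

# Generic moving-average crossover signal: go long when the fast average
# crosses above the slow one, stay flat otherwise. Window sizes and the
# price series are arbitrary placeholders.
def ma_crossover_signal(close: pd.Series, fast: int = 5, slow: int = 20) -> pd.Series:
    fast_ma = close.rolling(fast).mean()
    slow_ma = close.rolling(slow).mean()
    return (fast_ma > slow_ma).astype(int)  # 1 = long, 0 = flat

prices = pd.Series([10, 11, 12, 11, 13, 14, 13, 15, 16, 15] * 3, dtype=float)
print(ma_crossover_signal(prices).tail())
```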
guillaumekln/faster-whisper
Faster Whisper transcription with CTranslate2
Language: Python
Total stars: 3284
Stars trend:
19 Jul 2023
 1am ▍ +3
 2am ▍ +3
 3am β–Ž +2
 4am β–ˆβ–ˆβ–Œ +20
 5am β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š +54
 6am β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– +41
 7am β–ˆβ–ˆβ– +17
 8am β–ˆβ–ˆβ–‰ +23
 9am β–ˆβ– +11
10am β–ˆ +8
11am β–‰ +7
12pm β–ˆβ– +11

#python
#deeplearning, #inference, #openai, #quantization, #speechrecognition, #speechtotext, #transformer, #whisper
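A minimal usage sketch of the library's transcription API (the model size, device, and audio file are placeholders; CTranslate2's int8 mode is what gives the speed and memory win):

```python
from faster_whisper import WhisperModel

# int8 quantization via CTranslate2 cuts memory use and speeds up inference
model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=5)
print(f"Detected language: {info.language}")
for seg in segments:  # a generator: transcription happens lazily
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```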
dvmazur/mixtral-offloading
Run Mixtral-8x7B models in Colab or on consumer desktops
Language: Python
Total stars: 521
Stars trend:
1 Jan 2024
6am ▍ +3
7am +0
8am β–Ž +2
9am ▍ +3
10am +0
11am β–‰ +7
12pm β–ˆβ–ˆβ–ˆ +24
1pm β–ˆβ–ˆβ–‹ +21
2pm β–ˆβ–ˆβ–ˆβ– +25
3pm β–ˆβ–ˆβ–‹ +21
4pm β–ˆβ–ˆβ–ˆβ– +27
5pm β–ˆβ–ˆβ–ˆβ–ˆ +32

#python
#colabnotebook, #deeplearning, #googlecolab, #languagemodel, #llm, #mixtureofexperts, #offloading, #pytorch, #quantization
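The trick, in outline: Mixtral routes each token through only 2 of its 8 experts per layer, so most expert weights sit idle at any moment and can be parked in CPU RAM, with a small cache of "hot" experts on the GPU. A conceptual sketch (names and structure are assumptions, not the repo's actual code):

```python
import collections
import torch

class ExpertCache:
    """Keep at most `capacity` experts on the GPU; fetch the rest on demand."""

    def __init__(self, experts_cpu, capacity=4, device="cuda"):
        self.experts_cpu = experts_cpu        # expert_id -> dict of CPU tensors
        self.capacity = capacity
        self.device = device
        self.gpu = collections.OrderedDict()  # LRU order: oldest first

    def get(self, expert_id):
        if expert_id in self.gpu:
            self.gpu.move_to_end(expert_id)   # mark as recently used
            return self.gpu[expert_id]
        if len(self.gpu) >= self.capacity:
            self.gpu.popitem(last=False)      # evict least recently used
        weights = {name: t.to(self.device, non_blocking=True)
                   for name, t in self.experts_cpu[expert_id].items()}
        self.gpu[expert_id] = weights
        return weights
```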
πŸ‘2
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs
Language: Python
Total stars: 12204
Stars trend:
28 Feb 2024
2am β–‹ +5
3am β–‹ +5
4am +0
5am +0
6am β–‰ +7
7am β–ˆ +8
8am β–ˆβ–Ž +10
9am β–‰ +7
10am β–Ž +2
11am ▍ +3
12pm β–‹ +5
1pm ▍ +3

#python
#agent, #baichuan, #chatglm, #finetuning, #generativeai, #gpt, #instructiontuning, #languagemodel, #largelanguagemodels, #llama, #llm, #lora, #mistral, #mixtureofexperts, #peft, #qlora, #quantization, #qwen, #rlhf, #transformers
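The project wraps Hugging Face Transformers and PEFT behind a unified config and CLI; for orientation, the LoRA setup it automates looks roughly like this when done with those libraries directly (the model name and target modules are illustrative):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any causal LM from the Hub works here; Llama-2-7B is just an example.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```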
πŸ‘3
RahulSChand/gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
Language: JavaScript
Total stars: 878
Stars trend:
5 Oct 2024
9am β–‹ +5
10am ▏ +1
11am β–‰ +7
12pm β–Œ +4
1pm β–ˆβ–Ž +10
2pm β–ˆβ– +9
3pm β–ˆβ– +9
4pm β–ˆβ– +11
5pm β–Š +6
6pm β–ˆβ–Ž +10
7pm β–ˆβ– +11

#javascript
#ggml, #gpu, #huggingface, #languagemodel, #llama, #llama2, #llamacpp, #llm, #pytorch, #quantization
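A back-of-envelope version of what such a calculator estimates (the function and example numbers below are assumptions for illustration, not the tool's exact model): inference memory is dominated by the weights plus the KV cache.

```python
def llm_memory_gib(n_params_b, bytes_per_param,
                   n_layers, d_model, context_len, batch=1, kv_bytes=2):
    weights = n_params_b * 1e9 * bytes_per_param
    # KV cache: one K and one V vector of size d_model per layer per token
    kv_cache = 2 * n_layers * d_model * context_len * batch * kv_bytes
    return (weights + kv_cache) / 1024**3

# e.g. a 7B model in 4-bit (0.5 bytes/param) with a 4096-token fp16 cache:
print(f"{llm_memory_gib(7, 0.5, 32, 4096, 4096):.1f} GiB")  # ~5.3 GiB
```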
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
Language: Jupyter Notebook
Total stars: 262
Stars trend:
3 Dec 2024
7pm β–Ž +2
8pm ▍ +3
9pm +0
10pm ▏ +1
11pm ▍ +3
4 Dec 2024
12am β–Š +6
1am β–ˆβ– +9
2am β–ˆβ–‰ +15
3am β–ˆβ– +9
4am β–ˆβ– +11
5am β–ˆβ– +11
6am β–ˆβ– +11

#jupyternotebook
#finetuning, #finetuningllms, #inference, #largelanguagemodels, #llm, #python, #quantization
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Language: Python
Total stars: 38120
Stars trend:
14 Jan 2025
12am ▏ +1
1am ▏ +1
2am β–Š +6
3am β–ˆβ– +11
4am β–‹ +5
5am β–‹ +5
6am β–ˆβ– +9
7am β–ˆβ–ˆβ– +19
8am β–ˆ +8
9am β–ˆβ–Ž +10
10am β–‹ +5
11am β–Š +6

#python
#agent, #ai, #chatglm, #finetuning, #gpt, #instructiontuning, #languagemodel, #largelanguagemodels, #llama, #llama3, #llm, #lora, #mistral, #moe, #peft, #qlora, #quantization, #qwen, #rlhf, #transformers
dipampaul17/KVSplit
Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit keys & 4-bit values, reducing memory by 59% with <1% quality loss. Includes benchmarking, visualization, and one-command setup. Optimized for M1/M2/M3 Macs with Metal support.
Language: Python
Total stars: 144
Stars trend:
16 May 2025
7pm ▏ +1
8pm β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ +44
9pm β–ˆβ–ˆβ–ˆβ–ˆβ–Š +38
10pm β–ˆβ–ˆβ–ˆβ–‹ +29
11pm β–ˆβ–ˆβ–Ž +18

#python
#applesilicon, #generativeai, #kvcache, #llamacpp, #llm, #m1, #m2, #m3, #memoryoptimization, #metal, #optimization, #quantization
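Rough arithmetic behind the quoted ~59% figure (the overhead term below is an assumption, not the repo's exact accounting):

```python
fp16_bits = 16 + 16            # bits per (key, value) element pair in FP16
k8v4_bits = 8 + 4              # 8-bit keys + 4-bit values
overhead = 0.1 * k8v4_bits     # assume ~10% extra for quantization scales
reduction = 1 - (k8v4_bits + overhead) / fp16_bits
print(f"{reduction:.0%}")      # -> 59%
```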
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Language: Python
Total stars: 49776
Stars trend:
24 May 2025
6am ▏ +1
7am β–ˆ +8
8am β–‹ +5
9am β–ˆβ–Ž +10
10am β–ˆβ–‹ +13
11am β–ˆβ–ˆβ–ˆ +24
12pm β–ˆβ–Š +14
1pm β–ˆβ–Œ +12
2pm β–ˆβ–Š +14
3pm β–ˆβ–ˆβ–Œ +20
4pm β–ˆβ–ˆβ–ˆβ– +25
5pm β–ˆβ–ˆβ– +17

#python
#agent, #ai, #chatglm, #finetuning, #gpt, #instructiontuning, #languagemodel, #largelanguagemodels, #llama, #llama3, #llm, #lora, #mistral, #moe, #peft, #qlora, #quantization, #qwen, #rlhf, #transformers