unslothai/unsloth
5x faster, 50% less memory LLM finetuning
Language: Python
#ai #finetuning #llm
Stars: 729 Issues: 8 Forks: 19
https://github.com/unslothai/unsloth
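To give a sense of how the library is used, here is a minimal LoRA fine-tuning sketch based on Unsloth's documented FastLanguageModel API; the model name and hyperparameters are illustrative, so check the repo's README for current usage:

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model (model name is illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained,
# which is where the speed and memory savings come from.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# From here the model drops into a standard TRL SFTTrainer /
# Hugging Face Trainer loop for supervised fine-tuning.
```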
babycommando/neuralgraffiti
Live-bending a foundation model’s output at neural network level.
Language: Jupyter Notebook
#finetuning #liquid_neural_networks #llm #neural_network #pytorch #self_attention #transformers
Stars: 217 Issues: 0 Forks: 16
https://github.com/babycommando/neuralgraffiti
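The repo's own notebooks show the real mechanism; purely as an illustration of the general idea (steering a model live by perturbing hidden states through a PyTorch forward hook, not the project's actual module or layer layout), a sketch might look like this:

```python
import torch
import torch.nn as nn

class SprayLayer(nn.Module):
    """Tiny adapter that nudges hidden states toward a learned direction.
    Illustrative only; not the repo's actual module."""
    def __init__(self, hidden_size: int, strength: float = 0.1):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size, bias=False)
        self.strength = strength

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Blend the original activations with a learned perturbation.
        return hidden_states + self.strength * torch.tanh(self.proj(hidden_states))

def attach_spray(model, layer_idx: int, spray: SprayLayer):
    """Hypothetical wiring: hook one transformer block of a Hugging Face
    model (Llama-style layout assumed) so generation is "bent" live
    without retraining the base weights."""
    block = model.model.layers[layer_idx]

    def hook(module, inputs, output):
        # Returning a value from a forward hook replaces the block's output.
        if isinstance(output, tuple):
            return (spray(output[0]),) + output[1:]
        return spray(output)

    return block.register_forward_hook(hook)
```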