tomaarsen/attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, and without retraining
Language: Python
#llm #llms #nlp #python #transformers
Stars: 224 Issues: 2 Forks: 11
https://github.com/tomaarsen/attention_sinks
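The project bills itself as a drop-in replacement for the transformers model classes. A minimal sketch of the README's usage pattern; the kwarg values and model name below are illustrative, so verify supported architectures and defaults against the current README:

```python
# Sketch of the drop-in pattern from the attention_sinks README. Model name
# and kwarg values are illustrative; check the repo for current defaults.
from transformers import AutoTokenizer
from attention_sinks import AutoModelForCausalLM  # drop-in for transformers

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    device_map="auto",
    attention_sink_size=4,            # first k tokens kept as "sinks"
    attention_sink_window_size=1020,  # sliding window over recent tokens
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Generation works exactly as with plain transformers; the KV cache stays
# bounded at sink_size + window_size entries no matter how long you generate.
inputs = tokenizer("Long-context generation test:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```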
lucidrains/meshgpt-pytorch
Implementation of MeshGPT, SOTA mesh generation using attention, in PyTorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #mesh_generation #transformers
Stars: 195 Issues: 0 Forks: 7
https://github.com/lucidrains/meshgpt-pytorch
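A rough sketch of the two-stage training the README describes: first tokenize meshes with a residual-VQ autoencoder, then fit a GPT-style transformer over those tokens. Class names follow the README at the time of writing, but the argument values and tensor shapes here are illustrative:

```python
# Sketch of meshgpt-pytorch's two-stage training; values are illustrative.
import torch
from meshgpt_pytorch import MeshAutoencoder, MeshTransformer

# Stage 1: residual-VQ autoencoder turns faces into discrete mesh tokens.
autoencoder = MeshAutoencoder(num_discrete_coors=128)

vertices = torch.randn(2, 121, 3)          # toy batch: 121 vertices per mesh
faces = torch.randint(0, 121, (2, 64, 3))  # 64 triangles as vertex indices

loss = autoencoder(vertices=vertices, faces=faces)  # reconstruction loss
loss.backward()

# Stage 2: GPT-style transformer models the autoencoder's token sequences.
transformer = MeshTransformer(autoencoder, dim=512, max_seq_len=768)
loss = transformer(vertices=vertices, faces=faces)
loss.backward()

face_coords, face_mask = transformer.generate()  # sample a new mesh
```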
kyegomez/MultiModalMamba
A novel implementation fusing ViT with Mamba into a fast, agile, high-performance multimodal model. Powered by Zeta, the simplest AI framework ever.
Language: Python
#ai #artificial_intelligence #attention_mechanism #machine_learning #mamba #ml #pytorch #ssm #torch #transformer_architecture #transformers #zeta
Stars: 264 Issues: 0 Forks: 9
https://github.com/kyegomez/MultiModalMamba
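Take the exact API from the repo itself; below is a generic, library-free sketch of the fusion the description names (ViT patch features projected into the token stream of a sequence model), with a GRU standing in for the Mamba/SSM block:

```python
# Generic sketch of ViT-features-into-a-sequence-model fusion; this is NOT
# MultiModalMamba's API. A real Mamba block (e.g. from the mamba-ssm
# package) would replace the stand-in GRU below.
import torch
import torch.nn as nn

class ToyMultiModalFusion(nn.Module):
    def __init__(self, vocab_size=32000, dim=512, patch_dim=768):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.img_proj = nn.Linear(patch_dim, dim)      # project ViT patch features
        self.seq = nn.GRU(dim, dim, batch_first=True)  # stand-in for Mamba/SSM
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, text_ids, vit_patches):
        # Prepend projected image patches to the text embeddings so a single
        # sequence model runs over both modalities in one pass.
        x = torch.cat([self.img_proj(vit_patches), self.tok_emb(text_ids)], dim=1)
        h, _ = self.seq(x)
        return self.lm_head(h[:, vit_patches.size(1):])  # logits for text positions

model = ToyMultiModalFusion()
logits = model(torch.randint(0, 32000, (1, 16)), torch.randn(1, 196, 768))
print(logits.shape)  # torch.Size([1, 16, 32000])
```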
lucidrains/self-rewarding-lm-pytorch
Implementation of the training framework proposed in "Self-Rewarding Language Models", from Meta AI
Language: Python
#artificial_intelligence #beyond_human_data #deep_learning #self_rewarding #transformers
Stars: 367 Issues: 0 Forks: 17
https://github.com/lucidrains/self-rewarding-lm-pytorch
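A toy sketch of the loop the paper proposes: the model generates candidates, scores them with itself as judge, and the best/worst become DPO preference pairs. The generate and judge_score stubs below are placeholders, not this repo's API:

```python
# Illustrative sketch of the Self-Rewarding LM loop (generate -> self-judge
# -> build DPO pairs); not this repo's API. The stubs stand in for real
# model calls.
import random

def generate(prompt: str, n: int = 4) -> list[str]:
    # Stand-in: a real implementation samples n completions from the LM.
    return [f"{prompt} [candidate {i}]" for i in range(n)]

def judge_score(prompt: str, response: str) -> float:
    # Stand-in: the same LM scores the response via an LLM-as-a-Judge
    # rubric prompt (the paper uses an additive 0-5 rubric).
    return random.uniform(0.0, 5.0)

def build_preference_pairs(prompts: list[str]) -> list[tuple[str, str, str]]:
    pairs = []
    for p in prompts:
        scored = sorted(generate(p), key=lambda r: judge_score(p, r))
        # Highest-scored completion is "chosen", lowest is "rejected"; these
        # pairs feed a DPO step, then the loop repeats with the improved
        # model acting as both generator and judge.
        pairs.append((p, scored[-1], scored[0]))
    return pairs

print(build_preference_pairs(["Explain attention sinks."]))
```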
FoundationVision/VAR
[NeurIPS 2024 Best Paper] [GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction"
Language: Python
#auto_regressive_model #diffusion_models #image_generation #transformers
Stars: 440 Issues: 6 Forks: 10
https://github.com/FoundationVision/VAR
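A conceptual sketch of next-scale prediction, the paper's core idea: each autoregressive step emits the entire token map at the next resolution, conditioned on all coarser maps, instead of one token at a time. The scale list and codebook size are illustrative, and predict_next_scale is a stub for the transformer:

```python
# Conceptual sketch of next-scale prediction; not FoundationVision/VAR's API.
import torch

scales = [1, 2, 4, 8, 16]  # side lengths of successive token maps
vocab = 4096               # VQ codebook size (illustrative)

def predict_next_scale(context: torch.Tensor, side: int) -> torch.Tensor:
    # Stand-in for a transformer forward pass: returns a side x side map of
    # code indices given the flattened coarser-scale context.
    return torch.randint(0, vocab, (side, side))

context = torch.empty(0, dtype=torch.long)
for side in scales:
    token_map = predict_next_scale(context, side)        # one step = one scale
    context = torch.cat([context, token_map.flatten()])  # coarse maps condition finer ones

print(context.shape)  # 1 + 4 + 16 + 64 + 256 = 341 tokens total
```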
elicit/machine-learning-list
A curriculum for learning about foundation models, from scratch to the frontier
#artificial_intelligence #language_model #machine_learning #transformers
Stars: 188 Issues: 0 Forks: 9
https://github.com/elicit/machine-learning-list
AdityaNG/kan-gpt
The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling
Language: Python
#gpt #kanformers #kolmogorov_arnold_networks #kolmogorov_arnold_representation #llm #text_generation #transformers
Stars: 217 Issues: 2 Forks: 11
https://github.com/AdityaNG/kan-gpt
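A minimal sketch of the core idea: a KAN layer replaces the transformer MLP's fixed activations with learnable univariate functions on each edge. Here a sine basis stands in for the B-splines real KANs typically use; nothing below is kan-gpt's actual API:

```python
# Toy KAN-style layer: one learnable univariate function per (input, output)
# edge, parameterized over a fixed sine basis. Illustrative only; real KANs
# usually use learnable B-splines, and this is not kan-gpt's code.
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    def __init__(self, d_in, d_out, n_basis=8):
        super().__init__()
        self.register_buffer("freqs", torch.arange(1, n_basis + 1).float())
        # Coefficients of each edge's function over the basis.
        self.coef = nn.Parameter(torch.randn(d_in, d_out, n_basis) * 0.1)

    def forward(self, x):                                  # x: (..., d_in)
        basis = torch.sin(x.unsqueeze(-1) * self.freqs)    # (..., d_in, n_basis)
        # Evaluate every edge function and sum over inputs.
        return torch.einsum("...ib,iob->...o", basis, self.coef)

layer = ToyKANLayer(64, 64)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```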
buaacyw/MeshAnythingV2
From anything to mesh like human artists. Official impl. of "MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization"
Language: Python
#3d #3d_generation #generative_model #mesh #transformers
Stars: 305 Issues: 3 Forks: 12
https://github.com/buaacyw/MeshAnythingV2
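A toy illustration of why adjacent mesh tokenization shortens sequences: when consecutive faces share an edge, only the one new vertex needs a token. This is a simplification of the paper's scheme, not the repo's code:

```python
# Toy version of the Adjacent Mesh Tokenization idea (simplified, not the
# repo's implementation): adjacent faces reuse the shared edge, so the
# autoregressive mesh generator emits far fewer tokens per face.
def naive_tokens(faces):
    return [v for face in faces for v in face]  # 3 vertex tokens per face

def amt_tokens(faces):
    tokens = list(faces[0])
    for prev, cur in zip(faces, faces[1:]):
        shared = set(prev) & set(cur)
        new = [v for v in cur if v not in shared]
        # Exactly one new vertex: the shared edge is implied by the previous
        # face, so one token suffices; otherwise restart the strip with "&".
        tokens += new if len(new) == 1 else ["&"] + list(cur)
    return tokens

faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (7, 8, 9)]
print(len(naive_tokens(faces)), len(amt_tokens(faces)))  # 12 vs. 9
```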
edwko/OuteTTS
Interface for OuteTTS models.
Language: Python
#gguf #llama #text_to_speech #transformers #tts
Stars: 278 Issues: 6 Forks: 13
https://github.com/edwko/OuteTTS
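The README is the authority on the actual interface; the stubs below only sketch the pipeline the tags imply (a llama-family LM emits discrete audio-codec tokens, and a codec decoder renders the waveform). All names are placeholders:

```python
# Illustrative stubs for the pipeline implied by the tags (#llama, #gguf,
# #text_to_speech); these are NOT OuteTTS's actual interface.
def lm_generate_audio_tokens(text: str) -> list[int]:
    # Stand-in: a llama-family model (optionally a GGUF quantization)
    # autoregressively generates codec token ids conditioned on the text.
    return [17, 204, 993, 41]

def decode_to_waveform(tokens: list[int]) -> list[float]:
    # Stand-in: a neural audio codec reconstructs PCM samples from tokens
    # (assuming ~320 samples per token for illustration).
    return [0.0] * (len(tokens) * 320)

samples = decode_to_waveform(lm_generate_audio_tokens("Hello from OuteTTS."))
print(len(samples))  # 1280
```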
babycommando/neuralgraffiti
Live-bending a foundation model’s output at neural network level.
Language: Jupyter Notebook
#finetuning #liquid_neural_networks #llm #neural_network #pytorch #self_attention #transformers
Stars: 217 Issues: 0 Forks: 16
https://github.com/babycommando/neuralgraffiti
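A sketch of the core trick, with toy modules standing in for a real transformer (this is not the notebook's exact code): splice a small trainable layer into the hidden states just before the LM head, keep the foundation model frozen, and train only the splice:

```python
# Sketch of "live-bending": a tiny trainable modulator over a frozen model's
# hidden states, inspired by liquid neural networks. Toy stand-ins only.
import torch
import torch.nn as nn

class SprayLayer(nn.Module):
    """Small recurrent-style modulator that carries state across calls."""
    def __init__(self, dim):
        super().__init__()
        self.mix = nn.Linear(dim * 2, dim)
        self.state = None

    def forward(self, h):                     # h: (batch, seq, dim)
        if self.state is None:
            self.state = torch.zeros_like(h[:, :1])
        s = self.state.expand_as(h)
        h = h + torch.tanh(self.mix(torch.cat([h, s], dim=-1)))  # bend hidden states
        self.state = h[:, -1:].detach()       # persist state between generations
        return h

dim, vocab = 64, 100
backbone = nn.Embedding(vocab, dim)           # stand-in for a frozen transformer
lm_head = nn.Linear(dim, vocab)
for p in list(backbone.parameters()) + list(lm_head.parameters()):
    p.requires_grad = False                   # the foundation model stays frozen

spray = SprayLayer(dim)                       # only these weights would train
ids = torch.randint(0, vocab, (1, 8))
logits = lm_head(spray(backbone(ids)))
print(logits.shape)  # torch.Size([1, 8, 100])
```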