lucidrains/parti-pytorch
Implementation of Parti, Google's pure attention-based text-to-image neural network, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #deep_learning #text_to_image #transformers
Stars: 143 Issues: 0 Forks: 2
https://github.com/lucidrains/parti-pytorch
lucidrains/audiolm-pytorch
Implementation of AudioLM, a Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #audio_synthesis #deep_learning #transformers
Stars: 121 Issues: 1 Forks: 1
https://github.com/lucidrains/audiolm-pytorch
lucidrains/make-a-video-pytorch
Implementation of Make-A-Video, new SOTA text to video generator from Meta AI, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #axial_convolutions #deep_learning #text_to_video
Stars: 331 Issues: 1 Forks: 15
https://github.com/lucidrains/make-a-video-pytorch
lucidrains/robotic-transformer-pytorch
Implementation of RT1 (Robotic Transformer) in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #robotics #transformers
Stars: 128 Issues: 1 Forks: 3
https://github.com/lucidrains/robotic-transformer-pytorch
lucidrains/muse-maskgit-pytorch
Implementation of Muse: Text-to-Image Generation via Masked Generative Transformers, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #text_to_image #transformers
Stars: 119 Issues: 1 Forks: 6
https://github.com/lucidrains/muse-maskgit-pytorch
lucidrains/musiclm-pytorch
Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch
#artificial_intelligence #attention_mechanisms #deep_learning #music_synthesis #transformers
Stars: 277 Issues: 1 Forks: 8
https://github.com/lucidrains/musiclm-pytorch
lucidrains/toolformer-pytorch
Implementation of Toolformer, Language Models That Can Use Tools, by MetaAI
Language: Python
#api_calling #artificial_intelligence #attention_mechanisms #deep_learning #transformers
Stars: 419 Issues: 2 Forks: 13
https://github.com/lucidrains/toolformer-pytorch
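Toolformer's core idea is a language model that emits inline API-call markers, which are then executed and their results spliced back into the text. As an illustration only (not this repo's API; the marker format, `execute_tool_calls`, and the `tools` dict are hypothetical), a minimal sketch of the execute-and-splice step:

```python
import re

# Matches markers like [Calculator(2+3)] — a simplified, hypothetical call format.
CALL_RE = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_tool_calls(text, tools):
    """Replace each [Tool(args)] marker with '[Tool(args) -> result]'.

    `tools` maps tool names to callables taking the raw argument string.
    """
    def repl(match):
        name, arg = match.group(1), match.group(2)
        result = tools[name](arg)  # run the tool on the captured argument
        return f"[{name}({arg}) -> {result}]"
    return CALL_RE.sub(repl, text)
```

In the paper, the model is fine-tuned to emit such calls itself; this sketch only shows the deterministic post-processing side.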
lucidrains/recurrent-memory-transformer-pytorch
Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #long_context #memory #recurrence #transformers
Stars: 223 Issues: 0 Forks: 4
https://github.com/lucidrains/recurrent-memory-transformer-pytorch
lucidrains/MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #learned_tokenization #long_context #transformers
Stars: 204 Issues: 0 Forks: 10
https://github.com/lucidrains/MEGABYTE-pytorch
lucidrains/soundstorm-pytorch
Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanism #audio_generation #deep_learning #non_autoregressive #transformers
Stars: 181 Issues: 0 Forks: 6
https://github.com/lucidrains/soundstorm-pytorch
kyegomez/LongNet
Implementation of plug-and-play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
Language: Python
#artificial_intelligence #attention #attention_is_all_you_need #attention_mechanisms #chatgpt #context_length #gpt3 #gpt4 #machine_learning #transformer
Stars: 381 Issues: 4 Forks: 55
https://github.com/kyegomez/LongNet
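LongNet's scaling trick is dilated attention: the sequence is split into segments, and each query attends only to every r-th key within its own segment, shrinking the quadratic attention cost. A minimal single-head NumPy sketch of one dilation branch (illustrative only, not this repo's API; the paper combines several segment/dilation configurations and multiple heads):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(q, k, v, segment_len, dilation):
    """One dilated-attention branch. q, k, v: (seq_len, dim).

    Requires seq_len % segment_len == 0 and segment_len % dilation == 0.
    """
    n, d = q.shape

    def sparsify(x):
        x = x.reshape(n // segment_len, segment_len, d)
        return x[:, ::dilation, :]  # keep every dilation-th token per segment

    qs, ks, vs = (sparsify(t) for t in (q, k, v))
    scores = qs @ ks.transpose(0, 2, 1) / np.sqrt(d)  # per-segment attention
    out = softmax(scores) @ vs
    # Scatter outputs back to their original positions; skipped slots stay zero
    # (in the full method another branch with a different dilation covers them).
    full = np.zeros((n // segment_len, segment_len, d))
    full[:, ::dilation, :] = out
    return full.reshape(n, d)
```

Per branch the cost is O(n · segment_len / dilation) rather than O(n²), which is what makes billion-token contexts plausible.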
lucidrains/meshgpt-pytorch
Implementation of MeshGPT, SOTA Mesh generation using Attention, in Pytorch
Language: Python
#artificial_intelligence #attention_mechanisms #deep_learning #mesh_generation #transformers
Stars: 195 Issues: 0 Forks: 7
https://github.com/lucidrains/meshgpt-pytorch
kyegomez/MultiModalMamba
A novel implementation fusing a ViT with Mamba into a fast, agile, high-performance multi-modal model. Powered by Zeta, the simplest AI framework ever.
Language: Python
#ai #artificial_intelligence #attention_mechanism #machine_learning #mamba #ml #pytorch #ssm #torch #transformer_architecture #transformers #zeta
Stars: 264 Issues: 0 Forks: 9
https://github.com/kyegomez/MultiModalMamba
thu-ml/SageAttention
Quantized Attention that achieves speedups of 2.1x and 2.7x compared to FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models.
Language: Python
#attention #inference_acceleration #llm #quantization
Stars: 145 Issues: 6 Forks: 3
https://github.com/thu-ml/SageAttention
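The speedup comes from quantizing Q and K to INT8 so the score matmul can run on integer units, then dequantizing before the softmax. A rough NumPy sketch of that idea (illustrative only; the actual kernel is fused CUDA and does more, such as smoothing K, and none of these function names come from the repo):

```python
import numpy as np

def quantize_int8(x):
    """Per-tensor symmetric INT8 quantization: max |x| maps to 127."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.round(x / scale).astype(np.int8), scale

def sage_like_attention(q, k, v):
    """Attention with INT8-quantized Q/K score matmul; V kept in full precision."""
    qi, q_scale = quantize_int8(q)
    ki, k_scale = quantize_int8(k)
    # Integer matmul, then dequantize the scores with the combined scale.
    scores = (qi.astype(np.int32) @ ki.astype(np.int32).T) * (q_scale * k_scale)
    scores = scores / np.sqrt(q.shape[-1])
    p = np.exp(scores - scores.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v
```

The per-element quantization error is at most half the scale, which is why the softmax output, and hence end-to-end metrics, can stay essentially unchanged.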