bigscience-workshop/petals
🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
👨‍💻: #Python #Shell #Dockerfile
⭐: 2638 🍴: 64
#bloom #deep_learning #distributed_systems #language_models #large_language_models #machine_learning #neural_networks #pytorch #volunteer_computing #pipeline_parallelism #tensor_parallelism