huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
👨‍💻: #Python #Cuda #Shell
⭐: 81827 🍴: 18137
#nlp #natural_language_processing #pytorch #language_model #tensorflow #bert #language_models #pytorch_transformers #nlp_library #transformer #model_hub #pretrained_models #jax #flax #seq2seq #speech_recognition #hacktoberfest #python #machine_learning #deep_learning
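
The headline feature is the `pipeline` API: one call pulls a pretrained checkpoint from the model hub and wires up the tokenizer and model around it. A minimal sketch; the default sentiment-analysis checkpoint is whatever the installed version ships, downloaded on first use:

```python
# pip install transformers torch
from transformers import pipeline

# Loads a default pretrained sentiment-analysis model + tokenizer
# from the Hugging Face model hub on first use.
classifier = pipeline("sentiment-analysis")

# Inference on a small batch of sentences.
results = classifier([
    "Transformers makes state-of-the-art NLP easy to use.",
    "Dependency conflicts are frustrating.",
])
for r in results:
    print(r["label"], round(r["score"], 3))
```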
bigscience-workshop/petals
🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
👨‍💻: #Python #Shell #Dockerfile
⭐: 4149 🍴: 132
#bloom #deep_learning #distributed_systems #language_models #large_language_models #machine_learning #neural_networks #pytorch #volunteer_computing #pipeline_parallelism #tensor_parallelism
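
The advertised workflow swaps a local `transformers` model class for a distributed one whose forward passes are served by volunteer peers, each holding a shard of the model. A sketch following the project's README of roughly this era; the class name, package, and swarm checkpoint id (`bigscience/bloom-petals`) have changed across releases, so treat them as assumptions and check the repo for current instructions:

```python
# pip install petals  (assumed package name; see the repo for current setup)
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM  # class name per the README of this era

MODEL_NAME = "bigscience/bloom-petals"  # public swarm checkpoint id at the time

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

# Transformer blocks run remotely on peers holding different shards;
# only the embeddings and LM head execute locally.
inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```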