huggingface/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.
👨💻: #Python #Shell #Dockerfile
⭐: 53410 🍴: 12690
#nlp #natural_language_processing #natural_language_understanding #pytorch #language_model #natural_language_generation #tensorflow #bert #language_models #pytorch_transformers #nlp_library #transformer #model_hub #pretrained_models #jax #flax #seq2seq #speech_recognition #hacktoberfest #python
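As a quick orientation, here is a minimal usage sketch of the high-level pipeline API; the task names are standard, but the default models pulled from the Hub vary by version.

```python
# Minimal sketch of the transformers pipeline API (default models vary by version).
from transformers import pipeline

# Sentiment analysis with the default English classifier from the Hub.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes state-of-the-art NLP easy to use."))

# Extractive question answering over a short context.
qa = pipeline("question-answering")
print(qa(question="Which frameworks are supported?",
         context="Transformers supports PyTorch, TensorFlow, and JAX."))
```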
deepset-ai/haystack
Haystack is an open-source NLP framework that leverages Transformer models. It enables developers to implement production-ready neural search, question answering, semantic document search, and summarization for a wide range of applications.
👨💻: #Python #Jupyter_Notebook #Dockerfile
⭐: 3457 🍴: 595
#nlp #question_answering #bert #transfer_learning #language_model #pytorch #semantic_search #neural_search #squad #elasticsearch #dpr #information_retrieval #summarization #search_engine #transformers #natural_language_processing #machine_learning #ai #python
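A rough sketch of an extractive QA pipeline, assuming the Haystack 1.x-style API (InMemoryDocumentStore, TfidfRetriever, FARMReader, ExtractiveQAPipeline); module paths and class names have moved between releases, and the reader checkpoint is just one SQuAD-style example.

```python
# Rough sketch of an extractive QA pipeline (Haystack 1.x-style API; names/paths vary by version).
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import TfidfRetriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

document_store = InMemoryDocumentStore()
document_store.write_documents([
    {"content": "Haystack builds neural search and question answering pipelines."},
    {"content": "It can use Elasticsearch or an in-memory store as the document backend."},
])

retriever = TfidfRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")  # any SQuAD-style reader
pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)

result = pipeline.run(query="What does Haystack build?",
                      params={"Retriever": {"top_k": 2}, "Reader": {"top_k": 1}})
print(result["answers"][0].answer)
```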
EleutherAI/gpt-neox
An implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
👨💻: #Python #Cpp #Cuda
⭐: 1610 🍴: 176
#deepspeed_library #gpt_3 #transformers #language_model
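Not GPT-NeoX or DeepSpeed code, but a toy illustration of the tensor model parallelism it relies on: one large linear layer is split column-wise across two GPUs and the partial outputs are concatenated.

```python
# Toy illustration of tensor model parallelism (NOT GPT-NeoX/DeepSpeed code):
# one large linear layer is split column-wise across two GPUs and the partial
# outputs are concatenated to recover the full result.
import torch
import torch.nn as nn

class ColumnParallelLinear(nn.Module):
    def __init__(self, in_features, out_features, devices=("cuda:0", "cuda:1")):
        super().__init__()
        assert out_features % len(devices) == 0
        shard = out_features // len(devices)
        self.devices = devices
        # Each device holds one slice of the weight matrix.
        self.shards = nn.ModuleList(
            nn.Linear(in_features, shard).to(dev) for dev in devices
        )

    def forward(self, x):
        # Each GPU computes its slice of the output; results are gathered on device 0.
        parts = [layer(x.to(dev)) for layer, dev in zip(self.shards, self.devices)]
        return torch.cat([p.to(self.devices[0]) for p in parts], dim=-1)

if __name__ == "__main__":
    # Requires two GPUs as written; swap the devices tuple for a CPU-only demo.
    layer = ColumnParallelLinear(1024, 4096)
    y = layer(torch.randn(8, 1024).to("cuda:0"))
    print(y.shape)  # torch.Size([8, 4096])
```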
EleutherAI/gpt-neox
An implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
👨💻: #Python #Cpp #Cuda
⭐: 2929 🍴: 385
#deepspeed_library #gpt_3 #transformers #language_model
LAION-AI/Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
👨💻: #Jupyter_Notebook #Python #TypeScript
⭐: 3514 🍴: 289
#chatgpt #language_model #rlhf #ai #assistant #discord_bot #machine_learning #nextjs #python
EleutherAI/gpt-neox
An implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
👨💻: #Python #Cpp #Cuda
⭐: 3585 🍴: 483
#deepspeed_library #gpt_3 #transformers #language_model
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
👨💻: #Python #Cuda #Cpp
⭐: 1736 🍴: 171
#attention_mechanism #deep_learning #gpt #gpt_2 #gpt_3 #language_model #linear_attention #lstm #pytorch #rnn #transformer #transformers #rwkv #chatgpt
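A deliberately simplified toy of the underlying idea, not the actual RWKV formulation: a linear-attention-style recurrence keeps a decayed running sum of key-weighted values, so each step needs only O(1) state instead of attending over the whole context.

```python
# Deliberately simplified toy of a linear-attention-style recurrence
# (NOT the actual RWKV formulation): a decayed running sum of key-weighted
# values gives RNN-style O(1)-state inference instead of full attention.
import torch

def toy_recurrent_mix(k, v, decay=0.9):
    # k, v: (seq_len, dim); produces one output per step using only a running state.
    num = torch.zeros(v.shape[1])   # decayed weighted sum of values
    den = torch.zeros(v.shape[1])   # decayed normaliser
    outputs = []
    for t in range(k.shape[0]):
        w = torch.exp(k[t])         # positive per-channel "importance" weight
        num = decay * num + w * v[t]
        den = decay * den + w
        outputs.append(num / (den + 1e-8))
    return torch.stack(outputs)

out = toy_recurrent_mix(torch.randn(16, 8), torch.randn(16, 8))
print(out.shape)  # torch.Size([16, 8])
```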
huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
👨💻: #Python #Cuda #Shell
⭐: 81827 🍴: 18137
#nlp #natural_language_processing #pytorch #language_model #tensorflow #bert #language_models #pytorch_transformers #nlp_library #transformer #model_hub #pretrained_models #jax #flax #seq2seq #speech_recognition #hacktoberfest #python #machine_learning #deep_learning
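Beyond the pipeline helper sketched earlier, the lower-level Auto classes load any Hub checkpoint by name; bert-base-uncased here is just a familiar example.

```python
# Lower-level Auto* API: load a tokenizer and model by Hub checkpoint name.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, Transformers!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```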
microsoft/LMOps
General technology for enabling AI capabilities with LLMs and multimodal LLMs (MLLMs).
👨💻: #Python #Shell #Cuda
⭐: 1071 🍴: 45
#nlp #agi #gpt #llm #lm #pretraining #prompt #lmops #promptist #x_prompt #language_model
mlfoundations/open_clip
An open-source implementation of CLIP (Contrastive Language-Image Pre-training).
👨💻: #Python #Makefile
⭐: 3428 🍴: 369
#deep_learning #pytorch #computer_vision #language_model #multi_modal_learning #contrastive_loss #zero_shot_classification #pretrained_models
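A hedged sketch of zero-shot classification with open_clip, assuming the create_model_and_transforms and get_tokenizer entry points with the ViT-B-32 architecture and the 'openai' pretrained tag (available names and tags depend on the installed version).

```python
# Sketch of zero-shot image classification with open_clip
# (assumes the ViT-B-32 architecture and the 'openai' pretrained weights tag).
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)          # any local image
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability per caption
```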