jsbroks/awesome-dataset-tools
🔧 A curated list of awesome dataset tools
#annotation_tool #annotations #awesome #awesome_list #datasets #machine_learning
Stars: 94 Issues: 0 Forks: 13
https://github.com/jsbroks/awesome-dataset-tools
chineseGLUE/chineseGLUE
Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpus and leaderboard
Language: Python
#albert #bert #chinese_corpus #datasets #glue #language_understanding #nlp #pre_trained_model
Stars: 109 Issues: 1 Forks: 10
https://github.com/chineseGLUE/chineseGLUE
PKU-Alignment/safe-rlhf
Safe-RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Language: Python
#ai_safety #alpaca #datasets #deepspeed #large_language_models #llama #llm #llms #reinforcement_learning #reinforcement_learning_from_human_feedback #rlhf #safe_reinforcement_learning #safe_reinforcement_learning_from_human_feedback #safe_rlhf #safety #transformers #vicuna
Stars: 279 Issues: 0 Forks: 14
https://github.com/PKU-Alignment/safe-rlhf
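A minimal sketch of the Lagrangian-constrained objective that safe-RL formulations of RLHF generally optimize: maximize the reward model's score subject to a cost (harmfulness) model staying under a budget. The multiplier update, budget value, and toy scores below are illustrative assumptions, not the repository's actual training loop.

```python
# Sketch: Lagrangian relaxation of constrained RLHF.
# Minimize -E[reward] + lambda * (E[cost] - budget) over the policy,
# while taking ascent steps on lambda so it grows when the constraint is violated.
import torch

log_lambda = torch.zeros(1, requires_grad=True)  # Lagrange multiplier in log-space (stays positive)
cost_budget = 0.0                                # assumed constraint threshold d

def lagrangian(reward, cost, log_lam):
    lam = log_lam.exp()
    return -reward.mean() + lam * (cost.mean() - cost_budget)

# Toy per-sample scores standing in for reward- and cost-model outputs on rollouts.
reward = torch.tensor([0.8, 0.5, 0.9])
cost = torch.tensor([0.2, -0.1, 0.4])

loss = lagrangian(reward, cost, log_lambda)
loss.backward()
# The policy would take a descent step on `loss`; the multiplier takes an ascent step.
with torch.no_grad():
    log_lambda += 0.1 * log_lambda.grad
    log_lambda.grad.zero_()
```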