maraoz/gpt-scrolls
A collaborative collection of open-source safe GPT-3 prompts that work well
#generator #gpt_3 #language_model #openai #safety #transformer
Stars: 123 Issues: 4 Forks: 7
https://github.com/maraoz/gpt-scrolls
PKU-Alignment/safe-rlhf
Safe-RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Language: Python
#ai_safety #alpaca #datasets #deepspeed #large_language_models #llama #llm #llms #reinforcement_learning #reinforcement_learning_from_human_feedback #rlhf #safe_reinforcement_learning #safe_reinforcement_learning_from_human_feedback #safe_rlhf #safety #transformers #vicuna
Stars: 279 Issues: 0 Forks: 14
https://github.com/PKU-Alignment/safe-rlhf