OpenGVLab/InternChat
InternChat (since renamed InternGPT) lets you interact with ChatGPT by clicking, dragging, and drawing with a pointing device.
Language: Python
#chatgpt #click #foundation_model #gpt #gpt_4 #gradio #husky #image_captioning #internimage #langchain #llama #llm #multimodal #ocr #sam #segment_anything #vicuna #video #video_generation #vqa
Stars: 231 Issues: 1 Forks: 10
https://github.com/OpenGVLab/InternChat
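
For a sense of how this kind of pointing interaction can be wired up, here is a minimal Gradio sketch, not InternChat's actual code: a click on the image yields pixel coordinates, which a hypothetical point_prompted_model stand-in (in the real project this would involve SAM plus a multimodal LLM) turns into a response.

# Minimal sketch of click-driven interaction in Gradio (illustrative, not InternChat's code).
import gradio as gr
import numpy as np

def point_prompted_model(image: np.ndarray, x: int, y: int) -> str:
    # Placeholder: a real pipeline would segment the clicked object (e.g. with SAM)
    # and describe it with a multimodal LLM. Here we only report the click.
    h, w = image.shape[:2]
    return f"Clicked ({x}, {y}) on a {w}x{h} image."

def on_click(image, evt: gr.SelectData):
    if image is None:
        return "Upload an image first."
    x, y = evt.index  # pixel coordinates of the click
    return point_prompted_model(image, x, y)

with gr.Blocks() as demo:
    img = gr.Image(type="numpy", label="Click an object")
    out = gr.Textbox(label="Model response")
    img.select(on_click, inputs=img, outputs=out)

demo.launch()
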
PKU-Alignment/safe-rlhf
Safe-RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Language: Python
#ai_safety #alpaca #datasets #deepspeed #large_language_models #llama #llm #llms #reinforcement_learning #reinforcement_learning_from_human_feedback #rlhf #safe_reinforcement_learning #safe_reinforcement_learning_from_human_feedback #safe_rlhf #safety #transformers #vicuna
Stars: 279 Issues: 0 Forks: 14
https://github.com/PKU-Alignment/safe-rlhf
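
The core idea is a constrained objective: a reward model scores helpfulness, a separate cost model scores harmfulness, and a Lagrange multiplier keeps expected cost under a budget. Below is a minimal PyTorch sketch of such a Lagrangian update, not the repository's API; the names, the cost budget, and the (1 + lambda) normalization are illustrative assumptions.

# Sketch of a Safe RLHF-style Lagrangian trade-off (illustrative, not the repo's API).
import torch

log_lambda = torch.zeros(1, requires_grad=True)   # lambda = exp(log_lambda) >= 0
lambda_opt = torch.optim.Adam([log_lambda], lr=1e-2)
cost_budget = 0.0                                  # constraint: E[cost] <= budget

def lagrangian_advantage(reward: torch.Tensor, cost: torch.Tensor) -> torch.Tensor:
    """Combine reward and cost into a single PPO-style training signal."""
    lam = log_lambda.exp().detach()
    return (reward - lam * cost) / (1.0 + lam)     # assumed normalization to keep scale stable

def update_lambda(cost: torch.Tensor) -> None:
    """Gradient ascent on lambda: it grows whenever the cost constraint is violated."""
    lambda_opt.zero_grad()
    # Maximize lambda * (E[cost] - budget), i.e. minimize the negative.
    loss = -(log_lambda.exp() * (cost.mean().detach() - cost_budget))
    loss.backward()
    lambda_opt.step()
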