xingyizhou/CenterNet2
Two-stage CenterNet
Language: Python
#coco #object_detection
Stars: 285 Issues: 4 Forks: 27
https://github.com/xingyizhou/CenterNet2
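CenterNet2 is packaged as a Detectron2 project, so inference can go through Detectron2's standard `DefaultPredictor` once the project's extra config keys are registered. A minimal sketch, assuming the `add_centernet_config` helper and the config/weight paths below (taken from memory of the project layout, not verified):

```python
# Hedged sketch: CenterNet2 inference via Detectron2's DefaultPredictor.
# The import path, config name, and weights path below are assumptions about the repo layout.
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

from centernet.config import add_centernet_config  # assumed to live under projects/CenterNet2

cfg = get_cfg()
add_centernet_config(cfg)  # register CenterNet2-specific config options
cfg.merge_from_file("projects/CenterNet2/configs/CenterNet2_R50_1x.yaml")  # assumed config name
cfg.MODEL.WEIGHTS = "models/CenterNet2_R50_1x.pth"  # downloaded weights (placeholder path)

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # standard Detectron2 Instances output
print(outputs["instances"].pred_boxes, outputs["instances"].scores)
```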
hustvl/YOLOS
You Only Look at One Sequence (NeurIPS 2021, https://arxiv.org/abs/2106.00666)
Language: Python
#computer_vision #transformer #object_detection #vision_transformer
Stars: 128 Issues: 0 Forks: 4
https://github.com/hustvl/YOLOS
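Pretrained YOLOS checkpoints are also published on the Hugging Face Hub (e.g. `hustvl/yolos-tiny`), so a quick way to try the detector is through the `transformers` object-detection classes. A minimal sketch, assuming that checkpoint name and a local `image.jpg`:

```python
# Hedged sketch: run a pretrained YOLOS checkpoint through Hugging Face Transformers.
# The checkpoint name and image path are assumptions for illustration.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")

image = Image.open("image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into detections above a score threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```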
fcjian/TOOD
TOOD: Task-aligned One-stage Object Detection, ICCV2021 Oral
Language: Python
#anchor_based #anchor_free #computer_vision #dense_object_detection #iccv2021 #iccv21 #object_detection #one_stage_detector #sample_assignment #t_head #tal #task_aligned_loss #task_alignment #task_alignment_metric #tood
Stars: 84 Issues: 1 Forks: 6
https://github.com/fcjian/TOOD
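TOOD is built on MMDetection, so inference follows the usual mmdet 2.x high-level API. A minimal sketch, assuming a downloaded checkpoint; the config and checkpoint paths are placeholders, not necessarily the repo's exact file names:

```python
# Hedged sketch: TOOD inference through the MMDetection 2.x API.
# Config and checkpoint paths are assumptions; adjust to the files shipped in the repo.
from mmdet.apis import init_detector, inference_detector

config_file = "configs/tood/tood_r50_fpn_1x_coco.py"      # assumed config name
checkpoint_file = "checkpoints/tood_r50_fpn_1x_coco.pth"  # downloaded weights (placeholder)

model = init_detector(config_file, checkpoint_file, device="cuda:0")
result = inference_detector(model, "demo.jpg")  # per-class arrays of [x1, y1, x2, y2, score]
model.show_result("demo.jpg", result, score_thr=0.3, out_file="demo_out.jpg")
```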
hustvl/YOLOP
You Only Look Once for Panoptic Driving Perception (https://arxiv.org/abs/2108.11250)
Language: Python
#drivable_area_segmentation #jetson_tx2 #lane_detection #multitask_learning #object_detection
Stars: 119 Issues: 1 Forks: 13
https://github.com/hustvl/YOLOP
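The project exposes its pretrained multi-task model via `torch.hub` (per the README, as far as I recall), producing detection, drivable-area, and lane-line outputs in a single forward pass. A minimal sketch, assuming that hub entry point and a 640x640 input:

```python
# Hedged sketch: load YOLOP via torch.hub and run one forward pass.
# The hub entry point name and output ordering are assumptions based on the project's README.
import torch

model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
model.eval()

img = torch.randn(1, 3, 640, 640)  # stand-in for a preprocessed camera frame
with torch.no_grad():
    det_out, da_seg_out, ll_seg_out = model(img)  # detections, drivable area, lane lines

print(type(det_out), da_seg_out.shape, ll_seg_out.shape)
```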
czczup/ViT-Adapter
Vision Transformer Adapter for Dense Predictions (ICLR 2023 Spotlight)
#adapter #object_detection #semantic_segmentation #vision_transformer
Stars: 89 Issues: 1 Forks: 3
https://github.com/czczup/ViT-Adapter
dog-qiuqiu/FastestDet
:zap: A newly designed, ultra-lightweight, anchor-free object detection algorithm: only 250K parameters, roughly 30% less inference time than Yolo-Fastest, and simpler post-processing
Language: Python
#computer_vision #deep_learning #object_detection
Stars: 120 Issues: 4 Forks: 15
https://github.com/dog-qiuqiu/FastestDet
wjf5203/VNext
Next-generation video instance recognition framework built on top of Detectron2, supporting SeqFormer (ECCV Oral) and IDOL (ECCV Oral)
Language: Python
#instance_segmentation #object_detection #tracking #transformer #video_instance_segmentation
Stars: 109 Issues: 0 Forks: 4
https://github.com/wjf5203/VNext
open-mmlab/mmyolo
OpenMMLab YOLO series toolbox and benchmark
Language: Python
#object_detection #pytorch #yolo #yolov5 #yolov6 #yolox
Stars: 285 Issues: 7 Forks: 11
https://github.com/open-mmlab/mmyolo
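MMYOLO sits on the OpenMMLab 2.0 / MMDetection 3.x stack, so a model from its zoo can be run through the standard mmdet inference API once MMYOLO's modules are registered. A minimal sketch; the `register_all_modules` helper and the config/checkpoint paths are assumptions based on early MMYOLO releases:

```python
# Hedged sketch: run an MMYOLO model through the MMDetection 3.x inference API.
# register_all_modules and the config/checkpoint paths below are assumptions, not verified.
from mmdet.apis import init_detector, inference_detector
from mmyolo.utils import register_all_modules

register_all_modules()  # make MMYOLO's detectors/heads visible to the mmdet registry

config_file = "configs/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py"  # assumed config name
checkpoint_file = "checkpoints/yolov5_s_coco.pth"                           # downloaded weights (placeholder)

model = init_detector(config_file, checkpoint_file, device="cuda:0")
result = inference_detector(model, "demo.jpg")  # DetDataSample with .pred_instances
print(result.pred_instances.bboxes.shape, result.pred_instances.scores[:5])
```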
roboflow-ai/notebooks
Set of Jupyter notebooks linked to Roboflow blog posts and used in our YouTube videos.
Language: Jupyter Notebook
#computer_vision #deep_learning #deep_neural_networks #image_classification #image_segmentation #object_detection #pytorch #tutorial #yolov5 #yolov6 #yolov7
Stars: 126 Issues: 1 Forks: 14
https://github.com/roboflow-ai/notebooks
tinyvision/DAMO-YOLO
DAMO-YOLO: a fast and accurate object detection method with several new techniques, including NAS backbones, efficient RepGFPN, ZeroHead, AlignedOTA, and distillation enhancement.
Language: Python
#deep_learning #nas #object_detection #onnx #pytorch #tensorrt #yolo #yolov5
Stars: 163 Issues: 8 Forks: 15
https://github.com/tinyvision/DAMO-YOLO
kadirnar/segment-anything-video
MetaSeg: Packaged version of the Segment Anything repository
Language: Python
#object_detection #object_segmentation #segment_anything #segmentation
Stars: 337 Issues: 4 Forks: 22
https://github.com/kadirnar/segment-anything-video
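MetaSeg is published on PyPI as `metaseg` and wraps SAM behind a small predictor class. A minimal sketch of automatic mask generation on one image; the class name and keyword arguments are recalled from the project's README and should be treated as assumptions rather than a verified API:

```python
# Hedged sketch: automatic mask generation with MetaSeg's SAM wrapper.
# SegAutoMaskPredictor and its keyword arguments are assumptions based on the README, not verified.
from metaseg import SegAutoMaskPredictor

predictor = SegAutoMaskPredictor()
predictor.image_predict(
    source="image.jpg",   # input image path (placeholder)
    model_type="vit_l",   # SAM backbone variant: vit_b / vit_l / vit_h
    points_per_side=16,   # density of the automatic prompt grid
    points_per_batch=64,
    output_path="output.jpg",
    show=False,
    save=True,
)
```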
OpenGVLab/VisionLLM
VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks
#generalist_model #large_language_models #object_detection
Stars: 205 Issues: 1 Forks: 2
https://github.com/OpenGVLab/VisionLLM
roboflow/multimodal-maestro
Effective prompting for Large Multimodal Models like GPT-4 Vision or LLaVA. 🔥
Language: Python
#cross_modal #gpt_4 #gpt_4_vision #instance_segmentation #llava #lmm #multimodality #object_detection #prompt_engineering #segment_anything #vision_language_model #visual_prompting
Stars: 367 Issues: 1 Forks: 23
https://github.com/roboflow/multimodal-maestro
FoundationVision/GLEE
GLEE: General Object Foundation Model for Images and Videos at Scale (CVPR 2024 Highlight)
Language: Python
#foundation_model #object_detection #open_world #tracking
Stars: 153 Issues: 3 Forks: 9
https://github.com/FoundationVision/GLEE
IDEA-Research/Grounding-DINO-1.5-API
API for Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series
Language: Python
#grounding_dino #object_detection #open_set
Stars: 228 Issues: 7 Forks: 7
https://github.com/IDEA-Research/Grounding-DINO-1.5-API