Forwarded from Machine Learning
Deep Learning Basics (lecture notes).pdf
1.1 MB
Forwarded from Machine Learning
notes.pdf
213 KB
Forwarded from Machine Learning
percy-notes.pdf
1.2 MB
Version 9 of the YOLO model has been released in four sizes: 7.2M, 20.1M, 25.5M, and 58.1M parameters!
Paper:
https://arxiv.org/abs/2402.13616
Code:
https://github.com/WongKinYiu/yolov9
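Where parameter counts like 7.2M vs. 58.1M come from can be illustrated with simple arithmetic over layer shapes. A minimal sketch (the layer sizes below are illustrative, not taken from YOLOv9):

```python
# Back-of-the-envelope parameter count for a convolutional layer:
# one k x k x in_ch kernel per output channel, plus an optional bias.
def conv2d_params(in_ch, out_ch, k, bias=True):
    return out_ch * (in_ch * k * k + (1 if bias else 0))

# Example: a 3x3 conv mapping 64 -> 128 channels
print(conv2d_params(64, 128, 3))  # 73856 parameters
```

Summing such counts over every layer is how model "sizes" in the millions of parameters are reported.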
🩻 Pose via Ray Diffusion 🩻
👉Novel distributed representation of camera pose that treats a camera as a bundle of rays. Naturally suited for set-level transformers, it's the new SOTA on camera pose estimation. Source code released 💙
👉Review https://t.ly/qBsFK
👉Paper arxiv.org/pdf/2402.14817.pdf
👉Project jasonyzhang.com/RayDiffusion
👉Code github.com/jasonyzhang/RayDiffusion
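The "camera as a bundle of rays" idea can be made concrete with a pinhole model: each pixel (u, v) corresponds to the viewing ray K⁻¹[u, v, 1]ᵀ, so a camera is equivalently described by its set of pixel rays. A minimal NumPy sketch of that back-projection (a simplified illustration, not the RayDiffusion code):

```python
import numpy as np

def pixel_rays(K, uv):
    """Unit ray directions in camera coordinates for an (N, 2) pixel array."""
    uv1 = np.hstack([uv, np.ones((len(uv), 1))])   # homogeneous pixel coords
    dirs = uv1 @ np.linalg.inv(K).T                # back-project through K^-1
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

# Illustrative intrinsics: focal length 500, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
rays = pixel_rays(K, np.array([[320.0, 240.0], [0.0, 0.0]]))
print(rays[0])  # the principal-point ray is the optical axis [0, 0, 1]
```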
Forwarded from OpenCV | Python
#AI #paper #source_code
💎 The YOLOv9 model has been released.
- Faster, more accurate, and more efficient than comparable models
📎 Download the paper (PDF)
💻 Download the source code (GitHub)
YOLOv9 is out 🔥
📄 We combined the proposed PGI (Programmable Gradient Information) and GELAN (Generalized Efficient Layer Aggregation Network), then designed a new generation of YOLO object detection system, which we call YOLOv9.
🔻share with your friends🔻
🔹@OpenCV_olc🔹
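YOLO-style detectors emit many overlapping candidate boxes that are pruned with non-maximum suppression. A textbook NMS sketch in NumPy (generic post-processing, not YOLOv9's own implementation):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        mask = np.array([iou(boxes[i], boxes[j]) < thresh
                         for j in order[1:]], dtype=bool)
        order = order[1:][mask]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # [0, 2]: box 1 overlaps box 0
```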
Forwarded from مدرسه پایتون و ریاضی
The turtle library in Python
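A tiny demo of Python's built-in turtle module: drawing a regular polygon, where the turn at each vertex is the polygon's exterior angle, 360/n degrees. The drawing part is guarded because turtle needs a GUI (Tk) display:

```python
import os

def exterior_angle(sides):
    """Turn angle at each vertex of a regular polygon, in degrees."""
    return 360.0 / sides

def draw_polygon(t, sides, length):
    for _ in range(sides):
        t.forward(length)
        t.left(exterior_angle(sides))

if os.environ.get("DISPLAY"):       # turtle requires a windowing system
    import turtle
    pen = turtle.Turtle()
    draw_polygon(pen, 5, 80)        # a regular pentagon
    turtle.done()
```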
Super VIP cheat sheet for Data Scientists.pdf
7.1 MB
Deep learning cheat sheet
Forwarded from هوش مصنوعی |یادگیری ماشین| علم داده
YOLOv9 is here too. You can now detect objects locally in your browser in near real time, with no server required.
🔗 Demo: https://hf.co/spaces/Xenova/yolov9-web
https://blog.roboflow.com/train-yolov9-model/
https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov9-object-detection-on-custom-dataset.ipynb
https://github.com/WongKinYiu/yolov9
https://github.com/roboflow/notebooks
🆔 @Ai_Tv
CyberDemo
Augmenting Simulated Human Demonstration for Real-World Dexterous Manipulation
We introduce CyberDemo, a novel approach to robotic imitation learning that leverages simulated human demonstrations for real-world tasks. By incorporating extensive data augmentation in a simulated environment, CyberDemo outperforms traditional in-domain real-world demonstrations when transferred to the real world, handling diverse physical and visual conditions. Despite its affordability and convenience in data collection, CyberDemo outperforms baseline methods in terms of success rates across various tasks and exhibits generalizability to previously unseen objects. For example, it can rotate novel tetra-valves and penta-valves, despite the human demonstrations only involving tri-valves. Our research demonstrates the significant potential of simulated human demonstrations for real-world dexterous manipulation tasks.
paper page: https://huggingface.co/papers/2402.14795
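The abstract credits extensive augmentation of simulated observations. A generic visual-randomization sketch in that spirit (illustrative only, not CyberDemo's actual pipeline): random brightness, contrast, and crop applied to a simulated camera image.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Randomize a float image in [0, 1] of shape (H, W, 3)."""
    img = image * rng.uniform(0.7, 1.3)                            # brightness
    img = (img - img.mean()) * rng.uniform(0.8, 1.2) + img.mean()  # contrast
    h, w, _ = img.shape
    dy, dx = rng.integers(0, h // 8), rng.integers(0, w // 8)
    img = img[dy:dy + 7 * h // 8, dx:dx + 7 * w // 8]              # random crop
    return np.clip(img, 0.0, 1.0)

obs = rng.uniform(size=(64, 64, 3))    # stand-in for a simulated camera frame
out = augment(obs)
print(out.shape)  # cropped to 7/8 of each spatial dimension: (56, 56, 3)
```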
⭐ MATH-Vision Dataset 🕹
😏 MATH-V is a curated dataset of 3,040 high-quality math problems with visual contexts, sourced from real math competitions. Dataset released 📱
😏 Review: https://t.ly/gmIAu
🤨 Paper: arxiv.org/pdf/2402.14804.pdf
🥺 Project: mathvision-cuhk.github.io/
👉 Code: github.com/mathvision-cuhk/MathVision
Forwarded from Machine Learning
Machine_Learning_Lecture (2).pdf
10.7 MB
Mathematical Foundations of Machine Learning (Lectures on YouTube)
Seongjai Kim
Department of Mathematics and Statistics
Mississippi State University
Updated: February 14, 2024
@machine_learning_and_DL
Forwarded from School of AI
Google's foundation model for video understanding has been released!
VideoPrism is a ViFM (video foundation model) that, unlike earlier models such as VideoCLIP, can be used for a wide range of tasks, including classification, localization, retrieval, captioning, and question answering.
https://blog.research.google/2024/02/videoprism-foundational-visual-encoder.html
Result.gif
23.1 MB
🌟 Discover 6DRepNet: The Ultimate Head Pose Estimation Model!
Features:
* State-of-the-art accuracy
* Comprehensive tools for training, testing, and inference
* Easy setup with conda
* Supports multiple datasets
Watch the performance showcase on GitHub for future advancements.
[Source Code] [Paper]
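The "6D" in 6DRepNet refers to the continuous 6D rotation representation: the network predicts the first two columns of a rotation matrix, and the full matrix is recovered by Gram-Schmidt orthonormalization. A NumPy sketch of that recovery step (the post-processing idea, not the model's own code):

```python
import numpy as np

def rotation_from_6d(x):
    """Map 6 numbers (two 3-vectors) to a 3x3 rotation matrix."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)           # first column, normalized
    a2 = a2 - np.dot(b1, a2) * b1          # remove the b1 component
    b2 = a2 / np.linalg.norm(a2)           # second column, orthonormal to b1
    b3 = np.cross(b1, b2)                  # third column completes the frame
    return np.stack([b1, b2, b3], axis=1)

R = rotation_from_6d(np.array([1.0, 0.0, 0.0, 0.5, 1.0, 0.0]))
print(np.round(R @ R.T, 6))  # orthonormal: R R^T = I, det(R) = +1
```

This representation is popular for pose regression because it is continuous, unlike Euler angles or quaternions.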