Awesome news for beginners in #MachineLearning and #DeepLearning
We've all come to love Dr. Strang's linear algebra lectures from MIT, but his books can be expensive for students and are not always available.
Stanford University has changed that by releasing a free book, "Introduction to Applied Linear Algebra," written by Stephen Boyd and Lieven Vandenberghe.
Get them all on my #GitHub page; I will add some beginner lectures and #Python & #Julia notebooks there soon.
Root / main folder: https://lnkd.in/de8uepd
1. The 473-page book itself: https://bit.ly/2tjFNdA
2. The lovely 170-page Julia language companion: https://bit.ly/2BxYGy0
3. Exercises book: https://bit.ly/2RZoVTf
4. Course lecture slides: https://bit.ly/2N9TZPC
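To give a taste of what the book (and the planned #Python notebooks) covers, here is a minimal NumPy sketch of least squares, the workhorse problem of the book's second half. The matrix and vector are made-up illustration data, not from the book:

```python
import numpy as np

# Least squares: find x minimizing ||Ax - b||^2.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

# lstsq returns the least-squares solution of the overdetermined system.
x, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(x)  # the x that best fits Ax ≈ b
```

Here `b` happens to lie in the column space of `A`, so the residual is exactly zero.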
#beginner #datascience #learning #machinelearning
@kdnuggets @datasciencechats
Source: Linkedin - Tarry Singh
Want to jump ahead in artificial intelligence and/or digital pathology? Excited to share that after 2+ years of development, PathML 2.0 is out! An open-source #computational #pathology software library created by Dana-Farber Cancer Institute/Harvard Medical School and Weill Cornell Medicine, led by Massimo Loda, to lower the barrier to entry to #digitalpathology and #artificialintelligence and to streamline #imageanalysis and #deeplearning workflows.
⭐ Code: https://github.com/Dana-Farber-AIOS/pathml
🆔🆔 Magic-Me: Identity-Specific Video 🆔🆔
👉#ByteDance (+ UC Berkeley) unveils VCD for video generation: with just a few images of a specific identity, it can generate temporally consistent videos aligned with the given prompt. Impressive results, source code under Apache 2.0 💙
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Novel Video Custom Diffusion (VCD) framework
✅High-quality ID-specific video generation
✅Improved alignment between ID images and text
✅Robust 3D Gaussian noise prior for denoising
✅Better inter-frame correlation / video consistency
✅New F-VCD/T-VCD modules for video upscaling
✅New training with a masked loss via prompt-to-segmentation
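The correlated Gaussian noise prior above can be illustrated with a toy sketch. This is an assumption-laden illustration, not the authors' implementation: it mixes one noise sample shared by all frames with independent per-frame noise, so frames are correlated while each stays unit-variance (the function name and `rho` mixing parameter are hypothetical):

```python
import numpy as np

# Toy sketch of an inter-frame-correlated noise prior for video diffusion.
def correlated_noise(num_frames, frame_shape, rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(frame_shape)             # common to all frames
    per_frame = rng.standard_normal((num_frames, *frame_shape))
    # Mixing keeps unit variance: rho + (1 - rho) = 1.
    return np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * per_frame

noise = correlated_noise(8, (4, 4), rho=0.5)
print(noise.shape)  # (8, 4, 4): 8 correlated noise frames
```

With `rho=0` the frames are independent; with `rho=1` every frame gets identical noise.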
#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse
👉Channel: @deeplearning_ai
👉Paper https://arxiv.org/pdf/2402.09368.pdf
👉Project https://magic-me-webpage.github.io/
👉Code https://github.com/Zhen-Dong/Magic-Me
Introducing ECoDepth: The New Benchmark in Diffusive Mono-Depth
From the labs of IITD, we unveil ECoDepth, our groundbreaking single image depth estimation (SIDE) model powered by a diffusion backbone and enriched with ViT embeddings. This innovation sets a new standard in SIDE, offering unprecedented accuracy and semantic understanding.
Key Features:
✅Revolutionary MDE approach tailored for SIDE tasks
✅Enhanced semantic context via ViT embeddings
✅Superior performance in zero-shot transfer tasks
✅Surpasses previous SOTA models by up to 14%
Dive into the future of depth estimation with ECoDepth. Access our source code and explore the full potential of our model.
📖 Read the Paper
💻 Get the Code
#ArtificialIntelligence #MachineLearning #DeepLearning #ComputerVision #AIwithPapers #Metaverse
join our community:
👉 @deeplearning_ai
Neural Bodies with Clothes: Overview
Introduction: Neural-ABC, a cutting-edge parametric model developed by the University of Science & Technology of China, innovatively represents clothed human bodies.
Key Features:
✅Novel approach for modeling clothed human figures.
✅Unified framework accommodating various clothing types.
✅Consistent representation of both body and clothing.
✅Enables seamless modification of identity, shape, clothing, and pose.
✅Extensive dataset with detailed clothing information.
Explore More:
💻Project Details: Discover More
📖Read the Paper: Access Here
💻Source Code: Explore on GitHub
Relevance: #artificialintelligence #machinelearning #AI #deeplearning #computervision
join our community:
👉 @deeplearning_ai
🚀 Explore SCRFD: High-Efficiency, High-Accuracy Face Detection 🚀
Unlock next-level face detection capabilities with SCRFD – efficiency and accuracy in one solution!
📈 Performance at a Glance:
✅Model range: SCRFD_500M to SCRFD_34G
✅Accuracy up to 96.06%
✅Inference as fast as 3.6 ms
🔍 Explore more and consider starring our repo for updates:
--- GitHub Repository.
--- Paper
#AI #MachineLearning #FaceDetection #TechInnovation #DeepLearning
✅ https://t.me/deeplearning_ai
🔍 Discover the Power of Fine-Grained Gaze Estimation with L2CS-Net! 🌟
🚀 Key Features:
✅Advanced Architecture: Built using state-of-the-art neural network structures.
✅Versatile Utilities: Packed with utility functions and classes for seamless integration.
✅Robust Data Handling: Efficient data loading, preprocessing, and augmentation.
✅Comprehensive Training & Testing: Easy-to-follow scripts for training and testing your models.
👀 Live Demo:
Visualize the power of L2CS-Net with your own video:
🌟 Join Us:
Star our repo on GitHub and be part of the innovative community pushing the boundaries of gaze estimation. Your support drives us forward!
🔗 GitHub Repository
Let's advance gaze estimation together! 🚀🌐 #GazeEstimation #DeepLearning #AI #MachineLearning #ComputerVision
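As a rough illustration of what a fine-grained gaze estimator produces, here is a common conversion from predicted yaw/pitch angles to a 3D gaze direction vector. This is a generic sketch under an assumed camera-facing convention, not necessarily L2CS-Net's exact one:

```python
import numpy as np

# Convert gaze angles (radians) to a unit 3D direction vector,
# assuming the camera looks along +z and the subject looks back along -z.
def angles_to_vector(yaw, pitch):
    x = -np.cos(pitch) * np.sin(yaw)
    y = -np.sin(pitch)
    z = -np.cos(pitch) * np.cos(yaw)
    return np.array([x, y, z])

v = angles_to_vector(0.0, 0.0)
print(v)  # gaze straight ahead, along -z
```

The result is always unit-length, so rendering a gaze arrow only needs a scale factor.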
🌟 Exciting AI Breakthrough! Meet U^2-Net! 🌟
🌟 Why U^2-Net?
* Efficient
* Advanced Architecture
* High-Resolution Outputs
🚀 Key Applications:
* Salient Object Detection
* Background Removal
* Medical Imaging
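For the background-removal application above, here is a minimal sketch of how a saliency mask like the one U^2-Net predicts can be applied to an image. This is illustrative only; the function name and threshold are assumptions, not the repo's API:

```python
import numpy as np

# Zero out background pixels using a per-pixel saliency probability map.
def remove_background(image, saliency, threshold=0.5):
    mask = (saliency > threshold).astype(image.dtype)  # 1 = salient, keep
    return image * mask[..., None]                     # broadcast over channels

image = np.ones((2, 2, 3), dtype=np.float32)           # dummy white image
saliency = np.array([[0.9, 0.1],
                     [0.8, 0.2]], dtype=np.float32)    # dummy saliency map
out = remove_background(image, saliency)
print(out[..., 0])  # left column kept, right column zeroed
```

In practice the mask is often kept soft (no threshold) and used as an alpha channel for smoother edges.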
💡 Ready to transform your projects?
✨ Give our repo a ⭐ and show your support!
#AI #DeepLearning #U2Net #ImageSegmentation #OpenSource #GitHub
💻Source Code: Explore GitHub Repo
Happy Learning! 🌟
🚀 3DGazeNet: Revolutionizing Gaze Estimation with Weak-Supervision! 🌟
Key Features:
🔹 Advanced Neural Network: Built on the robust U2-Net architecture.
🔹 Comprehensive Utilities: Easy data loading, preprocessing, and augmentation.
🔹 Seamless Integration: Train, test, and visualize with simple commands.
Demo Visualization: Visualize the demo by configuring your video path in main.py and showcasing the power of 3DGazeNet.
Pretrained Weights: Quick-start with our pretrained weights stored in the weights folder.
💻Source Code: https://github.com/Shohruh72/3DGazeNet
📖Read the Paper: Access Here
#3DGazeNet #GazeEstimation #AI #DeepLearning #TechInnovation
Join us in pushing the boundaries of gaze estimation technology with 3DGazeNet!
🚀 Introducing L2CS-Net: Fine-Grained Gaze Estimation 👀✨
🔗 GitHub Repo: Star ⭐ the Repo
🔥 Key Features:
✅ Fine-grained gaze estimation with deep learning
✅ Supports Gaze360 dataset
✅ Train with Single-GPU / Multi-GPU
✅ Demo for real-time visualization
📌 Quick Start:
🗂️ Prepare dataset
🏋️ Train: python main.py --train
🎥 Video Inference: python main.py --demo
🌟 Support Open Source! Star ⭐ & Share!
🔗 GitHub Repo: L2CSNet
#AI #DeepLearning #GazeEstimation #L2CSNet #OpenSource 🚀
🚀 Refactored HRNet Now Live! 🚀
🔥 Supercharge your computer vision projects with high-resolution HRNet models – fully refactored for easy training/testing!
✅ Multiple ImageNet-pretrained models
✅ Lightning-fast setup
✅ Top-tier accuracy
👉 Check it out & ⭐️ Star the repo if you find it useful!
GitHub: Shohruh72/HRNet
#AI #DeepLearning #OpenSource
@deeplearning_ai
🚀 3DGazeNet: Next-Gen Gaze Estimation! Get instant results with just one click.
Discover how to train powerful gaze estimation models using only synthetic data and weak supervision—no huge real-world datasets needed.
Perfect for AR/VR, HCI, and beyond.
Cutting-edge, open-source, and ready for your next project!
👉 Try it now: https://github.com/Shohruh72/3DGazeNet
#DeepLearning #GazeEstimation #AI
@deeplearning_ai