Awesome news for beginners in #MachineLearning and #DeepLearning
We've all come to love Dr. Strang's linear algebra lectures from MIT. But his books can be expensive for students and are not always available.
Now Stanford University has changed all that by releasing a free book, "Introduction to Applied Linear Algebra," written by Stephen Boyd and Lieven Vandenberghe.
Go get them all on my #GitHub page; I will add some beginner lectures and #Python & #Julia notebooks there soon.
Root / main folder: https://lnkd.in/de8uepd
1. The 473-page book itself: https://bit.ly/2tjFNdA
2. A lovely 170-page Julia language companion: https://bit.ly/2BxYGy0
3. Exercises book: https://bit.ly/2RZoVTf
4. Course lecture slides: https://bit.ly/2N9TZPC
#beginner #datascience #learning #machinelearning
@kdnuggets @datasciencechats
Source: Linkedin - Tarry Singh
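As a taste of the book's material, its running theme of least squares can be tried in a few lines of NumPy (the data below is made up for illustration):

```python
import numpy as np

# Least-squares line fit, a core topic of "Introduction to Applied
# Linear Algebra" (VMLS). Fit y = theta_0 + theta_1 * x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])         # lies exactly on y = 1 + 2x

A = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(theta)                               # -> [1. 2.]
```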
Want to jump ahead in artificial intelligence and/or digital pathology? Excited to share that after 2+ years of development, PathML 2.0 is out! An open-source #computational #pathology software library created by Dana-Farber Cancer Institute/Harvard Medical School and Weill Cornell Medicine, led by Massimo Loda, to lower the barrier to entry to #digitalpathology and #artificialintelligence and to streamline #imageanalysis and #deeplearning workflows.
⭐ Code: https://github.com/Dana-Farber-AIOS/pathml
Magic-Me: Identity-Specific Video Generation
#ByteDance (+ UC Berkeley) unveils VCD for video generation: with just a few images of a specific identity, it can generate temporally consistent videos aligned with a given prompt. Impressive results, with source code under Apache 2.0.
Highlights:
✅ Novel Video Custom Diffusion (VCD) framework
✅ High-quality ID-specific video generation
✅ Improved alignment between ID images and text
✅ Robust 3D Gaussian noise prior for denoising
✅ Better inter-frame correlation / video consistency
✅ New F-VCD/T-VCD modules for video upscaling
✅ New training scheme with a masked loss via prompt-to-segmentation
#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse
Channel: @deeplearning_ai
Paper: https://arxiv.org/pdf/2402.09368.pdf
Project: https://magic-me-webpage.github.io/
Code: https://github.com/Zhen-Dong/Magic-Me
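One of the listed highlights, the 3D Gaussian noise prior, is about correlating the initial diffusion noise across frames rather than sampling it independently per frame. A toy NumPy sketch of the idea (not the paper's implementation; the mixing weight `alpha` is made up):

```python
import numpy as np

# Toy correlated noise prior for video diffusion: each frame's starting
# noise mixes a shared base with per-frame noise, so adjacent frames are
# correlated instead of independent, while unit variance is preserved.
rng = np.random.default_rng(0)
T, H, W = 8, 16, 16          # frames, height, width
alpha = 0.5                  # correlation strength (illustrative value)

base = rng.standard_normal((1, H, W))        # shared across all frames
per_frame = rng.standard_normal((T, H, W))   # independent per frame
noise = np.sqrt(alpha) * base + np.sqrt(1 - alpha) * per_frame

# Adjacent frames now share a common component.
corr = np.corrcoef(noise[0].ravel(), noise[1].ravel())[0, 1]
print(round(corr, 2))        # positive, roughly alpha in expectation
```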
Introducing ECoDepth: The New Benchmark in Diffusive Mono-Depth
From the labs of IITD, we unveil ECoDepth, our groundbreaking SIDE model powered by a diffusion backbone and enriched with ViT embeddings. This innovation sets a new standard in single-image depth estimation (SIDE), offering unprecedented accuracy and semantic understanding.
Key Features:
✅ Revolutionary MDE approach tailored for SIDE tasks
✅ Enhanced semantic context via ViT embeddings
✅ Superior performance in zero-shot transfer tasks
✅ Surpasses previous SOTA models by up to 14%
Dive into the future of depth estimation with ECoDepth. Access our source code and explore the full potential of our model.
Read the Paper
💻 Get the Code
#ArtificialIntelligence #MachineLearning #DeepLearning #ComputerVision #AIwithPapers #Metaverse
Join our community:
@deeplearning_ai
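Gains like the "up to 14%" figure are usually reported with standard depth metrics such as absolute relative error (AbsRel) and threshold accuracy δ < 1.25. A minimal sketch of how those metrics are computed, on made-up predictions:

```python
import numpy as np

# Standard monocular-depth evaluation metrics, on toy data.
def abs_rel(pred, gt):
    """Mean absolute relative error."""
    return float(np.mean(np.abs(pred - gt) / gt))

def delta_acc(pred, gt, thresh=1.25):
    """Fraction of pixels whose pred/gt ratio is within `thresh`."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < thresh))

gt = np.array([1.0, 2.0, 4.0, 8.0])     # ground-truth depths (meters)
pred = np.array([1.1, 1.8, 4.4, 8.0])   # hypothetical predictions

print(abs_rel(pred, gt))                # ~0.075
print(delta_acc(pred, gt))              # 1.0 (all within 1.25x)
```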
Neural Bodies with Clothes: Overview
Introduction: Neural-ABC is a cutting-edge parametric model, developed by the University of Science and Technology of China, that innovatively represents clothed human bodies.
Key Features:
✅ Novel approach for modeling clothed human figures
✅ Unified framework accommodating various clothing types
✅ Consistent representation of both body and clothing
✅ Enables seamless modification of identity, shape, clothing, and pose
✅ Extensive dataset with detailed clothing information
Explore More:
💻 Project Details: Discover More
Read the Paper: Access Here
💻 Source Code: Explore on GitHub
Relevance: #artificialintelligence #machinelearning #AI #deeplearning #computervision
Join our community:
@deeplearning_ai
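The "seamless modification" bullet describes a disentangled parametric model: each attribute lives in its own latent code, so one can be edited without touching the others. A purely schematic sketch of that interface (not the actual Neural-ABC API; all names and values are made up):

```python
from dataclasses import dataclass, replace

# Schematic of a disentangled parametric body model: each attribute is
# its own latent code, so editing one leaves the others untouched.
@dataclass(frozen=True)
class BodyParams:
    identity: tuple
    shape: tuple
    clothing: tuple
    pose: tuple

avatar = BodyParams(identity=(0.1,), shape=(0.5,), clothing=(1.0,), pose=(0.0,))
redressed = replace(avatar, clothing=(2.0,))   # change clothing only

print(redressed.clothing)   # (2.0,)
print(redressed.identity)   # unchanged: (0.1,)
```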
Explore SCRFD: High-Efficiency, High-Accuracy Face Detection
Unlock next-level face detection capabilities with SCRFD: efficiency and accuracy in one solution!
Performance at a Glance:
✅ Model range: SCRFD_500M to SCRFD_34G
✅ Accuracy up to 96.06%
✅ Inference as fast as 3.6 ms
Explore more and consider starring our repo for updates:
- GitHub Repository
- Paper
#AI #MachineLearning #FaceDetection #TechInnovation #DeepLearning
https://t.me/deeplearning_ai
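Like most face detectors, models in the SCRFD family emit many overlapping candidate boxes that are pruned with non-maximum suppression (NMS). A generic NMS sketch (not SCRFD's exact implementation; boxes, scores, and the threshold are made up):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box against many; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.4):
    """Keep highest-scoring boxes, dropping overlaps above iou_thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # -> [0, 2]: the two overlapping boxes collapse
```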
Discover the Power of Fine-Grained Gaze Estimation with L2CS-Net!
Key Features:
✅ Advanced Architecture: built on state-of-the-art neural network structures
✅ Versatile Utilities: utility functions and classes for seamless integration
✅ Robust Data Handling: efficient data loading, preprocessing, and augmentation
✅ Comprehensive Training & Testing: easy-to-follow scripts for training and testing your models
Live Demo:
Visualize the power of L2CS-Net with your own video.
Join Us:
Star our repo on GitHub and be part of the innovative community pushing the boundaries of gaze estimation. Your support drives us forward!
GitHub Repository
Let's advance gaze estimation together! #GazeEstimation #DeepLearning #AI #MachineLearning #ComputerVision
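The "fine-grained" estimation in L2CS-Net combines classification and regression: the gaze angle is classified into bins, and a continuous angle is recovered as the softmax expectation over bin centers. A schematic NumPy sketch of that readout (bin layout and the fake logits are illustrative, not the repo's settings):

```python
import numpy as np

# Binned-gaze readout: classify the angle into bins, then take the
# softmax expectation over bin centers to get a continuous angle.
bins = np.arange(-90, 90, 4) + 2.0      # bin centers in degrees
logits = -0.05 * (bins - 10.0) ** 2     # fake network output peaked near 10 deg

probs = np.exp(logits - logits.max())
probs /= probs.sum()
angle = float(np.sum(probs * bins))     # expected gaze angle

print(round(angle, 1))                  # -> 10.0
```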
Exciting AI Breakthrough! Meet U^2-Net!
Why U^2-Net?
* Efficient
* Advanced Architecture
* High-Resolution Outputs
Key Applications:
* Salient Object Detection
* Background Removal
* Medical Imaging
💡 Ready to Transform Your Projects?
✨ Give our repo a ⭐ and show your support!
#AI #DeepLearning #U2Net #ImageSegmentation #OpenSource #GitHub
💻 Source Code: Explore GitHub Repo
Happy Learning!
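For the background-removal application, a saliency map like U^2-Net's output is commonly used directly as an alpha matte. A minimal compositing sketch with toy arrays (not real model output):

```python
import numpy as np

# Background removal with a saliency mask (as produced by models like
# U^2-Net): use the mask as an alpha matte over the image.
h, w = 4, 4
image = np.full((h, w, 3), 200, dtype=np.float32)   # toy image
mask = np.zeros((h, w), dtype=np.float32)
mask[1:3, 1:3] = 1.0                                # "salient object"

background = np.zeros_like(image)                   # plain black
alpha = mask[..., None]                             # broadcast over channels
composite = alpha * image + (1 - alpha) * background

print(composite[2, 2])   # object pixel kept: [200. 200. 200.]
print(composite[0, 0])   # background pixel removed: [0. 0. 0.]
```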
3DGazeNet: Revolutionizing Gaze Estimation with Weak Supervision!
Key Features:
🔹 Advanced Neural Network: built on the robust U2-Net architecture
🔹 Comprehensive Utilities: easy data loading, preprocessing, and augmentation
🔹 Seamless Integration: train, test, and visualize with simple commands
Demo Visualization: visualize the demo by configuring your video path in main.py and showcasing the power of 3DGazeNet.
Pretrained Weights: quick-start with our pretrained weights stored in the weights folder.
💻 Source Code: https://github.com/Shohruh72/3DGazeNet
Read the Paper: Access Here
#3DGazeNet #GazeEstimation #AI #DeepLearning #TechInnovation
Join us in pushing the boundaries of gaze estimation technology with 3DGazeNet!
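Gaze estimators like 3DGazeNet typically predict yaw/pitch angles; turning them into a 3D unit gaze vector is a common postprocessing step. One common convention is sketched below (the repo may use a different axis convention):

```python
import numpy as np

def gaze_to_vector(yaw, pitch):
    """Convert yaw/pitch (radians) to a unit 3D gaze direction.
    Convention (illustrative): (0, 0) looks straight along -z."""
    x = -np.cos(pitch) * np.sin(yaw)
    y = -np.sin(pitch)
    z = -np.cos(pitch) * np.cos(yaw)
    return np.array([x, y, z])

v = gaze_to_vector(0.0, 0.0)
print(v)    # unit vector along -z
```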
Introducing L2CS-Net: Fine-Grained Gaze Estimation
GitHub Repo: Star ⭐ the Repo
Key Features:
✅ Fine-grained gaze estimation with deep learning
✅ Supports the Gaze360 dataset
✅ Train with a single GPU or multiple GPUs
✅ Demo for real-time visualization
Quick Start:
Prepare the dataset
Train: python main.py --train
Video inference: python main.py --demo
Support open source! Star ⭐ & share!
GitHub Repo: L2CSNet
#AI #DeepLearning #GazeEstimation #L2CSNet #OpenSource