Want a head start in artificial intelligence and digital pathology? Excited to share that, after 2+ years of development, PathML 2.0 is out! It is an open-source #computationalpathology software library created by Dana-Farber Cancer Institute/Harvard Medical School and Weill Cornell Medicine, led by Massimo Loda, to lower the barrier to entry to #digitalpathology and #artificialintelligence and to streamline #imageanalysis and #deeplearning workflows.
✅ Code: https://github.com/Dana-Farber-AIOS/pathml
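To give a feel for the library, here is a minimal preprocessing sketch in the spirit of the PathML 2.x quickstart (class names follow the project docs, but verify against your installed version; the slide path is a placeholder):

```python
# Minimal PathML 2.x-style preprocessing sketch (per the project docs;
# the slide path is a placeholder, not a real file).
from pathml.core import HESlide
from pathml.preprocessing import Pipeline, BoxBlur, TissueDetectionHE

wsi = HESlide("path/to/slide.svs")     # load an H&E whole-slide image
pipeline = Pipeline([
    BoxBlur(kernel_size=15),           # smooth before tissue detection
    TissueDetectionHE(),               # mask tissue vs. background
])
wsi.run(pipeline)                      # process the slide tile by tile
```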
Magic-Me: Identity-Specific Video Generation
#ByteDance (+ UC Berkeley) unveils VCD for video generation: from just a few images of a specific identity, it can generate temporally consistent videos aligned with a given prompt. Impressive results; source code under Apache 2.0.
Highlights:
✅ Novel Video Custom Diffusion (VCD) framework
✅ High-quality, ID-specific video generation
✅ Improved alignment between ID images and text
✅ Robust 3D Gaussian noise prior for denoising (illustrated in the sketch below)
✅ Better inter-frame correlation / video consistency
✅ New F-VCD/T-VCD modules for video upscaling
✅ New training scheme with a masked loss via prompt-to-segmentation
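To make the noise-prior idea concrete, here is a hedged sketch (not the paper's exact formulation) of sampling Gaussian video noise whose frames are correlated, the property that encourages temporal consistency during denoising:

```python
import torch

def correlated_video_noise(T, C, H, W, alpha=0.5):
    """Unit-variance Gaussian noise for T frames whose pairwise
    inter-frame covariance is alpha -- an illustrative stand-in for
    the paper's 3D Gaussian noise prior, not its exact formulation."""
    base = torch.randn(1, C, H, W).expand(T, C, H, W)  # shared across frames
    frame = torch.randn(T, C, H, W)                    # independent per frame
    return (alpha ** 0.5) * base + ((1 - alpha) ** 0.5) * frame

noise = correlated_video_noise(16, 4, 64, 64)          # (16, 4, 64, 64)
```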
#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse
Channel: @deeplearning_ai
Paper: https://arxiv.org/pdf/2402.09368.pdf
Project: https://magic-me-webpage.github.io/
Code: https://github.com/Zhen-Dong/Magic-Me
Introducing ECoDepth: A New Benchmark in Diffusion-Based Monocular Depth
From the labs of IIT Delhi, we unveil ECoDepth: a single-image depth estimation (SIDE) model powered by a diffusion backbone and enriched with ViT embeddings for semantic context. It sets a new standard in SIDE accuracy and semantic understanding; a sketch of the conditioning idea follows the feature list.
Key Features:
✅ Diffusion-based monocular depth estimation (MDE) approach tailored for SIDE tasks
✅ Enhanced semantic context via ViT embeddings
✅ Superior performance on zero-shot transfer tasks
✅ Surpasses previous SOTA models by up to 14%
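As promised above, a hedged sketch of the general conditioning idea: project a global ViT embedding into tokens that a diffusion-based depth decoder can cross-attend to. Module names and dimensions here are illustrative assumptions, not ECoDepth's actual code:

```python
import torch
import torch.nn as nn

class SemanticConditioner(nn.Module):
    """Illustrative only: turn a global ViT embedding into conditioning
    tokens for a diffusion decoder's cross-attention. The dimensions
    are assumptions, not taken from the ECoDepth codebase."""
    def __init__(self, vit_dim=768, cond_dim=1024, n_tokens=8):
        super().__init__()
        self.n_tokens, self.cond_dim = n_tokens, cond_dim
        self.proj = nn.Linear(vit_dim, n_tokens * cond_dim)

    def forward(self, vit_cls):      # (B, vit_dim) global image embedding
        b = vit_cls.shape[0]
        return self.proj(vit_cls).view(b, self.n_tokens, self.cond_dim)

tokens = SemanticConditioner()(torch.randn(2, 768))    # -> (2, 8, 1024)
```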
Dive into the future of depth estimation with ECoDepth. Access our source code and explore the full potential of our model.
Read the Paper
Get the Code
#ArtificialIntelligence #MachineLearning #DeepLearning #ComputerVision #AIwithPapers #Metaverse
Join our community:
@deeplearning_ai
Neural Bodies with Clothes: Overview
Introduction: Neural-ABC is a parametric model for clothed human bodies, developed at the University of Science and Technology of China.
Key Features:
✅ Novel approach to modeling clothed human figures
✅ Unified framework accommodating various clothing types
✅ Consistent representation of both body and clothing
✅ Seamless, independent modification of identity, shape, clothing, and pose (see the interface sketch below)
✅ Extensive dataset with detailed clothing information
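To illustrate what such a disentangled parametric interface could look like, here is a hypothetical sketch with separate latent codes per factor; the real Neural-ABC architecture (a neural implicit model) differs, and all dimensions below are made up:

```python
import torch
import torch.nn as nn

class ClothedBodyModel(nn.Module):
    """Hypothetical sketch of a Neural-ABC-style interface: separate
    latent codes for identity, shape, clothing, and pose are decoded
    into an implicit surface value at query points. Not the real code."""
    def __init__(self, dims=(64, 16, 64, 72), hidden=256):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(sum(dims) + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),          # signed distance at each point
        )

    def forward(self, identity, shape, clothing, pose, points):
        codes = torch.cat([identity, shape, clothing, pose], dim=-1)
        codes = codes.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.decoder(torch.cat([codes, points], dim=-1))

m = ClothedBodyModel()
sdf = m(torch.randn(2, 64), torch.randn(2, 16), torch.randn(2, 64),
        torch.randn(2, 72), torch.randn(2, 1024, 3))   # -> (2, 1024, 1)
```

Editing one factor (say, clothing) while freezing the others is what "seamless modification" amounts to in a model of this kind.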
Explore More:
Project Details: Discover More
Read the Paper: Access Here
Source Code: Explore on GitHub
Relevance: #artificialintelligence #machinelearning #AI #deeplearning #computervision
Join our community:
@deeplearning_ai
Explore SCRFD: High-Efficiency, High-Accuracy Face Detection
Unlock next-level face detection capabilities with SCRFD: efficiency and accuracy in one solution!
Performance at a Glance (a minimal usage sketch follows the list):
✅ Model range: SCRFD_500M to SCRFD_34G
✅ Accuracy up to 96.06%
✅ Inference as fast as 3.6 ms
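SCRFD is the default detector in the insightface Python package, so a minimal detection script looks roughly like this (assumes insightface and onnxruntime are installed; the image path is a placeholder):

```python
# Minimal SCRFD face-detection sketch via the insightface package.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")        # default pack; SCRFD detector
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0: first GPU (-1: CPU)
img = cv2.imread("faces.jpg")               # placeholder input image
for face in app.get(img):
    x1, y1, x2, y2 = face.bbox.astype(int)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("faces_out.jpg", img)
```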
Explore more and consider starring our repo for updates:
- GitHub Repository
- Paper
#AI #MachineLearning #FaceDetection #TechInnovation #DeepLearning
https://t.me/deeplearning_ai
Discover the Power of Fine-Grained Gaze Estimation with L2CS-Net!
Key Features:
✅ Advanced architecture: built on state-of-the-art neural network structures
✅ Versatile utilities: utility functions and classes for seamless integration
✅ Robust data handling: efficient data loading, preprocessing, and augmentation
✅ Comprehensive training & testing: easy-to-follow scripts for training and testing your models
Live Demo:
Visualize the power of L2CS-Net on your own video, for example:
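A sketch following the Pipeline API from the L2CS-Net repo's README (verify names against the repo; the weights and video paths are placeholders):

```python
import cv2
import torch
from l2cs import Pipeline, render   # per the repo's README; verify locally

gaze = Pipeline(weights="models/L2CSNet_gaze360.pkl",   # placeholder path
                arch="ResNet50", device=torch.device("cpu"))
cap = cv2.VideoCapture("my_video.mp4")                  # placeholder video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = gaze.step(frame)               # detect faces + estimate gaze
    cv2.imshow("L2CS-Net", render(frame, results))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```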
Join Us:
Star our repo on GitHub and be part of the innovative community pushing the boundaries of gaze estimation. Your support drives us forward!
GitHub Repository
Let's advance gaze estimation together! #GazeEstimation #DeepLearning #AI #MachineLearning #ComputerVision
Exciting AI Breakthrough! Meet U^2-Net!
Why U^2-Net?
* Efficient
* Advanced architecture
* High-resolution outputs
Key Applications (background-removal sketch after this list):
* Salient Object Detection
* Background Removal
* Medical Imaging
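For the background-removal use case, the quickest hands-on route is the third-party rembg package, which runs U^2-Net under the hood (assumes rembg is installed; file names are placeholders):

```python
# Background removal with rembg, a wrapper around U^2-Net.
from rembg import remove
from PIL import Image

inp = Image.open("photo.jpg")        # placeholder input
out = remove(inp)                    # RGBA image, background made transparent
out.save("photo_cutout.png")
```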
Ready to transform your projects?
Give our repo a ⭐ and show your support!
#AI #DeepLearning #U2Net #ImageSegmentation #OpenSource #GitHub
Source Code: Explore GitHub Repo
Happy Learning!
3DGazeNet: Revolutionizing Gaze Estimation with Weak Supervision!
Key Features:
• Advanced neural network: built on the robust U2-Net architecture
• Comprehensive utilities: easy data loading, preprocessing, and augmentation
• Seamless integration: train, test, and visualize with simple commands
Demo Visualization: configure your video path in main.py to showcase the power of 3DGazeNet.
Pretrained Weights: get a quick start with the pretrained weights stored in the weights folder.
Source Code: https://github.com/Shohruh72/3DGazeNet
Read the Paper: Access Here
#3DGazeNet #GazeEstimation #AI #DeepLearning #TechInnovation
Join us in pushing the boundaries of gaze estimation technology with 3DGazeNet!
Introducing L2CS-Net: Fine-Grained Gaze Estimation
GitHub Repo: Star ⭐ the Repo
Key Features:
✅ Fine-grained gaze estimation with deep learning
✅ Supports the Gaze360 dataset
✅ Single-GPU / multi-GPU training
✅ Demo for real-time visualization
Quick Start:
- Prepare the dataset
- Train (python main.py --train)
- Video inference (python main.py --demo)
Support open source! Star ⭐ & share!
GitHub Repo: L2CSNet
#AI #DeepLearning #GazeEstimation #L2CSNet #OpenSource
Refactored HRNet Now Live!
Supercharge your computer vision projects with high-resolution HRNet models, fully refactored for easy training and testing!
✅ Multiple ImageNet-pretrained models
✅ Lightning-fast setup
✅ Top-tier accuracy
Check it out and ⭐ star the repo if you find it useful!
GitHub: Shohruh72/HRNet
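Not this repo's own entry point, but for a quick taste of HRNet classification you can pull a comparable ImageNet-pretrained model via the timm library (assumes timm is installed):

```python
# Comparable HRNet image classifier via timm (not this repo's API).
import timm
import torch

model = timm.create_model("hrnet_w18", pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)      # dummy ImageNet-sized input
with torch.no_grad():
    logits = model(x)                # (1, 1000) ImageNet class scores
print(logits.argmax(dim=1).item())
```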
#AI #DeepLearning #OpenSource
@deeplearning_ai
3DGazeNet: Next-Gen Gaze Estimation! Get instant results with just one click.
Discover how to train powerful gaze estimation models using only synthetic data and weak supervision; no huge real-world datasets needed.
Perfect for AR/VR, HCI, and beyond.
Cutting-edge, open-source, and ready for your next project!
Try it now: https://github.com/Shohruh72/3DGazeNet
#DeepLearning #GazeEstimation #AI
@deeplearning_ai