Artificial Intelligence && Deep Learning
A channel for those with a passion for:
* Artificial Intelligence
* Machine Learning
* Deep Learning
* Data Science
* Computer vision
* Image Processing
* Research Papers

For advertising offers, contact: @ai_adminn
Want to jump ahead in artificial intelligence and/or digital pathology? Excited to share that, after 2+ years of development, PathML 2.0 is out! It is an open-source #computational #pathology software library created by Dana-Farber Cancer Institute/Harvard Medical School and Weill Cornell Medicine, led by Massimo Loda, to lower the barrier to entry to #digitalpathology and #artificialintelligence and to streamline #imageanalysis and #deeplearning workflows.

⭐ Code: https://github.com/Dana-Farber-AIOS/pathml
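
For a quick feel for the library, here is a minimal preprocessing sketch adapted from PathML's quick-start examples; the slide path is a placeholder and class names or arguments may have shifted between releases, so treat it as a starting point and check the current docs:

```python
from pathml.core import HESlide
from pathml.preprocessing import Pipeline, BoxBlur, TissueDetectionHE

# Load an H&E whole-slide image (placeholder path) and run a simple
# preprocessing pipeline: blur, then detect tissue regions.
wsi = HESlide("path/to/slide.svs")
pipeline = Pipeline([
    BoxBlur(kernel_size=15),
    TissueDetectionHE(mask_name="tissue", min_region_size=500),
])
wsi.run(pipeline)
```
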
🆔🆔 Magic-Me: Identity-Specific Video Generation 🆔🆔

👉 #ByteDance (+ UC Berkeley) unveils VCD for video generation: with just a few images of a specific identity, it can generate temporally consistent videos aligned with the given prompt. Impressive results, with source code under Apache 2.0 💙

Highlights:
✅ Novel Video Custom Diffusion (VCD) framework
✅ High-quality generation of ID-specific videos
✅ Improved alignment between ID images and text
✅ Robust 3D Gaussian noise prior for denoising
✅ Better inter-frame correlation / video consistency
✅ New F-VCD/T-VCD modules for video upscaling
✅ New training with masked loss via prompt-to-segmentation

#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse

👉 Channel: @deeplearning_ai
👉 Paper: https://arxiv.org/pdf/2402.09368.pdf
👉 Project: https://magic-me-webpage.github.io/
👉 Code: https://github.com/Zhen-Dong/Magic-Me
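
To see why a 3D (frame-correlated) Gaussian noise prior helps temporal consistency, here is a generic PyTorch sketch of noise whose frames share a common component while each frame still looks like unit Gaussian noise. It only illustrates the idea; it is not the VCD implementation, and the function name and rho parameter are made up here:

```python
import torch

def correlated_video_noise(frames, channels, height, width, rho=0.5):
    """Gaussian noise whose frames share a common component.

    Each frame stays marginally N(0, 1) (what diffusion samplers expect),
    while any two frames have correlation rho, nudging the sampler toward
    temporally coherent videos. Illustrative only.
    """
    shared = torch.randn(1, channels, height, width)       # one base noise map
    indep = torch.randn(frames, channels, height, width)   # per-frame noise maps
    return rho ** 0.5 * shared + (1.0 - rho) ** 0.5 * indep

noise = correlated_video_noise(frames=16, channels=4, height=64, width=64)
print(noise.shape)  # torch.Size([16, 4, 64, 64])
```
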
πŸ‘23❀5
Introducing ECoDepth: The New Benchmark in Diffusive Mono-Depth

From the labs of IIT Delhi, we unveil ECoDepth: our SIDE model powered by a diffusion backbone and enriched with ViT embeddings. This sets a new standard in single-image depth estimation (SIDE), offering high accuracy and strong semantic understanding.

Key Features:

✅ Revolutionary MDE approach tailored for SIDE tasks
✅ Enhanced semantic context via ViT embeddings
✅ Superior performance in zero-shot transfer tasks
✅ Surpasses previous SOTA models by up to 14%
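
As a toy illustration of the conditioning idea listed above (a semantic ViT embedding steering a dense depth predictor), here is a minimal FiLM-style sketch in PyTorch. It is not ECoDepth's diffusion-based architecture; every name and dimension below is made up for illustration:

```python
import torch
import torch.nn as nn

class ToyConditionedDepthHead(nn.Module):
    """Toy depth head modulated by a global ViT embedding (FiLM-style).

    Mirrors only the high-level idea of semantic conditioning,
    not ECoDepth's actual architecture.
    """
    def __init__(self, feat_dim=256, embed_dim=768):
        super().__init__()
        self.film = nn.Linear(embed_dim, 2 * feat_dim)  # per-channel scale & shift
        self.head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, feats, vit_embedding):
        scale, shift = self.film(vit_embedding).chunk(2, dim=-1)
        feats = feats * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.head(feats)  # (B, 1, H, W) depth map

head = ToyConditionedDepthHead()
depth = head(torch.randn(2, 256, 60, 80), torch.randn(2, 768))
print(depth.shape)  # torch.Size([2, 1, 60, 80])
```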

Dive into the future of depth estimation with ECoDepth. Access our source code and explore the full potential of our model.

📖 Read the Paper
💻 Get the Code

#ArtificialIntelligence #MachineLearning #DeepLearning #ComputerVision #AIwithPapers #Metaverse

Join our community:
👉 @deeplearning_ai
πŸ‘16❀2
Neural Bodies with Clothes: Overview

Introduction:
Neural-ABC is a cutting-edge parametric model from the University of Science and Technology of China for representing clothed human bodies.

Key Features:

✅ Novel approach for modeling clothed human figures.
✅ Unified framework accommodating various clothing types.
✅ Consistent representation of both body and clothing.
✅ Enables seamless modification of identity, shape, clothing, and pose.
✅ Extensive dataset with detailed clothing information.

Explore More:
💻 Project Details: Discover More
📖 Read the Paper: Access Here
💻 Source Code: Explore on GitHub
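
The practical payoff of a disentangled parametric model is that each factor lives in its own latent code, so you can edit one while freezing the rest. A purely hypothetical sketch of that interface (none of these names or sizes come from Neural-ABC):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes for one clothed avatar; sizes are arbitrary.
codes = {
    "identity": rng.normal(size=64),
    "shape": rng.normal(size=16),
    "clothing": rng.normal(size=32),
    "pose": rng.normal(size=72),  # e.g. per-joint rotation parameters
}

def edit(codes, factor, new_code):
    """Swap a single factor while leaving the others untouched."""
    out = dict(codes)
    out[factor] = np.asarray(new_code)
    return out

reposed = edit(codes, "pose", rng.normal(size=72))
assert np.allclose(reposed["clothing"], codes["clothing"])  # clothing unchanged
```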

Relevance: #artificialintelligence #machinelearning #AI #deeplearning #computervision

Join our community:
👉 @deeplearning_ai
πŸ‘12πŸ”₯7❀6
🚀 Explore SCRFD: High-Efficiency, High-Accuracy Face Detection 🚀


Unlock next-level face detection capabilities with SCRFD – efficiency and accuracy in one solution!


📈 Performance at a Glance:

✅ Model range: SCRFD_500M to SCRFD_34G
✅ Accuracy up to 96.06%
✅ Inference as fast as 3.6 ms

πŸ” Explore more and consider starring our repo for updates:
--- GitHub Repository.
--- Paper
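
One quick way to try an SCRFD detector (independent of whichever repo the post links) is the insightface Python package, whose default model pack uses an SCRFD-based detector. A minimal sketch; the image path is a placeholder and defaults may differ by version:

```python
import cv2
from insightface.app import FaceAnalysis

# The default "buffalo_l" model pack ships an SCRFD detector under the hood.
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0: first GPU; negative values typically fall back to CPU

img = cv2.imread("group_photo.jpg")
faces = app.get(img)
for face in faces:
    x1, y1, x2, y2 = face.bbox.astype(int)
    print(f"face at ({x1}, {y1}, {x2}, {y2}), score={face.det_score:.2f}")
```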



#AI #MachineLearning #FaceDetection #TechInnovation #DeepLearning

✅ https://t.me/deeplearning_ai
πŸ‘16❀5πŸ”₯1
πŸ” Discover the Power of Fine-Grained Gaze Estimation with L2CS-Net! 🌟

🚀 Key Features:
✅ Advanced Architecture: Built using state-of-the-art neural network structures.
✅ Versatile Utilities: Packed with utility functions and classes for seamless integration.
✅ Robust Data Handling: Efficient data loading, preprocessing, and augmentation.
✅ Comprehensive Training & Testing: Easy-to-follow scripts for training and testing your models.

👀 Live Demo:
Visualize the power of L2CS-Net with your own video:


🌟 Join Us:
Star our repo on GitHub and be part of the innovative community pushing the boundaries of gaze estimation. Your support drives us forward!

🔗 GitHub Repository

Let's advance gaze estimation together! 🚀🌍 #GazeEstimation #DeepLearning #AI #MachineLearning #ComputerVision
πŸ‘14❀5🀩1
🌟 Exciting AI Breakthrough! Meet U^2-Net! 🌟

🌟 Why U^2-Net?
* Efficient
* Advanced Architecture
* High-Resolution Outputs
🚀 Key Applications:
* Salient Object Detection
* Background Removal
* Medical Imaging

💡 Ready to transform your projects?

✨ Give our repo a ⭐ and show your support!

#AI #DeepLearning #U2Net #ImageSegmentation #OpenSource #GitHub

💻 Source Code: Explore GitHub Repo
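
For a concrete taste of the background-removal use case, the rembg package (a separate tool that runs U^2-Net models under the hood, not part of the U^2-Net repo itself) gives you a near one-liner; paths below are placeholders:

```python
from PIL import Image
from rembg import remove

# Remove the background from an image; the result keeps the salient
# object on a transparent background.
foreground = remove(Image.open("input.jpg"))
foreground.save("output.png")  # PNG preserves the alpha channel
```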

Happy Learning! 🌟
🚀 3DGazeNet: Revolutionizing Gaze Estimation with Weak Supervision! 🌟

Key Features:
🔹 Advanced Neural Network: Built on the robust U2-Net architecture.
🔹 Comprehensive Utilities: Easy data loading, preprocessing, and augmentation.
🔹 Seamless Integration: Train, test, and visualize with simple commands.

Demo Visualization: run the demo by setting your video path in main.py and see the power of 3DGazeNet in action.

Pretrained Weights: get started quickly with our pretrained weights stored in the weights folder.

💻 Source Code: https://github.com/Shohruh72/3DGazeNet
📖 Read the Paper: Access Here


#3DGazeNet #GazeEstimation #AI #DeepLearning #TechInnovation

Join us in pushing the boundaries of gaze estimation technology with 3DGazeNet!
πŸ‘7🀩3❀2πŸ”₯2
🚀 Introducing L2CS-Net: Fine-Grained Gaze Estimation 👀✨

🔗 GitHub Repo: Star ⭐ the Repo

🔥 Key Features:
✅ Fine-grained gaze estimation with deep learning
✅ Supports Gaze360 dataset
✅ Train with Single-GPU / Multi-GPU
✅ Demo for real-time visualization

📌 Quick Start:
🗂️ Prepare dataset
🏋️ Train (python main.py --train)
🎥 Video inference (python main.py --demo); see the visualization sketch below
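
For intuition, a gaze demo's visualization step usually boils down to projecting the predicted pitch/yaw onto the frame as an arrow. A self-contained sketch of that drawing step; the helper name and angle values are illustrative, not the repo's exact code:

```python
import cv2
import numpy as np

def draw_gaze(frame, eye_center, pitch, yaw, length=150, color=(0, 0, 255)):
    """Draw a 2D arrow for a gaze direction given pitch/yaw in radians."""
    dx = -length * np.sin(yaw) * np.cos(pitch)
    dy = -length * np.sin(pitch)
    x, y = int(eye_center[0]), int(eye_center[1])
    cv2.arrowedLine(frame, (x, y), (int(x + dx), int(y + dy)), color, 2, tipLength=0.2)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a video frame
frame = draw_gaze(frame, eye_center=(320, 240), pitch=0.1, yaw=-0.3)
cv2.imwrite("gaze_demo.png", frame)
```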

🌟 Support Open Source! Star ⭐ & Share!
🔗 GitHub Repo: L2CSNet

#AI #DeepLearning #GazeEstimation #L2CSNet #OpenSource 🚀
πŸ‘17❀4πŸ”₯2🀩1
🚀 Refactored HRNet Now Live! 🚀

🔥 Supercharge your computer vision projects with high-resolution HRNet models – fully refactored for easy training/testing!

✅ Multiple ImageNet-pretrained models
✅ Lightning-fast setup
✅ Top-tier accuracy

👉 Check it out & ⭐️ Star the repo if you find it useful!
GitHub: Shohruh72/HRNet
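
The linked repo ships its own training/testing scripts; if you only want to poke at an ImageNet-pretrained HRNet backbone first, timm also provides HRNet variants (this sketch uses timm, not the linked repo):

```python
import timm
import torch

# Load an ImageNet-pretrained HRNet classifier and run a dummy forward pass.
model = timm.create_model("hrnet_w18", pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```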

#AI #DeepLearning #OpenSource

@deeplearning_ai
❀6πŸ‘4πŸ‘Ž4πŸ”₯4🀩2
🚀 3DGazeNet: Next-Gen Gaze Estimation! Get instant results with just one click.

Discover how to train powerful gaze estimation models using only synthetic data and weak supervision; no huge real-world datasets needed.
Perfect for AR/VR, HCI, and beyond.
Cutting-edge, open-source, and ready for your next project!

👉 Try it now: https://github.com/Shohruh72/3DGazeNet

#DeepLearning #GazeEstimation #AI

@deeplearning_ai