Want to jump ahead in artificial intelligence and digital pathology? Excited to share that, after 2+ years of development, PathML 2.0 is out! It is an open-source #computationalpathology software library created by Dana-Farber Cancer Institute/Harvard Medical School and Weill Cornell Medicine, led by Massimo Loda, to lower the barrier to entry to #digitalpathology and #artificialintelligence and to streamline #imageanalysis and #deeplearning workflows.
⭐ Code: https://github.com/Dana-Farber-AIOS/pathml
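To give a feel for the workflow, here is a minimal preprocessing sketch, assuming the HESlide/Pipeline API shown in the PathML docs and a placeholder example.svs slide:

from pathml.core import HESlide
from pathml.preprocessing import Pipeline, TissueDetectionHE

# load an H&E whole-slide image (the path is a placeholder)
wsi = HESlide("example.svs")

# one-step pipeline: detect tissue and store it as a mask named "tissue"
pipeline = Pipeline([TissueDetectionHE(mask_name="tissue")])
wsi.run(pipeline)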
🌴🌴 Direct-a-Video: Driving Video Generation 🌴🌴
👉Direct-a-Video is a text-to-video generation framework that lets users control camera movement and object motion, individually or jointly. Authors: City University of Hong Kong, Kuaishou Technology & Tianjin University.
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Decoupling camera and object motion in generative video
✅Letting users control each independently or jointly
✅Novel temporal cross-attention for camera motion
✅Training-free spatial cross-attention for objects (see the sketch below)
✅Object placement driven by bounding boxes
#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse
👉Channel: @MachineLearning_Programming
👉Paper https://arxiv.org/pdf/2402.03162.pdf
👉Project https://direct-a-video.github.io/
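The training-free object control amounts to steering text-to-image cross-attention: scores for the object's text tokens are raised inside the user's bounding box and lowered outside, so the object is generated where the box was drawn. A minimal illustrative sketch of that idea, not the authors' code; the function name, tensor shapes, and boost value are assumptions:

import torch

def box_guided_attention(attn_logits, box_mask, obj_token_ids, boost=2.0):
    # attn_logits:   (B, heads, HW, T) pre-softmax image-to-text attention
    # box_mask:      (B, HW) float mask, 1 inside the user's box, 0 outside
    # obj_token_ids: indices of the text tokens naming the object
    logits = attn_logits.clone()
    inside = box_mask.unsqueeze(1)  # (B, 1, HW), broadcasts over heads
    for t in obj_token_ids:
        # +boost inside the box, -boost outside it
        logits[..., t] = logits[..., t] + boost * (2.0 * inside - 1.0)
    return logits.softmax(dim=-1)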
LeGrad: a Layerwise Explainability GRADient method for large Vision Transformer (ViT) architectures
Explore More:
💻Demo: try the demo
📖Read the Paper: Access Here
💻Source Code: Explore on GitHub
Relevance: #AI #machinelearning #deeplearning #computervision
Join our community:
👉 @MachineLearning_Programming
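In the layerwise spirit of the method (a generic stand-in, not LeGrad's exact formulation), one can take the gradient of the top class score with respect to each transformer block's token outputs and average the per-layer gradient magnitudes into one map; the timm model and the 14x14 patch grid below are assumptions:

import torch
import timm

# small ViT with a 14x14 patch grid; random weights are fine for a sketch
model = timm.create_model("vit_small_patch16_224", pretrained=False).eval()

# capture each transformer block's output tokens with forward hooks
feats = []
hooks = [blk.register_forward_hook(lambda m, i, o: feats.append(o))
         for blk in model.blocks]

x = torch.randn(1, 3, 224, 224)
score = model(x)[0].max()  # top class score

# gradient of the top score w.r.t. every layer's tokens
grads = torch.autograd.grad(score, feats)

# per-layer saliency: gradient magnitude over patch tokens (CLS dropped),
# averaged across layers into a single 14x14 map
layer_maps = [g[0, 1:, :].norm(dim=-1).reshape(14, 14) for g in grads]
saliency = torch.stack(layer_maps).mean(0)

for h in hooks:
    h.remove()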
🚀 Discover LiteHPE: Advanced Head Pose Estimation 🚀
Features:
🛠️ Setup in Minutes
📈 Top-Tier Performance:
✅ Low Mean Absolute Error (MAE) rates (see the metric sketch below)
✅ Models ranging from MobileOne_s0 to s4
✅ Pretrained models ready for download
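For reference, head-pose MAE is the mean absolute difference between predicted and ground-truth Euler angles; a minimal sketch (the angle-wrapping convention is an assumption, not taken from the repo):

import torch

def head_pose_mae(pred, target):
    # pred, target: (N, 3) tensors of (yaw, pitch, roll) in degrees
    # wrap differences into [-180, 180) so 359 vs 1 counts as 2 degrees
    diff = (pred - target + 180.0) % 360.0 - 180.0
    return diff.abs().mean(dim=0)  # per-angle MAE, shape (3,)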
🌟 Star the repo for the latest updates: LiteHPE on GitHub.
Boost your project's capabilities with LiteHPE – the forefront of head pose estimation technology!
#AI #MachineLearning #HeadPoseEstimation #Technology #DeepLearning
🔗 Join now: @MachineLearning_Programming
🚀 3DGazeNet: Revolutionizing Gaze Estimation with Weak-Supervision! 🌟
Key Features:
🔹 Advanced Neural Network: Built on the robust U2-Net architecture.
🔹 Comprehensive Utilities: Easy data loading, preprocessing, and augmentation.
🔹 Seamless Integration: Train, test, and visualize with simple commands.
Demo Visualization: set your video path in main.py to run the demo and showcase the power of 3DGazeNet.
Pretrained Weights: quick-start with our pretrained weights stored in the weights folder.
💻Source Code: https://github.com/Shohruh72/3DGazeNet
📖Read the Paper: Access Here
#3DGazeNet #GazeEstimation #AI #DeepLearning #TechInnovation
Join us in pushing the boundaries of gaze estimation technology with 3DGazeNet!
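Gaze estimators typically output (yaw, pitch) angles; converting them to a unit 3D gaze vector looks like the sketch below. The axis convention is a common one and is assumed, not taken from the 3DGazeNet code:

import numpy as np

def angles_to_gaze_vector(yaw, pitch):
    # yaw, pitch in radians; assumed axes: x right, y down, z forward,
    # with yaw = pitch = 0 meaning a gaze straight toward the camera
    x = -np.cos(pitch) * np.sin(yaw)
    y = -np.sin(pitch)
    z = -np.cos(pitch) * np.cos(yaw)
    return np.array([x, y, z])  # unit-norm gaze direction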
🚀 PIPNet: One-Click Facial Landmark Detection 🎯
🔗 GitHub: Star ⭐ the Repo | 🎥 Watch Demo
🔥 Key Features:
✅ One-Click Inference – Just run & detect!
✅ High Accuracy (300W dataset) 📊
✅ ResNet-powered for robustness
✅ Supports training, testing & real-time demo 🎥
📌 Run Inference in One Click:
python main.py --demo
📊 Performance:
🔹 ResNet101 (120 epochs) → 3.17 NME
🌟 Support Open Source! Star ⭐ & Share!
🔗 GitHub Repo
#AI #DeepLearning #FacialRecognition #PIPNet #OneClickInference 🚀
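The NME figure above is the standard 300W metric: mean point-to-point landmark error normalized by the inter-ocular distance. A minimal sketch; the outer-eye-corner indices follow the usual 68-point markup and should be treated as an assumption:

import numpy as np

def nme_300w(pred, gt, left_idx=36, right_idx=45):
    # pred, gt: (68, 2) arrays of (x, y) landmarks on the 300W markup
    # normalize by the inter-ocular (outer eye corner) distance
    iod = np.linalg.norm(gt[left_idx] - gt[right_idx])
    return np.linalg.norm(pred - gt, axis=1).mean() / iod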