Want to jump ahead in artificial intelligence and/or digital pathology? Excited to share that after 2+ years of development, PathML 2.0 is out! It is an open-source computational pathology software library created by Dana-Farber Cancer Institute / Harvard Medical School and Weill Cornell Medicine, led by Massimo Loda, to lower the barrier to entry to digital pathology and artificial intelligence and to streamline image analysis and deep learning workflows.
#computationalpathology #digitalpathology #artificialintelligence #imageanalysis #deeplearning
Code: https://github.com/Dana-Farber-AIOS/pathml
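To give a flavor of the streamlined workflow, here is a minimal preprocessing sketch along the lines of the project's quickstart (the slide path is a placeholder; check the repo for the current API):

# Minimal PathML sketch: load an H&E whole-slide image and run a simple
# tissue-detection pipeline. The slide path below is a placeholder.
from pathml.core import HESlide
from pathml.preprocessing import Pipeline, BoxBlur, TissueDetectionHE

wsi = HESlide("path/to/slide.svs")          # any OpenSlide-readable H&E slide
pipeline = Pipeline([
    BoxBlur(kernel_size=15),                # smooth before thresholding
    TissueDetectionHE(mask_name="tissue"),  # add a tissue mask
])
wsi.run(pipeline)                           # tiles and masks are stored on the slide object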
Direct-a-Video: Driving Video Generation
Direct-a-Video is a text-to-video generation framework that lets users control camera movement and/or object motion, individually or jointly. Authors: City University of Hong Kong, Kuaishou Technology & Tianjin University.
Highlights:
✅ Decouples camera and object motion in generative video
✅ Lets users control each independently or jointly
✅ Novel temporal cross-attention layers for camera motion
✅ Training-free spatial cross-attention for object motion (see the sketch after the links below)
✅ Drives object generation via user-drawn bounding boxes
#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse
Channel: @MachineLearning_Programming
Paper: https://arxiv.org/pdf/2402.03162.pdf
Project: https://direct-a-video.github.io/
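The training-free object control can be pictured as biasing text-to-image cross-attention so that an object's text tokens attend inside its user-drawn box and are suppressed outside it. The sketch below is a loose illustration of that idea with toy tensors; the function, shapes, and bias scheme are assumptions for clarity, not code from the paper or project.

# Loose sketch of box-guided spatial cross-attention (hypothetical shapes).
import torch

def box_to_mask(box, h, w):
    """Rasterize a normalized (x0, y0, x1, y1) box into an h x w binary mask."""
    mask = torch.zeros(h, w)
    x0, y0, x1, y1 = (int(box[0] * w), int(box[1] * h),
                      int(box[2] * w), int(box[3] * h))
    mask[y0:y1, x0:x1] = 1.0
    return mask

def box_guided_cross_attention(q, k, v, obj_token_ids, box, h, w, boost=5.0):
    """q: (h*w, d) latent queries for one frame; k, v: (n_text, d) text keys/values."""
    d = q.shape[-1]
    logits = q @ k.T / d ** 0.5                   # (h*w, n_text) attention logits
    mask = box_to_mask(box, h, w).flatten()       # 1 inside the box, 0 outside
    bias = (mask * boost - (1 - mask) * boost).unsqueeze(-1)
    # Encourage the object's text tokens to attend inside the box only.
    logits[:, obj_token_ids] = logits[:, obj_token_ids] + bias
    return torch.softmax(logits, dim=-1) @ v      # (h*w, d)

# Toy usage: 16x16 latent grid, 8 text tokens, object described by tokens 2-3,
# placed in the left half of the frame.
h = w = 16
q, k, v = torch.randn(h * w, 64), torch.randn(8, 64), torch.randn(8, 64)
out = box_guided_cross_attention(q, k, v, [2, 3], (0.0, 0.2, 0.5, 0.8), h, w)
print(out.shape)   # torch.Size([256, 64])

In a real diffusion pipeline such a bias would sit inside the U-Net's cross-attention layers at each denoising step, with the box optionally interpolated across frames to describe the object's trajectory.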
LeGrad: a Layerwise Explainability GRADient method for large Vision Transformer (ViT) architectures
Explore more:
Demo: try it out
Paper: access here
Source code: explore on GitHub
Relevance: #AI #machinelearning #deeplearning #computervision
Join our community:
@MachineLearning_Programming
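To make the idea concrete, here is a loose, generic sketch of a layerwise gradient relevance map for a ViT, in the spirit of LeGrad but not the official implementation (which operates on the attention maps; see the repo above). It assumes timm is installed and uses an untrained model just to show the mechanics.

# Generic layerwise-gradient relevance sketch for a ViT (illustrative only).
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=False).eval()

# Capture each transformer block's output so we can take per-layer gradients.
feats = []
hooks = [blk.register_forward_hook(lambda m, i, o: feats.append(o))
         for blk in model.blocks]

x = torch.randn(1, 3, 224, 224)
logits = model(x)
target = logits[0, logits.argmax()]           # score of the top class

maps = []
for f in feats:
    g = torch.autograd.grad(target, f, retain_graph=True)[0]
    g = g[0, 1:].relu().mean(dim=-1)          # drop the CLS token, average channels
    maps.append(g / (g.max() + 1e-8))         # normalize each layer's map

relevance = torch.stack(maps).mean(dim=0).reshape(14, 14)   # 14x14 patch grid
print(relevance.shape)

for h in hooks:
    h.remove()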
[Attachment: Result.gif, 23.1 MB]
Discover LiteHPE: Advanced Head Pose Estimation
Features:
Setup in minutes.
Top-tier performance:
✅ Achieves low Mean Absolute Error (MAE) rates
✅ Backbones ranging from MobileOne_s0 to MobileOne_s4
✅ Pretrained models ready for download
Star us on GitHub for the latest updates: LiteHPE on GitHub.
Boost your project's capabilities with LiteHPE, the forefront of head pose estimation technology!
#AI #MachineLearning #HeadPoseEstimation #Technology #DeepLearning
Join now: @MachineLearning_Programming
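Since the headline metric is MAE, here is a small illustrative sketch of how head-pose models like this are typically evaluated: predict (yaw, pitch, roll) in degrees and report the Mean Absolute Error per angle. The function and tensors are hypothetical stand-ins, not LiteHPE's actual API.

# Per-angle MAE for head pose estimation (toy data, illustrative only).
import torch

def mean_absolute_error(pred, target):
    """pred, target: (N, 3) tensors of (yaw, pitch, roll) angles in degrees."""
    return (pred - target).abs().mean(dim=0)   # per-angle MAE

# Toy example: fake ground truth plus noise with ~3 degrees standard deviation.
target = torch.randn(100, 3) * 30
pred = target + torch.randn(100, 3) * 3
yaw, pitch, roll = mean_absolute_error(pred, target).tolist()
print(f"MAE  yaw={yaw:.2f}  pitch={pitch:.2f}  roll={roll:.2f}")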
[Attachment: demo.gif, 6.5 MB]
3DGazeNet: Revolutionizing Gaze Estimation with Weak Supervision!
Key Features:
• Advanced neural network: built on the robust U2-Net architecture.
• Comprehensive utilities: easy data loading, preprocessing, and augmentation.
• Seamless integration: train, test, and visualize with simple commands.
Demo visualization: configure your video path in main.py to see 3DGazeNet in action.
Pretrained weights: quick-start with the pretrained weights stored in the weights folder.
Source code: https://github.com/Shohruh72/3DGazeNet
Paper: access here
#3DGazeNet #GazeEstimation #AI #DeepLearning #TechInnovation
Join us in pushing the boundaries of gaze estimation technology with 3DGazeNet!
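For illustration only, this is roughly what a gaze demo does when it overlays a predicted gaze direction on a video frame; the drawing helper and the fake model output below are stand-ins, not 3DGazeNet's actual code from main.py.

# Illustrative gaze-overlay sketch (NOT 3DGazeNet's demo code).
import cv2
import numpy as np

def draw_gaze(frame, eye_center, gaze_3d, length=100):
    """Project a 3D gaze direction (x, y, z) onto the image and draw an arrow."""
    dx = int(length * gaze_3d[0])
    dy = int(length * gaze_3d[1])            # image y grows downward
    tip = (eye_center[0] + dx, eye_center[1] + dy)
    cv2.arrowedLine(frame, eye_center, tip, (0, 255, 0), 2, tipLength=0.2)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a video frame
gaze = np.array([0.3, -0.2, -0.93])               # pretend model output
frame = draw_gaze(frame, (320, 240), gaze)
cv2.imwrite("gaze_demo.png", frame)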
PIPNet: One-Click Facial Landmark Detection
GitHub: star the repo | watch the demo
Key Features:
✅ One-click inference: just run & detect!
✅ High accuracy on the 300W dataset
✅ ResNet-powered for robustness
✅ Supports training, testing & a real-time demo
Run inference in one click:
python main.py --demo
Performance:
• ResNet101 (120 epochs): 3.17 NME (see the metric sketch below)
Support open source! Star & share!
GitHub repo
#AI #DeepLearning #FacialRecognition #PIPNet #OneClickInference
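For context, NME is the Normalized Mean Error standard in facial landmark benchmarks: the mean point-to-point landmark error divided by a normalizing distance (the inter-ocular distance for 300W) and reported as a percentage. A minimal sketch of that metric, illustrative rather than PIPNet's evaluation code:

# NME on a 300W-style 68-point face (illustrative only).
import numpy as np

def nme(pred, gt, left_eye_idx=36, right_eye_idx=45):
    """pred, gt: (68, 2) landmark arrays; indices 36/45 are the outer eye corners."""
    inter_ocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1).mean()
    return 100.0 * per_point / inter_ocular   # reported as a percentage

# Toy example: ground truth plus ~1 px of noise.
gt = np.random.rand(68, 2) * 256
pred = gt + np.random.randn(68, 2)
print(f"NME: {nme(pred, gt):.2f}")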
Refactored HRNet Now Live!
Supercharge your computer vision projects with high-resolution HRNet models, fully refactored for easy training and testing!
✅ Multiple ImageNet-pretrained models
✅ Lightning-fast setup
✅ Top-tier accuracy
Check it out & star the repo if you find it useful!
GitHub: Shohruh72/HRNet
#AI #DeepLearning #OpenSource
@MachineLearning_Programming
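The repo above ships its own training and testing scripts; as a quick, generic stand-in, here is how an ImageNet-pretrained HRNet classifier can be pulled from timm and run on a dummy input (assumes timm is installed; the model names are timm's, not this repo's):

# Load an ImageNet-pretrained HRNet via timm and classify a dummy tensor.
import torch
import timm

model = timm.create_model("hrnet_w18", pretrained=True).eval()   # also: hrnet_w32, hrnet_w48

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
with torch.no_grad():
    probs = model(x).softmax(dim=-1)
top5 = probs.topk(5)
print(top5.indices.tolist(), top5.values.tolist())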
[Attachment: demo2.gif, 14.3 MB]
Live Portraits AI: Bring Photos to Life!
Transform any static portrait into a vivid animation with just one click.
✨ One-click animation
✨ Realistic facial movements
✨ Supports image & video input
[Try it now!]
#AI #PortraitAnimation #DeepLearning
@MachineLearning_Programming