Want to jump ahead in artificial intelligence and/or digital pathology? Excited to share that after 2+ years of development, PathML 2.0 is out! An open-source #computationalpathology software library created by Dana-Farber Cancer Institute / Harvard Medical School and Weill Cornell Medicine, led by Massimo Loda, to lower the barrier to entry to #digitalpathology and #artificialintelligence and streamline #imageanalysis and #deeplearning workflows.
⭐ Code: https://github.com/Dana-Farber-AIOS/pathml
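The library's core idea is a composable preprocessing pipeline applied tile-by-tile to whole-slide images. As a rough, library-agnostic sketch of that pattern (plain NumPy; the transform names here are illustrative stand-ins, not PathML's actual API):

```python
import numpy as np

class Pipeline:
    """Compose a list of transforms and apply them in order to an image tile."""
    def __init__(self, transforms):
        self.transforms = transforms

    def run(self, tile):
        for t in self.transforms:
            tile = t(tile)
        return tile

def box_blur(kernel_size=3):
    # Simple mean filter over a square window (illustrative stand-in
    # for a real blur transform).
    def apply(tile):
        h, w = tile.shape[:2]
        k = kernel_size
        padded = np.pad(tile, ((k // 2, k // 2), (k // 2, k // 2)), mode="edge")
        out = np.empty_like(tile, dtype=float)
        for i in range(h):
            for j in range(w):
                out[i, j] = padded[i:i + k, j:j + k].mean()
        return out
    return apply

def tissue_mask(threshold=0.8):
    # Mark pixels darker than the threshold as tissue; background on
    # H&E slides is near-white (values close to 1.0).
    def apply(tile):
        return (tile < threshold).astype(np.uint8)
    return apply

pipeline = Pipeline([box_blur(3), tissue_mask(0.8)])
tile = np.ones((16, 16))      # an all-white tile: no tissue expected
mask = pipeline.run(tile)
print(mask.sum())             # 0: no pixel falls below the threshold
```

The same pipeline object can then be mapped over every tile of a slide, which is the workflow-streamlining the announcement refers to.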
🆔🆔 Magic-Me: Identity-Specific Video Customized Diffusion 🆔🆔
👉#ByteDance (+ UC Berkeley) unveils VCD for video generation: with just a few images of a specific identity, it can generate temporally consistent videos aligned with a given prompt. Impressive results; source code under Apache 2.0 💙
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Novel Video Custom Diffusion (VCD) framework
✅High-quality ID-specific video generation
✅Improved alignment between ID images and text
✅Robust 3D Gaussian noise prior for denoising
✅Better inter-frame correlation / video consistency
✅New F-VCD/T-VCD modules for video upscaling
✅New training with a masked loss via prompt-to-segmentation
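One way to read the "3D Gaussian noise prior" bullet: instead of sampling i.i.d. noise per frame, mix a noise map shared across all frames with fresh per-frame noise, so adjacent frames start denoising from correlated latents. A minimal sketch of that idea (the mixing weight and exact formulation are illustrative, not necessarily the paper's):

```python
import numpy as np

def correlated_video_noise(num_frames, shape, alpha=0.5, seed=0):
    """Sample per-frame Gaussian noise whose frames are correlated.

    Each frame's noise is sqrt(alpha) * shared + sqrt(1 - alpha) * independent,
    which keeps unit variance while giving corr(frame_i, frame_j) = alpha.
    """
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(shape)        # one map reused by all frames
    frames = []
    for _ in range(num_frames):
        indep = rng.standard_normal(shape)     # fresh noise per frame
        frames.append(np.sqrt(alpha) * shared + np.sqrt(1 - alpha) * indep)
    return np.stack(frames)

noise = correlated_video_noise(8, (64, 64), alpha=0.5)
# Empirical correlation between two frames should sit near alpha.
c = np.corrcoef(noise[0].ravel(), noise[1].ravel())[0, 1]
print(round(c, 2))
```

Higher `alpha` pushes frames toward identical starting noise (more temporal consistency, less motion diversity); `alpha = 0` recovers the i.i.d. baseline.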
#artificialintelligence #machinelearning #ml #AI #deeplearning #computervision #AIwithPapers #metaverse
👉Channel: @deeplearning_ai
👉Paper https://arxiv.org/pdf/2402.09368.pdf
👉Project https://magic-me-webpage.github.io/
👉Code https://github.com/Zhen-Dong/Magic-Me
Introducing ECoDepth: The New Benchmark in Diffusive Mono-Depth
From the labs of IIT Delhi (IITD), we unveil ECoDepth, our groundbreaking single-image depth estimation (SIDE) model powered by a diffusion backbone and enriched with ViT embeddings. This innovation sets a new standard in SIDE, offering unprecedented accuracy and semantic understanding.
Key Features:
✅Revolutionary monocular depth estimation (MDE) approach tailored for SIDE tasks
✅Enhanced semantic context via ViT embeddings
✅Superior performance in zero-shot transfer tasks
✅Surpasses previous SOTA models by up to 14%
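The "semantic context via ViT embeddings" bullet boils down to injecting a global image embedding into the depth network's features. A generic way to do that is FiLM-style modulation; the sketch below is a stand-in for that pattern, not ECoDepth's actual conditioning module:

```python
import numpy as np

rng = np.random.default_rng(0)

def film_condition(features, semantic_emb, w_gamma, w_beta):
    """Modulate spatial feature maps with a global semantic embedding.

    FiLM-style conditioning: the embedding (e.g. a ViT [CLS] token)
    produces a per-channel scale and shift applied across all spatial
    positions, injecting scene-level context into the decoder.
    """
    gamma = w_gamma @ semantic_emb             # per-channel scale, shape (C,)
    beta = w_beta @ semantic_emb               # per-channel shift, shape (C,)
    # features: (C, H, W); broadcast gamma/beta over the spatial dims
    return features * gamma[:, None, None] + beta[:, None, None]

C, H, W, D = 8, 16, 16, 32
features = rng.standard_normal((C, H, W))      # decoder feature maps
cls_emb = rng.standard_normal(D)               # stand-in for a ViT [CLS] token
w_gamma = rng.standard_normal((C, D)) * 0.1    # hypothetical learned projections
w_beta = rng.standard_normal((C, D)) * 0.1
out = film_condition(features, cls_emb, w_gamma, w_beta)
print(out.shape)                               # (8, 16, 16)
```

Because the scale and shift depend on the whole-image embedding, every spatial location of the depth features sees the same scene-level semantics, which is the intuition behind conditioning depth on a ViT representation.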
Dive into the future of depth estimation with ECoDepth. Access our source code and explore the full potential of our model.
📖 Read the Paper
💻 Get the Code
#ArtificialIntelligence #MachineLearning #DeepLearning #ComputerVision #AIwithPapers #Metaverse
Join our community:
👉 @deeplearning_ai
Neural Bodies with Clothes: Overview
Introduction: Neural-ABC is a cutting-edge parametric model for representing clothed human bodies, developed at the University of Science and Technology of China.
Key Features:
✅Novel approach for modeling clothed human figures.
✅Unified framework accommodating various clothing types.
✅Consistent representation of both body and clothing.
✅Enables seamless modification of identity, shape, clothing, and pose.
✅Extensive dataset with detailed clothing information.
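The "seamless modification of identity, shape, clothing, and pose" bullet describes a disentangled latent interface: each attribute has its own code, and editing one leaves the others fixed. A toy sketch of that interface (the decoder and dimensions here are hypothetical, not Neural-ABC's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(identity, shape, clothing, pose, weights):
    """Toy decoder: map concatenated disentangled codes to vertex offsets.

    Illustrates the interface of a parametric model with separate
    identity/shape/clothing/pose latent spaces.
    """
    z = np.concatenate([identity, shape, clothing, pose])
    return np.tanh(weights @ z)               # per-vertex offsets, shape (V,)

dims = dict(identity=8, shape=4, clothing=6, pose=10)   # hypothetical sizes
weights = rng.standard_normal((100, sum(dims.values()))) * 0.1
codes = {k: rng.standard_normal(d) for k, d in dims.items()}

base = decode(codes["identity"], codes["shape"],
              codes["clothing"], codes["pose"], weights)
# Swap only the clothing code: identity, shape, and pose stay fixed.
new_clothing = rng.standard_normal(dims["clothing"])
edited = decode(codes["identity"], codes["shape"],
                new_clothing, codes["pose"], weights)
print(base.shape, bool(np.allclose(base, edited)))
```

The edited body differs from the base only through the clothing code, which is the property that lets a unified model dress the same identity in different garments.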
Explore More:
💻Project Details: Discover More
📖Read the Paper: Access Here
💻Source Code: Explore on GitHub
Relevance: #artificialintelligence #machinelearning #AI #deeplearning #computervision
Join our community:
👉 @deeplearning_ai