ReplaceAnything: demo is out!
👉ReplaceAnything: ultra-high-quality content replacement. The ultimate #AI solution for human, clothing & background replacement, aimed at changing the e-commerce experience for vendors.
👉Review https://t.ly/FMyvf
👉Project https://lnkd.in/dcyZvP2b
👉ModelScope https://lnkd.in/dU4x4nE6
👉Hugging Face https://lnkd.in/dn3uXWgd
👉Repo (empty) https://lnkd.in/dcuGXd6c
👉Paper coming
Transparent Object Tracking
👉Trans2k: a transparent-object tracking dataset of 2,000+ sequences with 100,000+ images, annotated with bounding boxes & segmentation masks.
👉Review https://t.ly/mEI6O
👉Paper https://lnkd.in/dsudY3DB
👉Project https://lnkd.in/d48SSJJ3
👉TOB https://lnkd.in/dykBUNfC
AGNOSTIC Object Counting
👉PseCo: combines SAM to segment all possible objects as mask proposals with CLIP to classify those proposals, yielding accurate object counts. The new SOTA in both few-shot and zero-shot object counting/detection.
👉Review https://t.ly/e4iza
👉Paper https://lnkd.in/dbzMXKWG
👉Repo https://lnkd.in/db9Q9Pse
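The propose-then-classify recipe described above can be sketched in a few lines. Everything below is hypothetical scaffolding for illustration: `propose_masks` stands in for SAM and `clip_score` for CLIP; neither reflects the paper's actual API.

```python
# Class-agnostic counting sketch: segment everything as proposals, then keep
# the proposals whose similarity to the text prompt clears a threshold.
# `propose_masks` and `clip_score` are toy stand-ins for SAM and CLIP.

def propose_masks(image):
    # SAM stand-in: return every candidate region as a proposal.
    return [{"id": i, "region": region} for i, region in enumerate(image)]

def clip_score(region, prompt):
    # CLIP stand-in: toy similarity = fraction of pixel labels matching the prompt.
    return sum(1 for px in region if px == prompt) / max(len(region), 1)

def count_objects(image, prompt, threshold=0.5):
    proposals = propose_masks(image)
    keep = [p for p in proposals if clip_score(p["region"], prompt) >= threshold]
    return len(keep)

# Toy "image": each entry is one proposal's pixel labels.
image = [["cat", "cat", "bg"], ["dog", "bg", "bg"], ["cat", "cat", "cat"]]
print(count_objects(image, "cat"))  # 2 (two proposals are mostly "cat")
```

The point of the split is that counting needs no class-specific training: the proposal stage is class-agnostic, and only the scoring stage consults the prompt.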
Announcing the #Py4AI Conference
👉 Super proud to unveil #Py4AI, the newest conference dedicated to exploring the depths of Python & AI. Py4AI is a one-day, free event for Python and Artificial Intelligence developers.
The first batch of speakers:
👉Merve Noyan | #HuggingFace 🤗
👉Gabriele Lombardi | ARGO Vision
👉Amanda Cercas Curry | Uni. Bocconi
👉Piero Savastano | Cheshire Cat AI
👉Francesco Zuppichini | Zurich Insurance
👉Andrea Palladino, PhD | Sr. Data Scientist
👉 More: https://www.linkedin.com/posts/visionarynet_py4ai-py4ai-python-activity-7152928716988243968-pOUn?utm_source=share&utm_medium=member_desktop
Event details:
✅ 16th March 2024…
Timeline Text-Driven Humans
👉A novel challenge: timeline control for text-driven motion synthesis of 3D humans.
👉Review https://t.ly/HLm-N
👉Paper https://lnkd.in/esaR_M_9
👉Project https://lnkd.in/epCZDvFW
👉Repo coming
AI with Papers - Artificial Intelligence & Deep Learning
Amodal Tracking Any Object 👉"Amodal tracking": inferring complete object boundaries, even when certain portions are occluded. New benchmark & approach, 2x better than SOTA in people tracking. 👉Review https://t.ly/Rc6Ku 👉Paper https://lnkd.in/d39rFYT4…
Code is out!
Check the comments for the links ;)
AlphaGeometry: Olympiad-level AI
👉A theorem prover for Euclidean plane geometry that sidesteps the need for human demonstrations by synthesizing millions of theorems and proofs across different levels of complexity.
👉Review https://t.ly/2-Z7C
👉Paper https://lnkd.in/g3QkqwCE
👉Blog https://lnkd.in/ge-mpM7q
👉Repo https://lnkd.in/gHjwks_9
XINC: Pixels to Neurons
👉eXplaining the Implicit Neural Canvas (XINC), from the University of Maryland, is a unified framework for explaining properties of INRs by examining the strength of each neuron's contribution to each output pixel.
👉Review https://t.ly/wwAmz
👉Paper arxiv.org/pdf/2401.10217.pdf
👉Project namithap10.github.io/xinc
👉Repo github.com/namithap10/xinc
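For a linear output layer, "each neuron's contribution to each output pixel" reduces to splitting the output sum into per-neuron terms: neuron j contributes weight_j × activation_j(p) at pixel p. A toy pure-Python sketch of that decomposition (the tiny weights and activations below are invented for illustration, not the paper's model):

```python
# Toy contribution map: for an output y(p) = sum_j w[j] * a[j](p),
# neuron j's contribution at pixel p is simply w[j] * a[j](p).

def contributions(weights, activations):
    """activations[p][j] -> per-pixel, per-neuron contribution map."""
    return [[w * a for w, a in zip(weights, acts)] for acts in activations]

weights = [0.5, -1.0, 2.0]   # last-layer weights, one per neuron
activations = [              # one row of neuron activations per pixel
    [1.0, 0.25, 0.5],
    [0.0, 1.0, 1.0],
]

cmap = contributions(weights, activations)
# Each pixel's output is exactly the sum of its neurons' contributions.
outputs = [sum(row) for row in cmap]
print(cmap)     # [[0.5, -0.25, 1.0], [0.0, -1.0, 2.0]]
print(outputs)  # [1.25, 1.0]
```

The useful property is that the decomposition is exact: summing a pixel's contribution vector reconstructs the pixel's output, so the map attributes all of the output, with nothing left over.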
One Model <-> All Segmentations
👉10+ different segmentation tasks in one framework, including image-level, video-level, interactive, & open-vocabulary segmentation. All in one!
👉Review https://t.ly/fywVz
👉Paper https://lnkd.in/dw3S4B74
👉Project https://lnkd.in/dzHT9v45
👉Repo https://lnkd.in/d6fDCnSp
GARField: Group Anything
👉GARField is a novel approach for decomposing #3D scenes into a hierarchy of semantically meaningful groups from posed image inputs.
👉Review https://t.ly/6Hkeq
👉Paper https://lnkd.in/d28mfRcZ
👉Project https://lnkd.in/dzYdRNKy
👉Repo (coming) https://lnkd.in/d2VeRJCS
Depth Anything: new SOTA
👉Depth Anything: the new SOTA in monocular depth estimation (MDE), trained jointly on 1.5M labeled images and 62M+ unlabeled images.
👉Review https://t.ly/tCBwO
👉Paper https://lnkd.in/djx-9k2J
👉Project https://lnkd.in/dYetqZFa
👉Repo https://lnkd.in/d87CrUGv
👉Demo 🤗 https://lnkd.in/dJhvKBep
ULTRA-Realistic Avatar
👉A novel 3D avatar with enhanced geometric fidelity and superior physically based rendering (PBR) textures, free of unwanted lighting.
👉Review https://t.ly/B3BEu
👉Project https://lnkd.in/dkUQHFEV
👉Paper https://lnkd.in/dtEQxrBu
👉Code coming
Lumiere: SOTA video-gen
👉#Google unveils Lumiere, a space-time diffusion model for realistic video generation. The new SOTA across tasks: text-to-video, video stylization, cinemagraphs & video inpainting.
👉Review https://t.ly/nalJR
👉Paper https://lnkd.in/d-PvrGjT
👉Project https://t.ly/gK8hz
SUPIR: SOTA restoration
👉SUPIR is the new SOTA in image restoration: suitable for restoring blurry objects, defining the material texture of objects, and adjusting restoration based on high-level semantics.
👉Review https://t.ly/wgObH
👉Project https://supir.xpixel.group/
👉Paper https://lnkd.in/dZPYcUuq
👉Demo coming, but no code announced :(
SAM + Open Models
👉Grounded SAM: Grounding DINO as an open-set detector combined with SAM. It can seamlessly integrate with other open-world models to accomplish more intricate visual tasks.
👉Review https://t.ly/FwasQ
👉Paper arxiv.org/pdf/2401.14159.pdf
👉Code github.com/IDEA-Research/Grounded-Segment-Anything
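The detector-plus-segmenter composition above can be sketched as a two-stage pipeline: a text prompt produces boxes, and each box prompts a segmenter for a mask. The functions below (`detect_boxes`, `segment_box`) are hypothetical stand-ins for Grounding DINO and SAM, not the repository's real API:

```python
# Grounded-SAM-style pipeline sketch: an open-set detector turns a text
# prompt into boxes; a promptable segmenter turns each box into a mask.
# `detect_boxes` and `segment_box` are toy stand-ins, not real APIs.

def detect_boxes(scene, prompt):
    # Detector stand-in: boxes for objects whose label matches the prompt.
    return [obj["box"] for obj in scene if obj["label"] == prompt]

def segment_box(image_size, box):
    # Segmenter stand-in: a binary mask (indexed [y][x]) that is 1 inside the box.
    w, h = image_size
    x0, y0, x1, y1 = box
    return [[1 if x0 <= x < x1 and y0 <= y < y1 else 0 for x in range(w)]
            for y in range(h)]

def grounded_segment(scene, image_size, prompt):
    return [segment_box(image_size, box) for box in detect_boxes(scene, prompt)]

# Toy "scene": labeled objects with boxes (x0, y0, x1, y1) on a 4x4 image.
scene = [{"label": "cat", "box": (0, 0, 2, 2)},
         {"label": "dog", "box": (2, 2, 4, 4)}]
masks = grounded_segment(scene, (4, 4), "cat")
print(len(masks), sum(map(sum, masks[0])))  # 1 4 (one mask covering 4 pixels)
```

Because the two stages only communicate through boxes, either side can be swapped: any open-set detector can feed any promptable segmenter, which is what makes the combination composable with other open-world models.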
"Virtual Try-All" by #Amazon
👉#Amazon announces "Diffuse to Choose": diffusion-based, image-conditioned inpainting for VTON. Virtually place any e-commerce item in any setting.
👉Review https://t.ly/at07Y
👉Paper https://lnkd.in/dxR7nGtd
👉Project diffuse2choose.github.io/
WildRGB-D: Objects in the Wild
👉#NVIDIA unveils a novel RGB-D object dataset captured in the wild: ~8,500 recorded objects, ~20,000 RGB-D videos, 46 categories with corresponding masks and 3D point clouds.
👉Review https://t.ly/WCqVz
👉Data github.com/wildrgbd/wildrgbd
👉Paper arxiv.org/pdf/2401.12592.pdf
👉Project wildrgbd.github.io/
EasyVolcap: Accelerating Neural Volumetric Video
👉A novel #PyTorch library for accelerating neural volumetric-video capturing, reconstruction & rendering.
👉Review https://t.ly/8BISl
👉Paper arxiv.org/pdf/2312.06575.pdf
👉Code github.com/zju3dv/EasyVolcap
Rock-Track announced!
👉Rock-Track: the evolution of Poly-MOT, the previous SOTA tracking-by-detection framework for 3D MOT.
👉Review https://t.ly/hC0ak
👉Repo (coming) https://lnkd.in/dtDkPwCC
👉Paper coming
350+ Free #AI Courses by #Google
👉350+ free courses from #Google to become professional in #AI & #Cloud. The full catalog (900+) includes a variety of activities: videos, documents, labs, coding, and quizzes. 15+ supported languages. No excuse.
✅ Generative AI
✅ Intro to LLMs
✅ ML with TF
✅ ππππ, ππ, ππ
✅ Responsible AI
👉Review: https://t.ly/517Dr
👉Full list: https://www.cloudskillsboost.google/catalog?page=1