Face Foundation Model
Arc2Face, the first foundation model for human faces. Source code released!
Review https://t.ly/MfAFI
Paper https://lnkd.in/dViE_tCd
Project https://lnkd.in/d4MHdEZK
Code https://lnkd.in/dv9ZtDfA
FaceXFormer: Unified Face Transformer
FaceXFormer, the first unified transformer for facial analysis: face parsing, landmark detection, head-pose estimation, and recognition of attributes, age, gender, and race.
Review https://t.ly/MfAFI
Paper https://arxiv.org/pdf/2403.12960.pdf
Project kartik-3004.github.io/facexformer_web/
Code github.com/Kartik-3004/facexformer
DINO-Based Video Tracking
The Weizmann Institute announces a new SOTA in point tracking built on pre-trained DINO features. Source code announced (not yet released).
Review https://t.ly/_GIMT
Paper https://lnkd.in/dsGVDcar
Project dino-tracker.github.io/
Code https://github.com/AssafSinger94/dino-tracker
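The core intuition behind DINO-feature point tracking is easy to sketch, though the paper's actual method refines it heavily: track a point by nearest-neighbor matching of frozen patch descriptors across frames. In the minimal sketch below, `track_point` is a hypothetical helper, and the feature maps are random stand-ins for real DINO patch tokens.

```python
import numpy as np

def track_point(feat_a, feat_b, xy, patch=16):
    """Track a pixel from frame A to frame B by nearest-neighbor
    matching in a frozen patch-feature space. feat_a, feat_b are
    (C, H, W) feature maps; xy is an (x, y) pixel in frame A."""
    C, H, W = feat_a.shape
    px, py = xy[0] // patch, xy[1] // patch          # patch-grid coords
    q = feat_a[:, py, px]
    q = q / np.linalg.norm(q)                        # unit query descriptor
    f = feat_b / np.linalg.norm(feat_b, axis=0, keepdims=True)
    sims = np.einsum('c,chw->hw', q, f)              # cosine-similarity map
    by, bx = np.unravel_index(sims.argmax(), sims.shape)
    # Return the center pixel of the best-matching patch in frame B.
    return int(bx * patch + patch // 2), int(by * patch + patch // 2)

# Random stand-in features (a real run would use DINO ViT patch tokens).
rng = np.random.default_rng(0)
fa = rng.standard_normal((384, 14, 14))
fb = np.roll(fa, 1, axis=2)                          # frame B: shifted one patch right
x, y = track_point(fa, fb, (32, 32))                 # -> (56, 40)
```

Since frame B is frame A shifted one patch to the right, the point at pixel (32, 32), i.e. patch (2, 2), is recovered at the center of patch (3, 2), pixel (56, 40).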
T-Rex 2: a New SOTA Is Out!
A novel (very strong) open-set object detector with strong zero-shot capabilities, suitable for a wide range of scenarios with a single set of weights. Demo and source code released!
Review https://t.ly/fYw8D
Paper https://lnkd.in/dpmRh2zh
Project https://lnkd.in/dnR_jPcR
Code https://lnkd.in/dnZnGRUn
Demo https://lnkd.in/drDUEDYh
TinyBeauty: 460 FPS Makeup
TinyBeauty needs only 80K parameters to reach SOTA in virtual makeup without intricate face prompts. Up to 460 FPS on mobile!
Review https://t.ly/LG5ok
Paper https://arxiv.org/pdf/2403.15033.pdf
Project https://tinybeauty.github.io/TinyBeauty/
AiOS: All-in-One-Stage Humans
An all-in-one-stage framework for SOTA recovery of multiple expressive poses and shapes without an additional human-detection step.
Review https://t.ly/ekNd4
Paper https://arxiv.org/pdf/2403.17934.pdf
Project https://ttxskk.github.io/AiOS/
Code/Demo (announced)
MAVOS Object Segmentation
MAVOS is a transformer-based VOS model with a novel, optimized, dynamic long-term modulated cross-attention memory. Code & models announced (BSD 3-Clause).
Review https://t.ly/SKaRG
Paper https://lnkd.in/dQyifKa3
Project github.com/Amshaker/MAVOS
ObjectDrop: Automagical Object Removal
#Google unveils ObjectDrop, the new SOTA in photorealistic object removal and insertion. The handling of shadows and reflections is impressive!
Review https://t.ly/ZJ6NN
Paper https://arxiv.org/pdf/2403.18818.pdf
Project https://objectdrop.github.io/
Universal Mono Metric Depth
ETH unveils UniDepth: metric 3D scene reconstruction from a single image, across domains. A novel, universal, and flexible monocular metric depth estimation (MMDE) solution. Source code released!
Review https://t.ly/5C8eq
Paper arxiv.org/pdf/2403.18913.pdf
Code github.com/lpiccinelli-eth/unidepth
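What makes metric (as opposed to relative) depth useful is that it lifts directly to real-scale 3D. The sketch below is standard pinhole-camera geometry, not UniDepth's own API: given a predicted metric depth map and known intrinsics, back-project every pixel to a 3D point.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a metric depth map (H, W), in meters, to a (H, W, 3) point
    cloud via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Standard camera geometry; a sketch, not UniDepth-specific code."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))   # pixel coordinates
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)

# A flat wall 2 m away, seen by a toy 4x4-pixel camera.
pts = backproject(np.full((4, 4), 2.0), fx=100, fy=100, cx=2, cy=2)
```

The pixel at the principal point (u, v) = (2, 2) maps to (0, 0, 2): straight ahead, two meters out.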
RELI11D: Multimodal Humans
RELI11D is a high-quality multimodal human motion dataset combining LiDAR, an IMU system, an RGB camera, and an event camera. Dataset & source code to be released soon.
Review https://t.ly/5EG6X
Paper https://lnkd.in/ep6Utcik
Project https://lnkd.in/eDhNHYBb
ECoDepth: SOTA Diffusive Mono-Depth
A new single-image depth estimation (SIDE) model using a diffusion backbone conditioned on ViT embeddings. It's the new SOTA in SIDE. Source code released!
Review https://t.ly/s2pbB
Paper https://lnkd.in/eYt5yr_q
Code https://lnkd.in/eEcyPQcd
GitHub: AssafSinger94/dino-tracker, the official PyTorch implementation of "DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video" (ECCV 2024).
Gen-NeRF2NeRF Translation
GenN2N: unified NeRF-to-NeRF translation for editing tasks such as text-driven NeRF editing, colorization, super-resolution, and inpainting.
Review https://t.ly/VMWAH
Paper arxiv.org/pdf/2404.02788.pdf
Project xiangyueliu.github.io/GenN2N/
Code github.com/Lxiangyue/GenN2N
iSeg: Interactive 3D Segmentation
iSeg is an interactive segmentation technique for 3D shapes that operates entirely in 3D. It accepts both positive and negative clicks directly on the shape's surface, indicating regions to include or exclude.
Review https://t.ly/tyFnD
Paper https://lnkd.in/dydAz8zp
Project https://lnkd.in/de-h6SRi
Code (coming)
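To make the click interface concrete, here is a toy stand-in (iSeg itself learns this mapping with a neural model; this only illustrates the positive/negative-click contract): each mesh vertex is included iff its nearest click in 3D is a positive one. `label_from_clicks` is a hypothetical helper.

```python
import numpy as np

def label_from_clicks(verts, pos_clicks, neg_clicks):
    """Toy click-based 3D segmentation: a vertex is included iff its
    nearest click (Euclidean distance in 3D) is a positive click.
    verts: (V, 3); pos_clicks, neg_clicks: (P, 3) and (N, 3)."""
    clicks = np.vstack([pos_clicks, neg_clicks])
    labels = np.array([True] * len(pos_clicks) + [False] * len(neg_clicks))
    # Pairwise vertex-to-click distances, then take each vertex's nearest click.
    d = np.linalg.norm(verts[:, None, :] - clicks[None, :, :], axis=-1)
    return labels[d.argmin(axis=1)]                  # (V,) boolean mask

verts = np.array([[0.0, 0, 0], [1, 0, 0], [5, 0, 0]])
mask = label_from_clicks(verts,
                         pos_clicks=np.array([[0.2, 0, 0]]),
                         neg_clicks=np.array([[5.0, 0, 0]]))
# First two vertices sit nearer the positive click; the third is excluded.
```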
Neural Bodies with Clothes
Neural-ABC is a novel parametric model based on neural implicit functions that represents clothed human bodies with disentangled latent spaces for identity, clothing, shape, and pose.
Review https://t.ly/Un1wc
Project https://lnkd.in/dhDG6FF5
Paper https://lnkd.in/dhcfK7jZ
Code https://lnkd.in/dQvXWysP
BodyMAP: Human Body & Pressure
#Nvidia (+CMU) unveils BodyMAP, the new SOTA in jointly predicting body mesh (3D pose & shape) and 3D applied pressure on the human body. Source code released; dataset coming.
Review https://t.ly/8926S
Project bodymap3d.github.io/
Paper https://lnkd.in/gCxH4ev3
Code https://lnkd.in/gaifdy3q
XComposer2: 4K Vision-Language
InternLM-XComposer2-4KHD brings LVLM resolution capabilities up to 4K HD (3840×1600) and beyond. Authors: Shanghai AI Lab, CUHK, SenseTime & Tsinghua. Source code & models released!
Review https://t.ly/GCHsz
Paper arxiv.org/pdf/2404.06512.pdf
Code github.com/InternLM/InternLM-XComposer
Flying with Photons: Neural Rendering
A novel neural rendering technique that synthesizes videos of light propagating through a scene from novel, moving camera viewpoints. Picosecond time resolution!
Review https://t.ly/ZqL3a
Paper arxiv.org/pdf/2404.06493.pdf
Project anaghmalik.com/FlyingWithPhotons/
Code github.com/anaghmalik/FlyingWithPhotons
Tracking Any 2D Pixels in 3D
SpatialTracker lifts 2D pixels to 3D using monocular depth, represents the 3D content of each frame efficiently with a triplane representation, and performs iterative updates with a transformer to estimate 3D trajectories.
Review https://t.ly/B28Cj
Paper https://lnkd.in/d8ers_nm
Project https://lnkd.in/deHjtZuE
Code https://lnkd.in/dMe3TvFT
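The triplane representation mentioned above is simple to sketch: store feature grids on three axis-aligned planes (XY, XZ, YZ) and query a 3D point by projecting it onto each plane, bilinearly sampling, and aggregating. A minimal NumPy sketch; summation as the aggregation is an assumption, not necessarily SpatialTracker's exact choice.

```python
import numpy as np

def bilinear(plane, a, b):
    """Bilinearly sample an (R, R, C) feature plane at continuous (a, b)."""
    R = plane.shape[0]
    a, b = np.clip(a, 0, R - 1), np.clip(b, 0, R - 1)
    a0, b0 = int(a), int(b)
    a1, b1 = min(a0 + 1, R - 1), min(b0 + 1, R - 1)
    wa, wb = a - a0, b - b0
    return ((1 - wa) * (1 - wb) * plane[a0, b0] + wa * (1 - wb) * plane[a1, b0]
            + (1 - wa) * wb * plane[a0, b1] + wa * wb * plane[a1, b1])

def triplane_feature(planes, p, res):
    """Feature for a 3D point p in [0, 1]^3: project onto the XY, XZ and
    YZ planes, sample each, and sum the three samples."""
    x, y, z = (c * (res - 1) for c in p)             # grid-space coordinates
    xy, xz, yz = planes
    return bilinear(xy, x, y) + bilinear(xz, x, z) + bilinear(yz, y, z)

rng = np.random.default_rng(0)
planes = [rng.standard_normal((32, 32, 8)) for _ in range(3)]
f = triplane_feature(planes, (0.5, 0.25, 0.75), res=32)   # 8-dim feature
```

The appeal of the triplane is memory: three R×R×C planes instead of a dense R×R×R×C volume, while still giving every 3D point a distinct feature.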
YOLO-CIANNA: Neural Astro
CIANNA is a general-purpose deep learning framework for (but not only for) astronomical data analysis. Source code released!
Review https://t.ly/441XS
Paper arxiv.org/pdf/2402.05925.pdf
Code github.com/Deyht/CIANNA
Wiki github.com/Deyht/CIANNA/wiki