🛋️🛋️ 100% Accurate #3D Labeling 🛋️🛋️
👉#Amazon unveils a novel tool for fine-grained 3D part labeling. Up to 100% accuracy! Paper only😢
😎Review https://bit.ly/3kYpQHQ
😎Paper https://arxiv.org/pdf/2301.10460.pdf
💧FLOW360: 360° Neural Optical Flow💧
👉 The first perceptually realistic 360° video benchmark dataset + SLOF method for OF tracking
😎Review https://bit.ly/3wMZZoX
😎Paper arxiv.org/pdf/2301.11880.pdf
😎Project https://siamlof.github.io
🐓 DREAMIX: General Diffusion-Based Video Editor 🐓
👉#Google unveils the first diffusion-based method able to perform text-based motion/appearance editing of general videos
😎Review https://bit.ly/3I3Hq6B
😎Paper arxiv.org/pdf/2302.01329.pdf
😎Project dreamix-video-editing.github.io/
🧩 Text-Guided #3D Texturing 🧩
👉 Text-Guided HQ textures via iterative diffusion-based process
😎Review https://bit.ly/3ldC6Ez
😎Project texturepaper.github.io/TEXTurePaper
😎Code github.com/TEXTurePaper/TEXTurePaper
😎Paper texturepaper.github.io/TEXTurePaper/static/paper.pdf
🦚 MOSE: coMplex video Object SEgmentation 🦚
👉Novel dataset for VOS is out! The SOTA method on DAVIS scores only 59.4% on MOSE
😎Review https://bit.ly/40yzSzW
😎Paper arxiv.org/pdf/2302.01872.pdf
😎Project henghuiding.github.io/MOSE/
😎Code github.com/henghuiding/MOSE-api
🌘 Gen-1: next-gen Generative #AI 🌘
👉#Runway unveils Gen-1: the next step forward for Generative AI. Registration available for beta -> hurry up!
😎Review https://bit.ly/3YqQYh8
😎Paper arxiv.org/pdf/2302.03011.pdf
😎Project https://research.runwayml.com/gen1
🗿DirectMHP: Multi-Head Pose Estimation🗿
👉Novel E2E multi-person head pose estimation (MPHPE) under full-range angles
😎Review https://bit.ly/3HJubXg
😎Paper arxiv.org/pdf/2302.01110.pdf
😎Code github.com/hnuzhy/DirectMHP
🧱 LEGO-Net: Objects in Rooms 🧱
👉Transformer-based iterative method for rearrangement of objects in messy rooms
😎Review https://bit.ly/3HR0fs6
😎Paper arxiv.org/pdf/2301.09629.pdf
😎Project ivl.cs.brown.edu/#/projects/lego-net
🎃 In-N-Out: 3D-aware OOD video editing 🎃
👉Novel 3D-aware video editing able to manipulate OOD objects (e.g. heavy makeup, accessories)
😎Review https://bit.ly/3jN0CMu
😎Paper arxiv.org/pdf/2302.04871.pdf
😎Project https://in-n-out-3d.github.io
🥸 MEGANE: Generative Morphable Eyeglass 🥸
👉#META unveils the most advanced #3D compositional morphable AI for eyeglasses (HD geometry/photometric interaction)
😎Review https://bit.ly/3jOWifu
😎Paper arxiv.org/pdf/2302.04868.pdf
😎Project junxuan-li.github.io/megane
💘 3D-aware Blending with NeRF 💘
👉Novel 3D-aware blending method via generative NeRFs
😎Review https://bit.ly/3lBEJA2
😎Paper arxiv.org/pdf/2302.06608.pdf
😎Project blandocs.github.io/blendnerf
😎Code github.com/naver-ai/BlendNeRF
🌅 Semantics-guided natural synthesis 🌅
👉Alibaba #AI unveils a novel semantics-guided synthesis of natural scenes
😎Review https://bit.ly/4115MVJ
😎Paper arxiv.org/pdf/2302.07224.pdf
😎Project zju3dv.github.io/paintingnature
🦞 SOTA ALERT: YOWOv2 is out! 🦞
👉 The 2nd generation of YOWO, for real-time detection of spatio-temporal actions
😎Review https://bit.ly/3IscY60
😎Paper arxiv.org/pdf/2302.06848v1.pdf
😎Code github.com/yjh0410/YOWOv2
📬 DIVOTrack: crossview MOT dataset 📬
👉 DIVOTrack + CrossMOT: the ultimate solution for MOT in realistic scenarios
😎Review https://bit.ly/3YSFZgL
😎Paper arxiv.org/pdf/2302.07676.pdf
😎Code github.com/shengyuhao/DIVOTrack
🦩 One-Shot Face via Latent Spaces of StyleGAN2 🦩
👉 Novel video generation framework with control over edits, facial motions, deformations & identity
😎Review https://bit.ly/3xuChhF
😎Paper arxiv.org/pdf/2302.07848.pdf
😎Project trevineoorloff.github.io/FaceVideoReenactment_HybridLatents.io/
🌶️ 3D-aware conditional generative AI 🌶️
👉 Pix2Pix3D: 3D-aware conditional generative AI for controllable photorealistic synthesis
😎Review https://bit.ly/3I80MWS
😎Paper arxiv.org/pdf/2302.08509.pdf
😎Project www.cs.cmu.edu/~pix2pix3D
😎Code github.com/dunbar12138/pix2pix3D
🛡️ TPV: Tesla's O-Net competitor 🛡️
👉From Beijing, an open-source approach to vision-centric #3D perception for autonomous driving
😎Review https://bit.ly/3lNvVYc
😎Paper arxiv.org/pdf/2302.07817.pdf
😎Code github.com/wzzheng/TPVFormer
🏀 #NBA Mixed Reality is NUTS 🏀
👉The premiere of the streaming app of the #NBA is totally INSANE. A mix of #AI, CG and much more👇
🏀More: https://bit.ly/3IJ3uUp
🫳 Neural Relighting of Hands 🫴
👉#META unveils the first neural relighting for personalized hands in real-time under novel illumination
😎Review https://bit.ly/3SblmKC
😎Paper arxiv.org/pdf/2302.04866.pdf
😎Project sh8.io/#/relightable_hands
🪁 VoxFormer: 2D->#3D Voxel ViT🪁
👉#Nvidia VoxFormer: #3D volumetric semantics from 2D images
😎Review https://bit.ly/3Kw9Yab
😎Paper arxiv.org/pdf/2302.12251.pdf
😎Code github.com/NVlabs/VoxFormer