⚛️ Flying w/ Photons: Neural Render ⚛️
👉Novel neural rendering technique that synthesizes videos of light propagating through a scene from novel, moving camera viewpoints, with picosecond time resolution! A toy rendering sketch follows the links below.
👉Review https://t.ly/ZqL3a
👉Paper arxiv.org/pdf/2404.06493.pdf
👉Project anaghmalik.com/FlyingWithPhotons/
👉Code github.com/anaghmalik/FlyingWithPhotons
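Illustrative only: a minimal PyTorch sketch of time-resolved ("transient") volume rendering along a single ray, i.e., standard alpha compositing where each sample's contribution is binned by its light time of flight. This is a toy of the general idea, not the authors' formulation; the two-way time of flight assumes a pulsed source co-located with the camera, and all shapes/constants are placeholders.
```python
import torch

def transient_composite(sigma, color, dists, t0, bin_size, n_bins, step=0.01):
    """Toy time-resolved volume rendering along one ray.

    sigma: (S,) per-sample density, color: (S, 3) per-sample radiance,
    dists: (S,) per-sample distance from the camera (meters).
    Returns an (n_bins, 3) transient histogram for this ray.
    """
    c = 3e8                                     # speed of light [m/s]
    alpha = 1.0 - torch.exp(-sigma * step)      # opacity per sample
    trans = torch.cumprod(                      # transmittance before each sample
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    tof = 2.0 * dists / c                       # camera -> point -> camera
    bins = ((tof - t0) / bin_size).long().clamp(0, n_bins - 1)
    video = torch.zeros(n_bins, 3)
    video.index_add_(0, bins, weights[:, None] * color)  # scatter into time bins
    return video

# e.g. 64 samples along a ray, 4 ps time bins
out = transient_composite(torch.rand(64), torch.rand(64, 3),
                          torch.linspace(0.5, 3.0, 64),
                          t0=0.0, bin_size=4e-12, n_bins=2048)
```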
☄️ Tracking Any 2D Pixels in 3D ☄️
👉 SpatialTracker lifts 2D pixels to 3D using monocular depth, represents each frame's 3D content efficiently with a triplane representation, and iteratively updates 3D trajectories with a transformer. A toy triplane-sampling sketch follows the links below.
👉Review https://t.ly/B28Cj
👉Paper https://lnkd.in/d8ers_nm
👉Project https://lnkd.in/deHjtZuE
👉Code https://lnkd.in/dMe3TvFT
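Illustrative only: a minimal sketch of the triplane idea described above, where features for 3D points are gathered by bilinearly sampling three axis-aligned feature planes and a small transformer refines per-point 3D offsets iteratively. The plane fusion by summation, shapes, and the tiny update module are assumptions, not the released code.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_triplane(planes, pts):
    """planes: dict of (1, C, H, W) feature planes keyed 'xy', 'xz', 'yz';
    pts: (N, 3) points normalized to [-1, 1]. Returns (N, C) fused features."""
    coords = {"xy": pts[:, [0, 1]], "xz": pts[:, [0, 2]], "yz": pts[:, [1, 2]]}
    feats = 0.0
    for k, plane in planes.items():
        grid = coords[k].view(1, -1, 1, 2)                   # (1, N, 1, 2)
        f = F.grid_sample(plane, grid, align_corners=True)   # (1, C, N, 1)
        feats = feats + f[0, :, :, 0].t()                    # (N, C)
    return feats

C, N = 32, 100
planes = {k: torch.randn(1, C, 64, 64) for k in ("xy", "xz", "yz")}
tracks = torch.rand(N, 3) * 2 - 1                            # current 3D estimates
updater = nn.TransformerEncoderLayer(d_model=C, nhead=4, batch_first=True)
head = nn.Linear(C, 3)

for _ in range(4):                                           # iterative refinement
    feats = sample_triplane(planes, tracks)                  # (N, C) point features
    tracks = tracks + 0.01 * head(updater(feats[None]))[0]   # small 3D updates
```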
🪐YOLO-CIANNA: Neural Astro🪐
👉 CIANNA is a general-purpose deep learning framework aimed at (but not limited to) astronomical data analysis. Source Code released 💙
👉Review https://t.ly/441XS
👉Paper arxiv.org/pdf/2402.05925.pdf
👉Code github.com/Deyht/CIANNA
👉Wiki github.com/Deyht/CIANNA/wiki
🧤Neuro MusculoSkeletal-MANO🧤
👉SJTU unveils MusculoSkeletal-MANO, a novel musculoskeletal system with a learnable parametric hand model. Source Code announced 💙
👉Review https://t.ly/HOQrn
👉Paper arxiv.org/pdf/2404.10227.pdf
👉Project https://ms-mano.robotflow.ai/
👉Code announced (no repo yet)
⚽SoccerNET: Athlete Tracking⚽
👉The SoccerNet challenge introduces a novel high-level computer vision task specific to sports analytics: recognizing the state of a sport game, i.e., identifying and localizing all individuals on the field (players, referees, etc.).
👉Review https://t.ly/Mdu9s
👉Paper arxiv.org/pdf/2404.11335.pdf
👉Code github.com/SoccerNet/sn-gamestate
🎲 Articulated Objs from MonoClips 🎲
👉REACTO is the new SOTA for the challenge of reconstructing general articulated 3D objects from a single monocular video.
👉Review https://t.ly/REuM8
👉Paper https://lnkd.in/d6PWagij
👉Project https://lnkd.in/dpg3x4tm
👉Repo https://lnkd.in/dRZWj6_N
🪼 All You Need is SAM (+Flow) 🪼
👉Oxford unveils the new SOTA for moving object segmentation via SAM + Optical Flow. Two novel models & Source Code announced 💙 A rough flow-prompted-SAM sketch follows the links below.
👉Review https://t.ly/ZRYtp
👉Paper https://lnkd.in/d4XqkEGF
👉Project https://lnkd.in/dHpmx3FF
👉Repo coming: https://github.com/Jyxarthur/
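Illustrative only: a rough sketch of the flow-then-prompt idea, assuming the segment-anything package and a downloaded SAM checkpoint (the path is a placeholder). Flow magnitude is thresholded to find moving pixels, and a few of them are used as point prompts for SAM; the paper's actual models and prompting strategy differ.
```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def moving_object_mask(frame_prev, frame_curr, sam_checkpoint="sam_vit_b.pth"):
    # 1) dense optical flow between consecutive frames (classical Farneback here)
    g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=-1)

    # 2) pick a few strongly-moving pixels as point prompts
    ys, xs = np.where(mag > np.percentile(mag, 98))
    idx = np.random.choice(len(xs), size=min(5, len(xs)), replace=False)
    points = np.stack([xs[idx], ys[idx]], axis=1)            # (K, 2) in (x, y)

    # 3) prompt SAM with those points and keep the best-scoring mask
    sam = sam_model_registry["vit_b"](checkpoint=sam_checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(cv2.cvtColor(frame_curr, cv2.COLOR_BGR2RGB))
    masks, scores, _ = predictor.predict(point_coords=points.astype(np.float32),
                                         point_labels=np.ones(len(points)))
    return masks[scores.argmax()]
```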
🛞 6Img-to-3D driving scenarios 🛞
👉EPFL (+ Continental) unveils 6Img-to-3D, a novel transformer-based encoder-renderer method to create unbounded 3D outdoor driving scenarios from only six input images.
👉Review https://shorturl.at/dZ018
👉Paper arxiv.org/pdf/2404.12378.pdf
👉Project 6img-to-3d.github.io/
👉Code github.com/continental/6Img-to-3D
🌹 Physics-Based 3D Video-Gen 🌹
👉PhysDreamer is a physics-based approach that leverages the object dynamics priors learned by video generation models, enabling realistic 3D interaction with objects.
👉Review https://t.ly/zxXf9
👉Paper arxiv.org/pdf/2404.13026.pdf
👉Project physdreamer.github.io/
👉Code github.com/a1600012888/PhysDreamer
🎡 NER-Net: Seeing at Night-Time 🎡
👉Huazhong (+ Beijing) unveils a novel event-based nighttime imaging solution for non-uniform illumination, plus a paired multi-illumination-level real-world dataset. Repo online, code coming 💙
👉Review https://t.ly/Z9JMJ
👉Paper arxiv.org/pdf/2404.11884.pdf
👉Repo github.com/Liu-haoyue/NER-Net
👉Clip https://www.youtube.com/watch?v=zpfTLCF1Kw4
🌊 FlowMap: dense depth video 🌊
👉MIT CSAIL unveils FlowMap, a novel end-to-end differentiable method that solves for precise camera poses, camera intrinsics, and per-frame dense depth of a video sequence. Source Code released 💙 A toy flow-supervised optimization sketch follows the links below.
👉Review https://t.ly/CBH48
👉Paper arxiv.org/pdf/2404.15259.pdf
👉Project cameronosmith.github.io/flowmap
👉Code github.com/dcharatan/flowmap
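Illustrative only: a toy two-frame version of the general idea, optimizing dense depth, a relative pose, and a shared focal length by gradient descent so that the induced reprojection agrees with a precomputed optical flow field. This is not the FlowMap code (the real method uses neural parameterizations, point tracks, and more); the flow below is a zero placeholder.
```python
import torch

H, W = 48, 64
flow = torch.zeros(H, W, 2)                     # placeholder for off-the-shelf flow A->B

log_depth = torch.zeros(H, W, requires_grad=True)   # dense depth for frame A (exp'd)
pose6 = torch.zeros(6, requires_grad=True)          # axis-angle rotation + translation
log_f = torch.tensor(4.0, requires_grad=True)       # shared focal length (exp'd)

ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
pix = torch.stack([xs, ys], -1).float()             # (H, W, 2) pixel grid
cx, cy = (W - 1) / 2.0, (H - 1) / 2.0

def rotation(w):                                    # Rodrigues' formula
    theta = w.norm() + 1e-8
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([torch.stack([zero, -w[2], w[1]]),
                     torch.stack([w[2], zero, -w[0]]),
                     torch.stack([-w[1], w[0], zero])]) / theta
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

opt = torch.optim.Adam([log_depth, pose6, log_f], lr=1e-2)
for step in range(200):
    f, depth = log_f.exp(), log_depth.exp()
    X = (pix[..., 0] - cx) / f * depth              # back-project frame-A pixels
    Y = (pix[..., 1] - cy) / f * depth
    P = torch.stack([X, Y, depth], -1).reshape(-1, 3)
    Pb = P @ rotation(pose6[:3]).t() + pose6[3:]    # rigidly move into frame B
    z = Pb[:, 2].clamp(min=1e-3)
    uv = torch.stack([f * Pb[:, 0] / z + cx, f * Pb[:, 1] / z + cy], -1)
    pred_flow = uv.reshape(H, W, 2) - pix           # flow induced by depth + pose
    loss = (pred_flow - flow).abs().mean()          # match the observed flow
    opt.zero_grad(); loss.backward(); opt.step()
```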
👗TELA: Text to 3D Clothed Human👗
👉 TELA is a novel approach for the new task of clothing-disentangled 3D human model generation from text. It unlocks many downstream applications (e.g., virtual try-on).
👉Review https://t.ly/6N7JV
👉Paper https://arxiv.org/pdf/2404.16748
👉Project https://jtdong.com/tela_layer/
👉Code https://github.com/DongJT1996/TELA
🪷 Tunnel Try-on: SOTA VTON 🪷
👉"Tunnel Try-on", the first diffusion-based video virtual try-on model that demonstrates SOTA performance in complex scenarios. No code announced :(
👉Review https://t.ly/joMtJ
👉Paper arxiv.org/pdf/2404.17571
👉Project mengtingchen.github.io/tunnel-try-on-page/
👉"Tunnel Try-on", the first diffusion-based video virtual try-on model that demonstrates SOTA performance in complex scenarios. No code announced :(
👉Review https://t.ly/joMtJ
👉Paper arxiv.org/pdf/2404.17571
👉Project mengtingchen.github.io/tunnel-try-on-page/
❤9🔥4👍1🥰1🍾1
This media is not supported in your browser
VIEW IN TELEGRAM
🏝️1000x Scalable Neural 3D Fields🏝️
👉Highly scalable neural 3D fields: up to 1000x reduction in memory while maintaining speed and quality (10 MB vs. 10 GB!). Code released 💙
👉Review https://t.ly/sLTK5
👉Paper https://lnkd.in/dEYM8-t2
👉Project https://lnkd.in/djptdujx
👉Code https://lnkd.in/dcCnFZ2n
🌐3D Scenes w/ Depth Inpainting🌐
👉Oxford announced two novel contributions to the field of 3D scene generation: a new benchmark and a novel depth completion model. 🤗-Demo and Source Code released💙
👉Review https://t.ly/BKiny
👉Paper arxiv.org/pdf/2404.19758
👉Project research.paulengstler.com/invisible-stitch/
👉Code github.com/paulengstler/invisible-stitch
👉Demo huggingface.co/spaces/paulengstler/invisible-stitch
🌊 Diffusive 3D Human Recovery 🌊
👉Rutgers University unveils ScoreHMR at #CVPR24: a novel approach for 3D human pose and shape reconstruction. Impressive results.
👉Review https://t.ly/G0k2D
👉Paper https://arxiv.org/pdf/2403.09623
👉Code https://github.com/statho/ScoreHMR
👉Project https://statho.github.io/ScoreHMR/
🏷️DiffMOT (#CVPR24): diffusion-MOT🏷️
👉DiffMOT is a novel real-time diffusion-based MOT approach that tackles complex nonlinear motion. Impressive results & Source Code released💙 A toy association sketch (not DiffMOT itself) follows the links below.
👉Review https://t.ly/ztlHi
👉Paper https://lnkd.in/d4K3c-nt
👉Project https://diffmot.github.io/
👉Code github.com/Kroery/DiffMOT
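Not DiffMOT itself, just a reminder of where a motion model sits in tracking-by-detection: boxes predicted by the motion model (a Kalman filter classically, or DiffMOT's learned diffusion predictor) are associated to the current detections, here with an IoU cost and Hungarian matching via SciPy. The motion prediction step itself is left out; box format and threshold are arbitrary choices.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Pairwise IoU between two sets of boxes in (x1, y1, x2, y2) format."""
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def associate(predicted_boxes, detections, iou_thresh=0.3):
    """Match motion-model predictions to detections with Hungarian matching."""
    cost = 1.0 - iou(predicted_boxes, detections)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1.0 - iou_thresh]
```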
🍏 XFeat: Neural Features Matching 🍏
👉XFeat (Accelerated Features) is a lightweight, accurate architecture for efficient visual correspondence. It revisits fundamental design choices in CNNs for detecting, extracting & matching local features. A toy descriptor-matching sketch follows the links below.
👉Review https://t.ly/ppb38
👉Paper arxiv.org/pdf/2404.19174
👉Code https://lnkd.in/dFzTpzN8
👉Project https://lnkd.in/d8JnV-iu
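Illustrative only: the matching stage mentioned above, written as a generic mutual-nearest-neighbour matcher over two descriptor sets. Any local descriptors would do, and the cosine-similarity threshold is an arbitrary choice; the XFeat repo ships its own detector/descriptor and matchers.
```python
import torch
import torch.nn.functional as F

def mutual_nn_match(desc_a, desc_b, min_cossim=0.8):
    """Mutual-nearest-neighbour matching of local descriptors.

    desc_a: (N, D), desc_b: (M, D). Returns (K, 2) index pairs into A and B.
    """
    da = F.normalize(desc_a, dim=-1)
    db = F.normalize(desc_b, dim=-1)
    sim = da @ db.t()                       # cosine similarity matrix (N, M)
    nn_ab = sim.argmax(dim=1)               # best match in B for each A
    nn_ba = sim.argmax(dim=0)               # best match in A for each B
    idx_a = torch.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a          # A -> B -> A consistency check
    good = sim[idx_a, nn_ab] > min_cossim   # discard weak matches
    keep = mutual & good
    return torch.stack([idx_a[keep], nn_ab[keep]], dim=1)

matches = mutual_nn_match(torch.randn(500, 64), torch.randn(480, 64))
```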
🦑 Hyper-Detailed Image Descriptions 🦑
👉#Google unveils ImageInWords (IIW), a carefully designed human-in-the-loop (HIL) annotation framework for curating hyper-detailed image descriptions, and a new dataset resulting from this process. A dataset-loading sketch follows the links below.
👉Review https://t.ly/engkl
👉Paper arxiv.org/pdf/2405.02793
👉Repo github.com/google/imageinwords
👉Project google.github.io/imageinwords
👉Data huggingface.co/datasets/google/imageinwords
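A possible way to peek at the dataset via 🤗 datasets. The dataset id comes from the link above, but the configuration and split names below are assumptions; check the dataset card for the real ones.
```python
from datasets import load_dataset

# "IIW-400" and "test" are assumed config/split names -- verify on the dataset card.
ds = load_dataset("google/imageinwords", "IIW-400", split="test")
print(ds)                                            # size and column names
print({k: str(v)[:80] for k, v in ds[0].items()})    # truncated first record
```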
🔫 Free-Moving Reconstruction 🔫
👉EPFL (+#MagicLeap) unveils a novel approach for reconstructing a free-moving object from a monocular RGB clip. It enables free interaction with objects in front of a moving camera without relying on any priors, and optimizes the sequence globally without splitting it into segments. Great results, but no code announced🥺
👉Review https://t.ly/2xhtj
👉Paper arxiv.org/pdf/2405.05858
👉Project haixinshi.github.io/fmov/