HD Avatar via Text & Pose
Generating expressive #3D avatars from nothing but text descriptions & pose guidance.
Review: https://t.ly/wrSMH
Paper: arxiv.org/pdf/2308.03610.pdf
Project: avatarverse3d.github.io
Controllable Synthetic Data (Extending ImageNet)
#META's PUG: a new generation of interactive environments for representation learning, extending ImageNet!
Review: https://t.ly/nCYs0
Paper: arxiv.org/pdf/2308.03977.pdf
Project: pug.metademolab.com
Code: github.com/facebookresearch/PUG
Neuralangelo Digital Twins. INSANE!
A novel framework from #Nvidia for Hi-Fi 3D digital twins.
Review: https://t.ly/rxoF4
Project: research.nvidia.com/labs/dir/neuralangelo
Paper: research.nvidia.com/labs/dir/neuralangelo/paper.pdf
Code: github.com/NVlabs/neuralangelo (official implementation of "Neuralangelo: High-Fidelity Neural Surface Reconstruction", CVPR 2023)
Tracking by Persistent Dynamic View Synthesis
A single approach that simultaneously addresses dynamic-scene novel-view synthesis and 6-DOF tracking of all dense scene elements.
Review: https://t.ly/Bc535
Paper: arxiv.org/pdf/2308.09713.pdf
Project: dynamic3dgaussians.github.io
Code: github.com/JonathonLuiten/Dynamic3DGaussians
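To see why a persistent set of dynamic Gaussians yields dense tracking essentially for free: if the same Gaussians are kept across all timesteps and only their parameters are re-optimized per frame, each Gaussian's center traces a 3D trajectory. A minimal sketch of that idea (not the authors' code; array shapes and the nearest-Gaussian lookup are assumptions):

```python
# Illustrative sketch: dense 3D tracks from per-frame Gaussian centers.
import numpy as np

# centers[t] is an (N, 3) array of Gaussian means at timestep t; N is fixed across time.
centers = [np.random.rand(10_000, 3) for _ in range(60)]   # placeholder data

tracks = np.stack(centers, axis=1)            # (N, T, 3): one 3D trajectory per Gaussian
step = tracks[:, 1:] - tracks[:, :-1]
speed = np.linalg.norm(step, axis=-1)         # (N, T-1) per-Gaussian frame-to-frame motion

# Track an arbitrary query point from frame 0 by attaching it to its nearest Gaussian.
query = np.array([0.2, 0.5, 0.1])
gid = int(np.argmin(np.linalg.norm(tracks[:, 0] - query, axis=-1)))
query_track = tracks[gid]                     # (T, 3) trajectory followed by the query point
```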
Digital Twins for AutoRetail Checkout
From #Nvidia: a novel approach that uses 3D assets to train 2D detection and tracking models for automated retail checkout.
Review: https://t.ly/Ea7kt
Paper: arxiv.org/pdf/2308.09708.pdf
Code: github.com/yorkeyao/Automated-Retail-Checkout
SportsMOT + MixSort = Sports MOT
Nanjing University just released a MOT dataset for sports scenes plus the SOTA tracking code/model (MixSort).
Review: https://t.ly/NHUxL
Paper: arxiv.org/pdf/2304.05170.pdf
Code: github.com/MCG-NJU/MixSort
Project: deeperaction.github.io/datasets/sportsmot.html
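For readers new to the MOT setting: the core step is associating fresh detections with existing tracks frame by frame. The sketch below is a generic IoU + Hungarian-assignment baseline for that step; it is not MixSort itself, whose matching also incorporates appearance cues.

```python
# Generic detection-to-track association (IoU cost + Hungarian assignment), for illustration only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def associate(track_boxes, det_boxes, iou_thresh=0.3):
    """Return (track_idx, det_idx) pairs whose IoU clears the threshold."""
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
```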
Feature Matching at Light Speed
LightGlue is a lightweight feature matcher with high accuracy and blazing-fast inference.
Review: https://t.ly/jkecX
Paper: arxiv.org/pdf/2306.13643.pdf
Code: github.com/cvg/LightGlue
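A minimal usage sketch following the repository's README (SuperPoint features + LightGlue matcher); treat the exact imports and dictionary keys as assumptions, since the API may shift between versions:

```python
# Match two images with SuperPoint + LightGlue (sketch based on the repo's documented usage).
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = SuperPoint(max_num_keypoints=2048).eval().to(device)   # local feature extractor
matcher = LightGlue(features="superpoint").eval().to(device)       # lightweight matcher

image0 = load_image("im0.jpg").to(device)
image1 = load_image("im1.jpg").to(device)

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
out = matcher({"image0": feats0, "image1": feats1})
feats0, feats1, out = [rbd(x) for x in (feats0, feats1, out)]       # drop the batch dimension

matches = out["matches"]                          # (K, 2) indices into the two keypoint sets
pts0 = feats0["keypoints"][matches[:, 0]]         # matched keypoints in image 0
pts1 = feats1["keypoints"][matches[:, 1]]         # matched keypoints in image 1
```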
CoDeF: Video Content Deformation Fields
CoDeF is a new type of video representation for video-editing tasks.
Review: https://t.ly/PIVl-
Paper: arxiv.org/pdf/2308.07926.pdf
Project: https://qiuyu96.github.io/CoDeF
Code: https://github.com/qiuyu96/CoDeF
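The representation pairs a canonical content image with per-frame deformation fields, so a single edit to the canonical image can be propagated to every frame. A conceptual sketch of that propagation step (function names and tensor layouts are assumptions, not the CoDeF API):

```python
# Propagate one edit of the canonical image to all frames via per-frame deformation grids.
import torch
import torch.nn.functional as F

def propagate_edit(edited_canonical: torch.Tensor, deformation_grids: torch.Tensor) -> torch.Tensor:
    """edited_canonical: (1, C, H, W); deformation_grids: (T, H, W, 2) sampling grids in [-1, 1]."""
    frames = []
    for grid in deformation_grids:
        # Each frame samples the edited canonical content through its own deformation field.
        frames.append(F.grid_sample(edited_canonical, grid.unsqueeze(0), align_corners=True))
    return torch.cat(frames, dim=0)               # (T, C, H, W) temporally consistent edited video
```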
Hello everybody,
a lot of you asked me to open the comments to better enjoy the posts. I want to follow your suggestion; I hope you will enjoy this new mode!
NO SPAM
NO COMMERCIAL
NO DISRESPECTFUL MESSAGES
JUST AI & SCIENCE
BAN AT THE FIRST VIOLATION
Instance-Level Semantics of Cells
TYC: a novel dataset for understanding instance-level semantics & motions of cells in microstructures.
Review: https://t.ly/y-4VZ
Paper: arxiv.org/pdf/2308.12116.pdf
Project: christophreich1996.github.io/tyc_dataset/
Code: github.com/ChristophReich1996/TYC-Dataset
Data: tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3930
POCO: 3D HPS + Confidence
A novel framework for human pose & shape (HPS) estimation: a #3D human body plus a confidence estimate in a single feed-forward pass.
Review: https://t.ly/cDePe
Paper: arxiv.org/pdf/2308.12965.pdf
Project: https://poco.is.tue.mpg.de
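One common way to obtain a confidence alongside the regressed body in the same forward pass is to predict an uncertainty term with the pose and train it with a heteroscedastic Gaussian NLL. The sketch below illustrates that generic recipe only; it is an assumption, not the POCO architecture or loss:

```python
# Generic pose-plus-confidence head (illustrative; not the POCO model).
import torch
import torch.nn as nn

class PoseWithConfidence(nn.Module):
    def __init__(self, feat_dim: int = 2048, n_pose: int = 72):
        super().__init__()
        self.pose_head = nn.Linear(feat_dim, n_pose)     # e.g. body-model pose parameters
        self.logvar_head = nn.Linear(feat_dim, 1)        # log-variance, i.e. inverse confidence

    def forward(self, feats):
        return self.pose_head(feats), self.logvar_head(feats)

def gaussian_nll(pred_pose, log_var, gt_pose):
    # Large predicted variance (low confidence) down-weights the error but pays a log penalty.
    sq_err = ((pred_pose - gt_pose) ** 2).mean(dim=-1, keepdim=True)
    return (torch.exp(-log_var) * sq_err + log_var).mean()

model = PoseWithConfidence()
pose, log_var = model(torch.randn(8, 2048))
confidence = torch.exp(-log_var)          # higher value = more confident prediction
```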
NeO360: NeRF for Sparse Outdoor Scenes
#Toyota (+GIT) unveils NeO360: 360° outdoor scenes from a single or a few posed RGB images.
Review: https://t.ly/JDJZg
Paper: arxiv.org/pdf/2308.12967.pdf
Project: zubair-irshad.github.io/projects/neo360.html
Scenimefy: I2I for Anime
S-Lab unveils a novel semi-supervised image-to-image (I2I) translation framework plus an HD dataset for anime.
Review: https://t.ly/IsdEG
Paper: arxiv.org/pdf/2308.12968.pdf
Code: https://github.com/Yuxinn-J/Scenimefy
Project: https://yuxinn-j.github.io/projects/Scenimefy.html
Watch Your Steps: Editing by Text
The new SOTA in text-driven image & scene editing via denoising diffusion models.
Review: https://t.ly/fv9wn
Paper: arxiv.org/pdf/2308.08947.pdf
Project: ashmrz.github.io/WatchYourSteps
Relighting NeRF
Neural implicit radiance representation for free-viewpoint relighting of an object lit by a moving point light.
Review: https://t.ly/J-3_L
Project: nrhints.github.io
Code: github.com/iamNCJ/NRHints
Paper: nrhints.github.io/pdfs/nrhints-sig23.pdf
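To make "radiance conditioned on a moving point light" concrete, here is a toy field that takes the light position as an extra input so the emitted color can change with illumination. This is an assumed, simplified architecture for illustration only, not the NRHints model (no positional encoding, shadow/highlight hints, or volume renderer):

```python
# Toy relightable radiance field: density from position, color from (features, view dir, light pos).
import torch
import torch.nn as nn

class RelightableField(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma_head = nn.Linear(hidden, 1)        # density depends on position only
        self.color_head = nn.Sequential(              # color also sees view dir and light position
            nn.Linear(hidden + 3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, xyz, view_dir, light_pos):
        h = self.trunk(xyz)
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.color_head(torch.cat([h, view_dir, light_pos.expand_as(view_dir)], dim=-1))
        return sigma, rgb

field = RelightableField()
xyz, view_dir = torch.rand(1024, 3), torch.rand(1024, 3)
sigma, rgb = field(xyz, view_dir, torch.tensor([1.0, 2.0, 0.5]))   # move the light, color changes
```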
ReST: Multi-Camera MOT
A novel reconfigurable two-step graph model for multi-camera multi-object video tracking (MC-MOT).
Review: https://t.ly/3C5tb
Paper: arxiv.org/pdf/2308.13229.pdf
Code: github.com/chengche6230/ReST
MagicEdit: Magic Video Editing
MagicEdit explicitly disentangles content, structure & motion for high-fidelity, temporally coherent video editing.
Report: https://t.ly/tREX4
Paper: arxiv.org/pdf/2308.14749.pdf
Project: magic-edit.github.io
Code: github.com/magic-research/magic-edit
VideoCutLER: Simple UVIS
VideoCutLER is a simple unsupervised video instance segmentation (UVIS) method that does not rely on optical flow.
Review: https://t.ly/PBBjG
Paper: arxiv.org/pdf/2308.14710.pdf
Project: people.eecs.berkeley.edu/~xdwang/projects/CutLER
Code: github.com/facebookresearch/CutLER/tree/main/videocutler
3D Pigeon Pose & Tracking
3D-MuPPET: estimating and tracking 3D poses of pigeons from multiple views.
Review: https://t.ly/jfAJJ
Paper: arxiv.org/pdf/2308.15316.pdf
Code: github.com/alexhang212/3D-MuPPET/
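The multi-view setting usually boils down to lifting per-view 2D keypoints to 3D with calibrated cameras. Below is a standard direct linear transform (DLT) triangulation sketch for one joint seen in several views; it illustrates the general recipe, not the 3D-MuPPET pipeline specifically:

```python
# Standard DLT triangulation of one 2D keypoint observed in several calibrated views.
import numpy as np

def triangulate(points_2d, projection_mats):
    """points_2d: list of (u, v) pixel coords; projection_mats: list of (3, 4) camera matrices."""
    rows = []
    for (u, v), P in zip(points_2d, projection_mats):
        rows.append(u * P[2] - P[0])      # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                            # least-squares solution in homogeneous coordinates
    return X[:3] / X[3]

# Usage (hypothetical values): joint_3d = triangulate([(410.0, 233.5), (388.2, 251.0)], [P_cam0, P_cam1])
```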