⛺ViTPose: Transformer for Pose⛺
👉ViTPose from the ViTAE team: a plain ViT for human pose estimation (toy decoder sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Plain/nonhierarchical ViT for pose
✅Deconv-layers after ViT for keypoints
✅Just the baseline is the new SOTA
✅Source code & models available soon!
More: https://bit.ly/3MJ0kz1
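A minimal sketch of the idea (not the official code; dimensions and names are my assumptions): reshape the plain ViT's patch tokens back into a 2D grid and decode per-joint heatmaps with a couple of deconvolution layers.
```python
import torch
import torch.nn as nn

class DeconvKeypointHead(nn.Module):
    """Toy ViTPose-style decoder: patch tokens -> keypoint heatmaps."""
    def __init__(self, embed_dim=768, num_keypoints=17):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_keypoints, 1),   # one heatmap per joint
        )

    def forward(self, tokens, grid_hw):
        # tokens: (B, N, C) patch tokens from the ViT backbone (CLS dropped)
        h, w = grid_hw
        x = tokens.transpose(1, 2).reshape(tokens.size(0), -1, h, w)
        return self.decoder(x)                  # (B, K, 4h, 4w)

# e.g. a 256x192 crop with 16x16 patches gives a 16x12 token grid
head = DeconvKeypointHead()
print(head(torch.randn(2, 16 * 12, 768), (16, 12)).shape)  # (2, 17, 64, 48)
```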
🧳 Unsupervised HD Motion Transfer 🧳
👉Novel end-to-end unsupervised motion transfer for image animation (TPS warp sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅TPS motion estimation + Dropout
✅Novel E2E unsupervised motion transfer
✅Optical flow + multi-res. occlusion mask
✅Code and models under MIT license
More: https://bit.ly/3MGNPns
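For intuition, a minimal thin-plate-spline warp (illustrative only, not the authors' implementation; names are mine): fit TPS parameters from matched source/driver keypoints, then warp arbitrary points.
```python
import numpy as np

def tps_kernel(r2):
    # U(r) = r^2 log r^2, with U(0) = 0
    return np.where(r2 == 0, 0.0, r2 * np.log(r2 + 1e-12))

def fit_tps(src_pts, dst_pts):
    """Solve for TPS parameters mapping src control points onto dst."""
    n = src_pts.shape[0]
    d2 = ((src_pts[:, None] - src_pts[None]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src_pts])      # affine part
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst_pts, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)                 # (n+3, 2) parameters

def warp_points(params, src_pts, query):
    d2 = ((query[:, None] - src_pts[None]) ** 2).sum(-1)
    U = tps_kernel(d2)
    P = np.hstack([np.ones((len(query), 1)), query])
    return U @ params[:-3] + P @ params[-3:]

src = np.random.rand(5, 2)                  # keypoints in the source frame
dst = src + 0.05 * np.random.randn(5, 2)    # matches in the driving frame
params = fit_tps(src, dst)
print(np.allclose(warp_points(params, src, src), dst))  # exact at control pts
```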
🚤 Neural Self-Calibration in the wild 🚤
👉 Learning algorithm to regress calibration params from in-the-wild clips (sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Params purely from self-supervision
✅S.S. depth/pose learning as objective
✅POV, fisheye, catadioptric: no changes
✅SOTA results on EuRoC MAV dataset
More: https://bit.ly/3w1n6LB
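The gist in a hedged sketch (my simplification, not their code; the paper also covers fisheye/catadioptric via a unified camera model): make the intrinsics learnable parameters and let the self-supervised photometric depth/ego-motion objective supply their gradients, with no calibration targets at all.
```python
import torch
import torch.nn as nn

class LearnableIntrinsics(nn.Module):
    """Pinhole intrinsics as trainable parameters (toy version)."""
    def __init__(self, h, w):
        super().__init__()
        self.fx = nn.Parameter(torch.tensor(float(w)))   # crude init
        self.fy = nn.Parameter(torch.tensor(float(w)))
        self.cx = nn.Parameter(torch.tensor(w / 2.0))
        self.cy = nn.Parameter(torch.tensor(h / 2.0))

    def K(self):
        K = torch.zeros(3, 3)
        K[0, 0], K[1, 1] = self.fx, self.fy
        K[0, 2], K[1, 2], K[2, 2] = self.cx, self.cy, 1.0
        return K

intr = LearnableIntrinsics(192, 640)
opt = torch.optim.Adam(intr.parameters(), lr=1e-4)
loss = intr.K().sum()   # stand-in for the photometric view-synthesis loss
loss.backward()         # gradients reach fx, fy, cx, cy
```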
🦅 ConDor: S.S. Canonicalization 🦅
👉Self-supervised canonicalization for full/partial 3D point clouds
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅RRC + Stanford + KAIST + Brown
✅On top of Tensor Field Networks (TFNs)
✅Unseen 3D -> equivariant canonical
✅Co-segmentation, NO supervision
✅Code and model under MIT license
More: https://bit.ly/3MNDyGa
🦀 Event-aided Direct Sparse Odometry 🦀
👉EDS: direct monocular visual odometry using events/frames (residual sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Mono 6-DOF visual odometry + events
✅Direct photometric bundle adjustment
✅Camera motion tracking by sparse pixels
✅A new dataset with HQ events and frames
More: https://bit.ly/3s9FiBN
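For intuition, a bare-bones direct photometric residual over sparse pixels (standard direct-VO math, my sketch, not the EDS code; all inputs are float tensors):
```python
import torch
import torch.nn.functional as F

def photometric_residuals(I_ref, I_tgt, uv, depth, K, R, t):
    """Brightness residuals of sparse reference pixels reprojected into the
    target frame. I_ref/I_tgt: (1,1,H,W); uv: (N,2); depth: (N,)."""
    H, W = I_ref.shape[-2:]
    rays = torch.cat([uv, torch.ones(len(uv), 1)], 1) @ torch.inverse(K).T
    Xp = (rays * depth[:, None]) @ R.T + t     # back-project, move rigidly
    uvp = Xp @ K.T
    uvp = uvp[:, :2] / uvp[:, 2:3]             # perspective projection

    def sample(img, pts):                      # bilinear intensity lookup
        g = torch.stack([2 * pts[:, 0] / (W - 1) - 1,
                         2 * pts[:, 1] / (H - 1) - 1], -1)
        return F.grid_sample(img, g.view(1, -1, 1, 2),
                             align_corners=True).view(-1)

    return sample(I_tgt, uvp) - sample(I_ref, uv)
```
Bundle adjustment then minimizes the squared residuals over camera pose and depths; the event stream supplies the brightness signal between frames.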
🫀BlobGAN: Blob-Disentangled Scene🫀
👉Unsupervised, mid-level (blob-based) scene generation (toy splatting sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Spatial, depth-ordered Gaussian blobs
✅Approaching supervised-level quality, and more
✅Source under BSD-2 "Simplified" License
More: https://bit.ly/3kRyGnj
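A toy of the blob representation (my simplification; real BlobGAN uses anisotropic blobs carrying feature vectors that feed a StyleGAN-like decoder): depth-ordered Gaussian opacities composited back-to-front.
```python
import torch

def splat_blobs(centers, scales, depths, size=64):
    """centers: (K,2) in [0,1]; scales: (K,) radii; depths: (K,) ordering."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                            torch.linspace(0, 1, size), indexing="ij")
    grid = torch.stack([xs, ys], -1)                    # (H, W, 2)
    canvas = torch.zeros(size, size)
    ids = torch.full((size, size), -1, dtype=torch.long)
    for k in torch.argsort(depths, descending=True):    # farthest first
        d2 = ((grid - centers[k]) ** 2).sum(-1)
        alpha = torch.exp(-d2 / (2 * scales[k] ** 2))   # Gaussian opacity
        canvas = alpha + (1 - alpha) * canvas           # "over" compositing
        ids[alpha > 0.5] = int(k)                       # crude blob layout map
    return canvas, ids

canvas, ids = splat_blobs(torch.rand(6, 2), torch.full((6,), 0.08),
                          torch.rand(6))
```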
🦕E2EVE editor via pre-trained artist🦕
👉E2EVE generates a new version of the source image that resembles the "driver" one
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Blending regions by driver image
✅E2E cond-probability of the edits
✅S.S. augmenting in target domain
✅Implemented as SOTA transformer
✅Code/models available (soon)
More: https://bit.ly/3P9TDYW
🐶 Bringing pets in #metaverse 🐶
👉ARTEMIS: pipeline for generating articulated neural pets for virtual worlds
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅ARTiculated, appEarance, Mo-synthesIS
✅Motion control, animation & rendering
✅Neural-generated (NGI) animal engine
✅SOTA animal mocap + neural control
More: https://bit.ly/3LZSLDU
😍Animated hand in 1972, damn romantic😍
👉Q: is #VR the technology that developed least in the last 30 years? 🤔
More: https://bit.ly/3snxNaq
⏏️Ensembling models for GAN training⏏️
👉Pretrained vision models to improve GAN training: FID improved by 1.5 to 2×! (sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅CV models as ensemble of discriminators
✅Improves GANs in limited and large-scale data regimes
✅10k samples matches StyleGAN2 w/ 1.6M
✅Source code / models under MIT license
More: https://bit.ly/3wgUVsr
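The core recipe in a hedged sketch (my minimal version; the paper also selects which pretrained models to add and when): freeze the backbones, put a small trainable head on each, and use them as extra discriminators.
```python
import torch
import torch.nn as nn

class VisionAidedD(nn.Module):
    """Ensemble of frozen pretrained feature extractors, each with a small
    trainable head acting as an additional discriminator."""
    def __init__(self, backbones, feat_dims):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)
        for p in self.backbones.parameters():
            p.requires_grad_(False)     # frozen, but grads still reach G
        self.heads = nn.ModuleList(nn.Linear(d, 1) for d in feat_dims)

    def forward(self, x):
        return torch.cat([head(net(x)) for net, head in
                          zip(self.backbones, self.heads)], dim=1)

def hinge_d_loss(real_logits, fake_logits):   # summed over the ensemble
    return (torch.relu(1 - real_logits) + torch.relu(1 + fake_logits)).mean()
```
Here `backbones` is any list of models mapping images to feature vectors (e.g. a CLIP image encoder); the exact ensemble and selection strategy in the paper differ.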
🤯Cooperative Driving + AUTOCASTSIM🤯
👉COOPERNAUT: cross-vehicle perception for vision-based cooperative driving
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅UTexas + #Stanford + #Sony #AI
✅LiDAR encoded into compact point-based reps
✅Network-augmented simulator
✅Source code and models available
More: https://bit.ly/3sr5HLk
💄NeuralHDHair: 3D Neural Hair💄
👉NeuralHDHair: fully automatic system for modeling HD hair from a single image
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅IRHairNet for hair geometric features
✅GrowingNet: 3D hair strands in parallel
✅VIFu: novel voxel-aligned implicit function
✅SOTA in 3D hair modeling from single pic
More: https://bit.ly/38iR0mQ
🐡DyNeRF: Neural 3D Video Synthesis🐡
👉#Meta unveils DyNeRF, a novel approach to rendering HQ 3D video (toy sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Novel NeRF based on temporal latent codes
✅Novel hierarchical training strategy
✅Datasets of time-synch/calibrated clips
✅Attribution-NonCommercial 4.0 Int.
More: https://bit.ly/3MlBRA9
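The key idea in a toy sketch (positional encoding, view directions, and hierarchical sampling omitted; names mine): condition the radiance MLP on a learned per-frame latent code.
```python
import torch
import torch.nn as nn

class DynamicNeRF(nn.Module):
    """Toy time-conditioned NeRF: position + per-frame latent -> RGB, sigma."""
    def __init__(self, num_frames, latent_dim=64, hidden=128):
        super().__init__()
        self.latents = nn.Embedding(num_frames, latent_dim)  # temporal codes
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # RGB + density
        )

    def forward(self, xyz, frame_idx):
        z = self.latents(frame_idx)          # (B, latent_dim)
        return self.mlp(torch.cat([xyz, z], dim=-1))

model = DynamicNeRF(num_frames=300)
out = model(torch.rand(8, 3), torch.randint(0, 300, (8,)))
print(out.shape)  # torch.Size([8, 4])
```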
🍋GATO: agent for multiple tasks🍋
👉The same network with the same weights can play Atari, caption pics, chat, and more🤯 (toy tokenization sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅General-purpose agent, multiple tasks
✅Multi-modal-task, multi-embodiment
✅Inspired by large-scale language models
More: https://bit.ly/3LbBOWb
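A toy of the serialization trick that makes this possible (my rough paraphrase; bin counts and offsets are illustrative): continuous values are mu-law squashed and discretized so every modality becomes tokens in one shared vocabulary for a single sequence model.
```python
import numpy as np

def tokenize_continuous(x, bins=1024, mu=100.0):
    # mu-law squash to [-1, 1], then uniform binning
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.digitize(np.clip(y, -1, 1), np.linspace(-1, 1, bins + 1)[1:-1])

OFFSET = 1024                                   # own id range for actions
obs = tokenize_continuous(np.random.randn(4))   # e.g. proprioception values
act = tokenize_continuous(np.random.randn(2)) + OFFSET
step = np.concatenate([obs, act])               # one flat token sequence
print(step)
```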
🪐NeRF powered by keypoints🪐
👉ETHZ + META unveil how to encode relative spatial #3D info via sparse 3D keypoints (toy encoding sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Sparse 3D keypoints for SOTA avatars
✅Unseen subjects from 2/3 views
✅Never-before-seen iPhone captures
More: https://bit.ly/39NQqhe
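The flavor of the idea in one tiny function (my simplification; the paper's encoding is relative-depth-based around projected keypoints): describe each 3D query relative to the sparse keypoints instead of in absolute coordinates, which is what lets the model generalize to unseen subjects.
```python
import torch

def keypoint_relative_encoding(query, keypoints):
    """query: (B,3) sample points; keypoints: (K,3) sparse 3D landmarks.
    Returns (B, K*3) pose-relative features."""
    offsets = query[:, None, :] - keypoints[None, :, :]
    return offsets.reshape(query.shape[0], -1)
```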
🐌Self-Supervised human co-evolution🐌
👉Self-supervised 3D by co-evolution of pose estimator, imitator, and hallucinator
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Novel self-supervised 3D pose
✅Co-evolution of pose estimator, imitator, hallucinator
✅Realistic 3D pose and 2D-3D supervision
✅Source code / model under MIT license
More: https://bit.ly/37J5ImL
🐲 Diff-SDF #3D Rendering 🐲
👉Reconstruction with no complex regularizers or priors, using only a per-pixel RGB loss (sphere-tracing sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Diff-render to optimize geometry/albedo
✅No ad-hoc object mask or supervision
✅Extended sphere tracing algorithm
More: https://bit.ly/3yKWPnI
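For context, plain sphere tracing, the algorithm the paper extends for differentiable SDF rendering (standard method, my minimal version): march each ray by the SDF value, which is always a safe step.
```python
import torch

def sphere_trace(sdf, origins, dirs, steps=64, eps=1e-4, far=10.0):
    """origins, dirs: (N, 3); sdf: callable (N, 3) -> (N,) distances."""
    t = torch.zeros(origins.size(0))
    for _ in range(steps):
        p = origins + t[:, None] * dirs
        t = torch.clamp(t + sdf(p), max=far)   # step by the distance bound
    hit = sdf(origins + t[:, None] * dirs).abs() < eps
    return t, hit

unit_sphere = lambda p: p.norm(dim=-1) - 1.0   # SDF of a unit sphere
o = torch.tensor([[0.0, 0.0, -3.0]])
d = torch.tensor([[0.0, 0.0, 1.0]])
t, hit = sphere_trace(unit_sphere, o, d)
print(t, hit)   # t ≈ 2.0, hit == True
```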
👄LVD: new SOTA for #3D human👄
👉Corona et al. unveil a novel 3D human model fitting method (sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Solution via neural field
✅Not sensitive to initialization
✅SOTA in shape from single pic
✅SOTA in fitting 3D scans
More: https://bit.ly/3Ng4lLr
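In sketch form (my paraphrase, not the authors' code): a network predicts a per-vertex step toward the body surface from image features, and fitting simply iterates that learned field, which is why initialization barely matters.
```python
import torch
import torch.nn as nn

class VertexStepField(nn.Module):
    """Predicts a displacement toward the surface for each current vertex."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),               # predicted per-vertex step
        )

    def forward(self, verts, img_feat):
        f = img_feat[None].expand(verts.size(0), -1)
        return self.mlp(torch.cat([verts, f], -1))

def fit(field, verts, img_feat, iters=10):
    for _ in range(iters):                   # iterative "vertex descent"
        verts = verts + field(verts, img_feat)
    return verts

field = VertexStepField()
fitted = fit(field, torch.randn(6890, 3), torch.randn(128))
```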
🏳️🌈Deep Clustering on ImageNet & Co.🏳️🌈
👉World's first deep nonparametric clustering on large datasets such as ImageNet
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Deep clustering that infers the number of clusters
✅Loss: amortized inference in mixture models
✅Deep nonparametric clustering on ImageNet
✅Code and model available under MIT license
More: https://bit.ly/38p62rn
💥HQ-E²FGVI just released💥
👉Flow-Guided Video Inpainting through three trainable modules (flow-warp sketch below)
𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬:
✅Flow, pixel-prop, content hallucination
✅Three stage-wise modules, jointly optimized
✅New SOTA, with promising efficiency
✅Code and Models under MIT license
More: https://bit.ly/3Ln0ICj
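A minimal flow-guided propagation step, the standard building block behind such methods (my sketch, not the E2FGVI code): warp a neighboring frame's features into the current frame with optical flow before fusing them.
```python
import torch
import torch.nn.functional as F

def flow_warp(feat, flow):
    """feat: (B,C,H,W) neighbor features; flow: (B,2,H,W) in pixels,
    pointing from the current frame to the neighbor."""
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack([xs, ys], 0).float()[None].to(feat)   # (1,2,H,W)
    coords = base + flow
    gx = 2 * coords[:, 0] / (W - 1) - 1      # normalize for grid_sample
    gy = 2 * coords[:, 1] / (H - 1) - 1
    grid = torch.stack([gx, gy], -1)         # (B,H,W,2)
    return F.grid_sample(feat, grid, align_corners=True)

warped = flow_warp(torch.randn(1, 8, 32, 32), torch.zeros(1, 2, 32, 32))
```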