🪬 META's Animated Drawings is out! 🪬
👉#META unveils an easy-to-use method for animating human-like figures drawn by children.
😎Review https://bit.ly/3mGeQQv
😎Paper arxiv.org/pdf/2303.12741.pdf
😎Project fairanimateddrawings.com
🦕 6D Non-Prehensile Manipulation 🦕
👉#META (+CMU) unveils HACMan, a novel approach to 6D non-prehensile manipulation of objects
😎Review https://bit.ly/3NP1jl1
😎Paper arxiv.org/pdf/2305.03942.pdf
😎Project hacman-2023.github.io
🦙 Llama-2: the Open-Source "ChatGPT" 🦙
👉#Meta GenAI unveils Llama-2: a collection of LLMs ranging in scale from 7B to 70B params. A challenger to #ChatGPT, but open.
😎Review https://t.ly/bLJgP
😎Paper https://t.ly/AOXru
😎Project https://ai.meta.com/llama
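👉A minimal inference sketch via Hugging Face transformers (assumes access to the gated meta-llama checkpoints has been granted; the 7B-chat variant is used purely as an example):
```python
# Minimal Llama-2 text-generation sketch via Hugging Face transformers.
# Assumes access to the gated "meta-llama/Llama-2-7b-chat-hf" checkpoint,
# a GPU with room for fp16 weights, and `accelerate` installed for device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain in one sentence why open LLMs matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```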
🐘 Controllable Synthetic Data (extending ImageNet) 🐘
👉#META's PUG, a new generation of interactive environments for representation learning. Extending ImageNet!
😎Review https://t.ly/nCYs0
😎Paper arxiv.org/pdf/2308.03977.pdf
😎Project pug.metademolab.com
😎Code github.com/facebookresearch/PUG
⛺FACET: Fairness in Computer Vision⛺
👉#META AI opens a large, publicly available dataset for classification, detection & segmentation, aimed at evaluating potential performance disparities & challenges across sensitive demographic attributes.
😎Review https://t.ly/mKn-t
😎Paper arxiv.org/pdf/2309.00035.pdf
😎Dataset https://facet.metademolab.com/
🔥🔥 #META's DINOv2 is now commercial! 🔥🔥
👉Universal features for image classification, instance retrieval, video understanding, depth & semantic segmentation. Now suitable for commercial use.
😎Review https://t.ly/LNrGy
😎Paper arxiv.org/pdf/2304.07193.pdf
😎Code github.com/facebookresearch/dinov2
😎Demo dinov2.metademolab.com/
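👉A minimal feature-extraction sketch via torch.hub (ViT-S/14 backbone; image sides must be multiples of the 14-px patch):
```python
# Minimal DINOv2 feature-extraction sketch via torch.hub.
# "dinov2_vits14" is the small ViT-S/14 backbone from the official repo.
import torch

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Dummy batch: H and W must be multiples of the 14-pixel patch size.
images = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model(images)  # global (CLS) embedding per image
print(features.shape)         # torch.Size([1, 384]) for ViT-S/14
```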
✌️ Relighted 3D Hands 🤞
👉#META unveils Re:InterHand: a large dataset of relighted 3D interacting hands
😎Review https://t.ly/I1dQk
😎Paper arxiv.org/pdf/2310.17768.pdf
😎Project mks0601.github.io/ReInterHand
😎Data github.com/mks0601/ReInterHand
🐓 Emu: image edit / video gen. 🐓
👉#Meta unveils the new SOTA in text-to-video generation and instruction-based image editing
👉 Review https://t.ly/PMTBc
👉 Paper (images): https://lnkd.in/eVadH-QS
👉 Project https://lnkd.in/eG8eWUJY
👉 Paper (video): https://lnkd.in/eVadH-QS
👉 Project https://lnkd.in/eu6Zu6gp
🪖RT Humanoid from Head-Mounted Sensors🪖
👉#META (+CMU) announced SimXR, a method for controlling a simulated avatar using information obtained from AR/VR headsets
👉Review https://t.ly/Si2Mp
👉Paper arxiv.org/pdf/2403.06862.pdf
👉Project www.zhengyiluo.com/SimXR/
🧤HOT3D Hand/Object Tracking🧤
👉#Meta opens a novel egocentric dataset for 3D hand & object tracking. A new benchmark for vision-based understanding of 3D hand-object interactions. Dataset available 💙
👉Review https://t.ly/cD76F
👉Paper https://lnkd.in/e6_7UNny
👉Data https://lnkd.in/e6P-sQFK
🔥🔥 SAM v2 is out! 🔥🔥
👉#Meta announced SAM 2, the novel unified model for real-time promptable segmentation in images and videos. 6x faster, it's the new SOTA by a large margin. Source Code, Dataset, Models & Demo released under permissive licenses💙
👉Review https://t.ly/oovJZ
👉Paper https://t.ly/sCxMY
👉Demo https://sam2.metademolab.com
👉Project ai.meta.com/blog/segment-anything-2/
👉Models github.com/facebookresearch/segment-anything-2
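👉A minimal image-prediction sketch along the lines of the repo's quick start (checkpoint/config names and the point prompt below are illustrative; grab the exact files for your model size from the repo):
```python
# Minimal SAM 2 image-segmentation sketch; paths/config names are examples
# and a CUDA GPU is assumed (the default device in build_sam2).
import numpy as np
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2_hiera_large.pt"  # downloaded separately
model_cfg = "sam2_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

image = np.zeros((480, 640, 3), dtype=np.uint8)   # replace with your RGB image
with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240]]),      # one foreground click (x, y)
        point_labels=np.array([1]),
    )
print(masks.shape, scores)
```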
🐏 EFM3D: 3D Ego-Foundation 🐏
👉#META presents EFM3D, the first benchmark for 3D object detection and surface regression on high-quality annotated egocentric data from Project Aria. Datasets & Code released💙
👉Review https://t.ly/cDJv6
👉Paper arxiv.org/pdf/2406.10224
👉Project www.projectaria.com/datasets/aeo/
👉Repo github.com/facebookresearch/efm3d
🔥 CoTracker3 by #META is out! 🔥
👉#Meta (+VGG Oxford) unveils CoTracker3, a new tracker that outperforms the previous SoTA by a large margin using only 0.1% of the training data 🤯🤯🤯
👉Review https://t.ly/TcRIv
👉Paper arxiv.org/pdf/2410.11831
👉Project cotracker3.github.io/
👉Code github.com/facebookresearch/co-tracker
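👉A minimal point-tracking sketch via torch.hub (the "cotracker3_offline" entry point and the B×T×C×H×W video layout follow the repo's quick start; verify against the current README):
```python
# Minimal CoTracker3 sketch via torch.hub; the hub entry name is taken
# from the repo README and should be treated as an assumption.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker3_offline").to(device)

# Dummy clip: batch x time x channels x height x width, values in [0, 255].
video = torch.randint(0, 255, (1, 24, 3, 384, 512), device=device).float()
with torch.no_grad():
    pred_tracks, pred_visibility = cotracker(video, grid_size=10)  # track a 10x10 grid
print(pred_tracks.shape, pred_visibility.shape)  # (B, T, N, 2), (B, T, N)
```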
☀️ Universal Relightable Avatars ☀️
👉#Meta unveils URAvatar, photorealistic & relightable avatars from a phone scan with unknown illumination. Stunning results!
👉Review https://t.ly/U-ESX
👉Paper arxiv.org/pdf/2410.24223
👉Project junxuan-li.github.io/urgca-website
❤️🔥 Uncommon object in #3D ❤️🔥
👉#META releases uCO3D, a new object-centric dataset for 3D AI. The largest publicly available collection of HD videos of objects with 3D annotations, ensuring full 360° coverage. Code & data under CC-BY 4.0💙
👉Review https://t.ly/Z_tvA
👉Paper https://arxiv.org/pdf/2501.07574
👉Project https://uco3d.github.io/
👉Repo github.com/facebookresearch/uco3d
☀️ Relightable Full-Body Avatars ☀️
👉#Meta unveils the first approach ever to jointly model the relightable appearance of the body, face, and hands of drivable avatars.
👉Review https://t.ly/kx9gf
👉Paper arxiv.org/pdf/2501.14726
👉Project neuralbodies.github.io/RFGCA
🔥 VideoJAM: #META's Video-Model (SOTA) 🔥
👉#META's VideoJAM: the new SOTA (by a large margin) in motion coherence for video generation, much better than Sora! It adds a strong motion prior to any video-gen model. Impressive results, but no code announced🥲
👉Review https://shorturl.at/id7Bt
👉Paper https://arxiv.org/pdf/2502.02492
👉Project https://hila-chefer.github.io/videojam-paper.github.io/
🤖 META Human-Robot 🤖
👉#META unveils PARTNR: a novel benchmark for Planning And Reasoning Tasks in humaN-Robot collaboration. The largest benchmark of its kind: 100,000+ natural-language tasks spanning 60 houses and 5,819 unique objects. Code & Data (🤗) under MIT💙
👉Review https://t.ly/zcN0K
👉Paper arxiv.org/pdf/2411.00081
👉Repo github.com/facebookresearch/partnr-planner
🤗Data huggingface.co/datasets/ai-habitat/partnr_episodes
🖲️ VGG Transformer 🖲️
👉VGGT by VGG & #META (#CVPR2025) is a feed-forward neural net that directly infers all key 3D attributes of a scene within seconds. Code released💙
👉Review https://t.ly/WoWXL
👉Paper https://arxiv.org/pdf/2503.11651
👉Project https://vgg-t.github.io/
👉Code github.com/facebookresearch/vggt
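👉A rough usage sketch; the import paths and the "facebook/VGGT-1B" checkpoint name are assumptions taken from the repo README, so double-check them there:
```python
# Rough VGGT inference sketch; import paths and the "facebook/VGGT-1B"
# checkpoint name are assumptions to verify against the official repo.
import torch
from vggt.models.vggt import VGGT
from vggt.utils.load_fn import load_and_preprocess_images

device = "cuda" if torch.cuda.is_available() else "cpu"
model = VGGT.from_pretrained("facebook/VGGT-1B").to(device)

# A few views of the same scene (hypothetical file names).
images = load_and_preprocess_images(["view1.png", "view2.png", "view3.png"]).to(device)
with torch.no_grad():
    predictions = model(images)  # cameras, depth maps, point maps, tracks, ...
print(list(predictions))         # inspect the returned prediction keys
```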
🦖 DINOv3 is out 🦖
👉#Meta unveils DINOv3! A novel foundation model outperforming previous SOTAs in computer vision. Code & weights released under the DINOv3 License💙
👉Review https://t.ly/-S3ZL
👉Paper https://t.ly/ervOT
👉Project https://lnkd.in/dHFf3esd
👉Repo https://lnkd.in/dPxhDxAq
🤗HF https://lnkd.in/dWGudY2i
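👉A minimal feature-extraction sketch via Hugging Face transformers (needs a recent transformers release with DINOv3 support; the checkpoint ID below is an assumption, pick the exact name from the gated DINOv3 collection on the Hub):
```python
# Minimal DINOv3 feature-extraction sketch via Hugging Face transformers.
# The checkpoint ID is an assumption; access is gated by the DINOv3 License.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "facebook/dinov3-vits16-pretrain-lvd1689m"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

image = Image.new("RGB", (224, 224))                   # replace with a real photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state[:, 0]             # CLS token embedding
print(features.shape)
```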