AI with Papers - Artificial Intelligence & Deep Learning
15K subscribers
95 photos
236 videos
11 files
1.26K links
All the AI with papers. Fresh daily updates on Deep Learning, Machine Learning, and Computer Vision (with papers).

Curated by Alessandro Ferrari | https://www.linkedin.com/in/visionarynet/
🫐 Blendify: #Python + Blender 🫐

πŸ‘‰Lightweight Python framework that provides a high-level API for creating & rendering scenes with #Blender. It simplifies data augmentation & synthesis. Source Code releasedπŸ’™

πŸ‘‰Review https://t.ly/l0crA
πŸ‘‰Paper https://arxiv.org/pdf/2410.17858
πŸ‘‰Code https://virtualhumans.mpi-inf.mpg.de/blendify/
🀩13πŸ‘4πŸ”₯4❀2πŸ‘1
πŸ”₯ D-FINE: new SOTA Detector πŸ”₯

πŸ‘‰D-FINE, a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding-box regression task in DETR models. New SOTA on MS COCO with additional data. Code & models available πŸ’™

πŸ‘‰Review https://t.ly/aw9fN
πŸ‘‰Paper https://arxiv.org/pdf/2410.13842
πŸ‘‰Code https://github.com/Peterande/D-FINE
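The core trick of modeling each box edge as a distribution over discrete offsets that later decoder layers refine, rather than a one-shot scalar regression, can be sketched as follows (the bin layout, names, and refinement step are illustrative, not the paper's exact formulation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_edge(logits, bins):
    """One box edge = expectation of a discrete distribution over offsets."""
    return float(softmax(logits) @ bins)

bins = np.linspace(-4.0, 4.0, 8)   # candidate offsets (pixels) for one edge
logits = np.zeros(8)
logits[5] = 4.0                    # model is confident the edge sits near bins[5]
coarse = decode_edge(logits, bins)

# a later decoder layer refines the *distribution* (not a fresh scalar),
# sharpening it around the same bin
refined = decode_edge(logits + 2.0 * (np.arange(8) == 5), bins)
```

Sharper distributions pull the expectation toward the peak bin, which is how iterative refinement tightens localization.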
❀16πŸ‘3πŸ‘1🀯1
🍜 REM: Segment What You Describe 🍜

πŸ‘‰REM is a framework for segmenting concepts in video that can be described through natural language, suited to rare & non-object dynamic concepts such as waves, smoke, etc. Code & Data announced πŸ’™

πŸ‘‰Review https://t.ly/OyVtV
πŸ‘‰Paper arxiv.org/pdf/2410.23287
πŸ‘‰Project https://miccooper9.github.io/projects/ReferEverything/
πŸ”₯18❀4πŸ‘3🀩2🀯1😍1
β˜€οΈ Universal Relightable Avatars β˜€οΈ

πŸ‘‰#Meta unveils URAvatar: photorealistic, relightable avatars from a phone scan with unknown illumination. Stunning results!

πŸ‘‰Review https://t.ly/U-ESX
πŸ‘‰Paper arxiv.org/pdf/2410.24223
πŸ‘‰Project junxuan-li.github.io/urgca-website
❀11πŸ”₯5⚑1πŸ‘1
🏣 CityGaussianV2: Large-Scale City 🏣

πŸ‘‰A novel approach for large-scale scene reconstruction that tackles critical challenges in geometric accuracy and efficiency: 10Γ— compression, 25% faster & 50% less memory! Source code released πŸ’™

πŸ‘‰Review https://t.ly/Xgn59
πŸ‘‰Paper arxiv.org/pdf/2411.00771
πŸ‘‰Project dekuliutesla.github.io/CityGaussianV2/
πŸ‘‰Code github.com/DekuLiuTesla/CityGaussian
πŸ‘15πŸ”₯9❀2πŸ‘1
πŸ’ͺ Muscles in Time Dataset πŸ’ͺ

πŸ‘‰Muscles in Time (MinT) is a large-scale synthetic muscle activation dataset. MinT contains 9+ hours of simulation data covering 227 subjects and 402 simulated muscle strands. Code & Dataset available soon πŸ’™

πŸ‘‰Review https://t.ly/108g6
πŸ‘‰Paper arxiv.org/pdf/2411.00128
πŸ‘‰Project davidschneider.ai/mint
πŸ‘‰Code github.com/simplexsigil/MusclesInTime
πŸ”₯8❀3πŸ‘3
🧠 Single Neuron Reconstruction 🧠

πŸ‘‰SIAT unveils NeuroFly, a framework for large-scale single-neuron reconstruction. It formulates the task as a streamlined three-stage workflow: automatic segmentation β†’ connection β†’ manual proofreading. Bridging computer vision and neuroscience πŸ’™

πŸ‘‰Review https://t.ly/Y5Xu0
πŸ‘‰Paper https://arxiv.org/pdf/2411.04715
πŸ‘‰Repo github.com/beanli161514/neurofly
❀4πŸ”₯1🀩1
🫠 X-Portrait 2: SOTA(?) Portrait Animation 🫠

πŸ‘‰ByteDance unveils a preview of X-Portrait 2, a new SOTA expression encoder that implicitly captures every minuscule expression from the input, trained on large-scale datasets. Impressive results, but no paper or code announced.

πŸ‘‰Review https://t.ly/8Owh9 [UPDATE]
πŸ‘‰Paper ?
πŸ‘‰Project byteaigc.github.io/X-Portrait2/
πŸ‘‰Repo ?
πŸ”₯13🀯5πŸ‘4❀1πŸ‘1
❄️Don’t Look Twice: ViT by RLT❄️

πŸ‘‰CMU unveils RLT, which speeds up video transformers with a scheme inspired by run-length encoding for data compression: faster training and up to 80% fewer tokens! Source code announced πŸ’™

πŸ‘‰Review https://t.ly/ccSwN
πŸ‘‰Paper https://lnkd.in/d6VXur_q
πŸ‘‰Project https://lnkd.in/d4tXwM5T
πŸ‘‰Repo TBA
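The run-length idea, collapsing patch tokens that repeat unchanged across consecutive frames into one token plus a run length, can be sketched in a few lines (the threshold and per-patch layout are illustrative, not the paper's exact method):

```python
import numpy as np

def run_length_tokenize(frames, thresh=1e-3):
    """frames: (T, N, D) per-frame patch embeddings. Emit a patch token only
    when it differs from the previous frame; record how long each token lasts."""
    T, N, D = frames.shape
    kept, runs = [], []
    for n in range(N):                       # each spatial patch independently
        t = 0
        while t < T:
            run = 1
            while t + run < T and np.abs(frames[t + run, n] - frames[t, n]).mean() < thresh:
                run += 1
            kept.append(frames[t, n])
            runs.append(run)                 # this token stands in for `run` frames
            t += run
    return np.stack(kept), runs

# a perfectly static 10-frame clip collapses to one token per patch (here 4)
static = np.tile(np.random.rand(1, 4, 8), (10, 1, 1))
tokens, runs = run_length_tokenize(static)
```

On mostly static video this removes the bulk of the tokens before the transformer ever sees them, which is where the training speed-up comes from.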
πŸ”₯9πŸ‘3❀1🀩1
πŸ”SeedEdit: foundational T2IπŸ”

πŸ‘‰ByteDance unveils a novel foundational T2I model capable of delivering stable, high-aesthetic image edits that maintain image quality through unlimited rounds of editing instructions. No code announced, but a demo is online πŸ’™

πŸ‘‰Review https://t.ly/hPlnN
πŸ‘‰Paper https://arxiv.org/pdf/2411.06686
πŸ‘‰Project team.doubao.com/en/special/seededit
πŸ€—Demo https://huggingface.co/spaces/ByteDance/SeedEdit-APP
πŸ”₯10❀6🀩1
πŸ”₯ 4 NanoSeconds inference πŸ”₯

πŸ‘‰LogicTreeNet: a convolutional differentiable logic gate network with logic-gate tree kernels, bringing computer vision into differentiable LGNs. Up to 61Γ— smaller than SOTA, with inference in 4 nanoseconds!

πŸ‘‰Review https://t.ly/GflOW
πŸ‘‰Paper https://lnkd.in/dAZQr3dW
πŸ‘‰Full clip https://lnkd.in/dvDJ3j-u
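A differentiable logic gate network replaces arithmetic neurons with probabilistic relaxations of Boolean gates: inputs in [0, 1] are treated as probabilities of being 1, so gradients can flow during training. A minimal sketch (the gate set and "tree kernel" wiring are illustrative, not the paper's architecture):

```python
# Probabilistic relaxations of Boolean gates over inputs in [0, 1]:
def g_and(a, b):  return a * b
def g_or(a, b):   return a + b - a * b
def g_xor(a, b):  return a + b - 2 * a * b

def tree_kernel(a, b, c, d):
    """A tiny two-level logic-gate tree: OR of two ANDs."""
    return g_or(g_and(a, b), g_and(c, d))

# At the Boolean corners the relaxations match the hard gates exactly;
# after training, activations saturate to 0/1 and the net runs as pure logic.
hard = tree_kernel(1.0, 1.0, 0.0, 1.0)   # (1 AND 1) OR (0 AND 1) = 1
soft = tree_kernel(0.9, 0.8, 0.2, 0.1)   # differentiable value in between
```

Because the deployed network is plain Boolean logic, it maps directly onto hardware gates, which is what makes nanosecond-scale inference plausible.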
πŸ”₯29🀯12πŸ‘1🀩1
πŸ›₯️ Global Tracklet Association MOT πŸ›₯️

πŸ‘‰A novel universal, model-agnostic method designed to refine and enhance tracklet association for single-camera MOT. Suitable for datasets such as SportsMOT, SoccerNet & similar. Source code releasedπŸ’™

πŸ‘‰Review https://t.ly/gk-yh
πŸ‘‰Paper https://lnkd.in/dvXQVKFw
πŸ‘‰Repo https://lnkd.in/dEJqiyWs
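Model-agnostic tracklet refinement typically links fragments of the same identity by appearance similarity; a minimal greedy-merge sketch (the threshold, features, and merge rule are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def merge_tracklets(tracklets, sim_thresh=0.9):
    """tracklets: list of dicts with 'ids', 'emb' (mean appearance embedding),
    and 'span' = (start_frame, end_frame). Greedily merges temporally
    disjoint tracklets whose appearance embeddings are similar."""
    out = [dict(t) for t in tracklets]
    i = 0
    while i < len(out):
        j = i + 1
        while j < len(out):
            a, b = out[i], out[j]
            disjoint = a['span'][1] < b['span'][0] or b['span'][1] < a['span'][0]
            if disjoint and cosine(a['emb'], b['emb']) > sim_thresh:
                a['ids'] = a['ids'] + b['ids']
                a['span'] = (min(a['span'][0], b['span'][0]),
                             max(a['span'][1], b['span'][1]))
                a['emb'] = (a['emb'] + b['emb']) / 2
                out.pop(j)
            else:
                j += 1
        i += 1
    return out

# same player re-appears after an occlusion: two fragments, one identity
t1 = {'ids': [1], 'emb': np.array([1.0, 0.0]),  'span': (0, 50)}
t2 = {'ids': [2], 'emb': np.array([0.99, 0.05]), 'span': (60, 120)}
t3 = {'ids': [3], 'emb': np.array([0.0, 1.0]),  'span': (0, 120)}
merged = merge_tracklets([t1, t2, t3])
```

Requiring temporal disjointness is what keeps two players who look alike but appear simultaneously from being fused.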
πŸ‘10πŸ”₯4❀2
🧢 MagicQuill: super-easy Diffusion Editing 🧢

πŸ‘‰MagicQuill is a novel system for smart image editing: a robust UI/UX (e.g., inserting/erasing objects, editing colors, etc.) backed by a multimodal LLM that anticipates user intentions in real time. Code & demos released πŸ’™

πŸ‘‰Review https://t.ly/hJyLa
πŸ‘‰Paper https://arxiv.org/pdf/2411.09703
πŸ‘‰Project https://magicquill.art/demo/
πŸ‘‰Repo https://github.com/magic-quill/magicquill
πŸ‘‰Demo https://huggingface.co/spaces/AI4Editing/MagicQuill
🀩7πŸ”₯4❀3πŸ‘2
🧰 EchoMimicV2: Semi-body Human 🧰

πŸ‘‰Alipay (Ant Group) unveils EchoMimicV2, novel SOTA half-body human animation via APD-Harmonization. See the clip with audio (ZH/ENG). Code & demo announced πŸ’™

πŸ‘‰Review https://t.ly/enLxJ
πŸ‘‰Paper arxiv.org/pdf/2411.10061
πŸ‘‰Project antgroup.github.io/ai/echomimic_v2/
πŸ‘‰Repo-v2 github.com/antgroup/echomimic_v2
πŸ‘‰Repo-v1 https://github.com/antgroup/echomimic
❀5πŸ”₯5πŸ‘2
βš”οΈSAMurai: SAM for Trackingβš”οΈ

πŸ‘‰UW unveils SAMURAI, an enhanced adaptation of SAM 2 specifically designed for visual object tracking. New SOTA! Code under Apache 2.0 πŸ’™

πŸ‘‰Review https://t.ly/yGU0P
πŸ‘‰Paper https://arxiv.org/pdf/2411.11922
πŸ‘‰Repo https://github.com/yangchris11/samurai
πŸ‘‰Project https://yangchris11.github.io/samurai/
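SAMURAI's key addition is a motion prior: mask candidates from SAM 2 are re-scored by how well they agree with a Kalman-style motion prediction, instead of trusting the mask score alone. A minimal sketch (the scoring blend and all names are illustrative, not the paper's exact formulation):

```python
import numpy as np

def predict(state):
    """Constant-velocity motion step: state = [x, y, vx, vy]."""
    F = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ state

def select_candidate(state, candidates, mask_scores, alpha=0.5):
    """Blend the mask score with agreement to the motion prediction,
    a stand-in for motion-aware candidate selection."""
    pred = predict(state)[:2]
    dists = np.array([np.linalg.norm(np.asarray(c) - pred) for c in candidates])
    motion_scores = 1.0 / (1.0 + dists)          # closer to prediction -> higher
    total = alpha * np.asarray(mask_scores) + (1 - alpha) * motion_scores
    return int(np.argmax(total))

# target moving right at 2 px/frame; a distractor sits where the target *was*
state = np.array([10.0, 5.0, 2.0, 0.0])          # last position + velocity
candidates = [(12.0, 5.0), (10.0, 5.0)]          # predicted spot vs. old spot
best = select_candidate(state, candidates, mask_scores=[0.8, 0.85])
```

Even though the distractor gets a slightly higher mask score (0.85 vs. 0.8), the motion prior keeps the tracker on the moving target.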
πŸ”₯20❀6😍2⚑1πŸ‘1🀯1
πŸ¦–Dino-X: Unified Obj-Centric LVMπŸ¦–

πŸ‘‰Unified vision model for Open-World Detection, Segmentation, Phrase Grounding, Visual Counting, Pose, Prompt-Free Detection/Recognition, Dense Caption, & more. Demo & API announced πŸ’™

πŸ‘‰Review https://t.ly/CSQon
πŸ‘‰Paper https://lnkd.in/dc44ZM8v
πŸ‘‰Project https://lnkd.in/dehKJVvC
πŸ‘‰Repo https://lnkd.in/df8Kb6iz
πŸ”₯12🀯8❀4πŸ‘3🀩1
🌎All Languages Matter: LMMs vs. 100 Lang.🌎

πŸ‘‰ALM-Bench aims to assess the next generation of massively multilingual multimodal models in a standardized way, pushing the boundaries of LMMs towards better cultural understanding and inclusivity. Code & dataset available πŸ’™

πŸ‘‰Review https://t.ly/VsoJB
πŸ‘‰Paper https://lnkd.in/ddVVZfi2
πŸ‘‰Project https://lnkd.in/dpssaeRq
πŸ‘‰Code https://lnkd.in/dnbaJJE4
πŸ‘‰Dataset https://lnkd.in/drw-_95v
❀3πŸ‘1πŸ‘1🀩1
πŸ¦™ EdgeCape: SOTA Agnostic Pose πŸ¦™

πŸ‘‰EdgeCape: new SOTA in Category-Agnostic Pose Estimation (CAPE): finding keypoints across diverse object categories using only one or a few annotated support images. Source code released πŸ’™

πŸ‘‰Review https://t.ly/4TpAs
πŸ‘‰Paper https://arxiv.org/pdf/2411.16665
πŸ‘‰Project https://orhir.github.io/edge_cape/
πŸ‘‰Code https://github.com/orhir/EdgeCape
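At the heart of CAPE is matching support-keypoint descriptors against the query feature map; a minimal nearest-neighbor correlation sketch (EdgeCape itself additionally models skeleton/edge structure, which this omits, and all names here are illustrative):

```python
import numpy as np

def locate_keypoints(support_feats, query_grid):
    """support_feats: (K, D) descriptors at annotated support keypoints.
    query_grid: (H, W, D) query feature map. Returns (K, 2) grid locations
    of the cosine-similarity peak for each keypoint."""
    H, W, D = query_grid.shape
    q = query_grid.reshape(-1, D)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-12)
    s = support_feats / (np.linalg.norm(support_feats, axis=1, keepdims=True) + 1e-12)
    sim = s @ q.T                          # (K, H*W) similarity maps
    idx = sim.argmax(axis=1)
    return np.stack([idx // W, idx % W], axis=1)

# toy features: a "support" descriptor taken from cell (2, 3) of a 4x5 grid
rng = np.random.default_rng(0)
grid = rng.normal(size=(4, 5, 8))
kp = grid[2, 3][None, :]                   # support descriptor = query feature there
locs = locate_keypoints(kp, grid)
```

The same matching runs for any category, which is what makes the setting category-agnostic: the support image supplies the keypoint definitions at test time.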
πŸ”₯10πŸ‘1🀯1
πŸ›Ÿ StableAnimator: ID-aware Humans πŸ›Ÿ

πŸ‘‰StableAnimator: the first end-to-end ID-preserving diffusion framework for HQ human videos, with no post-processing. Input: a single image + a sequence of poses. Insane results!

πŸ‘‰Review https://t.ly/JDtL3
πŸ‘‰Paper https://arxiv.org/pdf/2411.17697
πŸ‘‰Project francis-rings.github.io/StableAnimator/
πŸ‘‰Code github.com/Francis-Rings/StableAnimator
πŸ‘12❀3🀯2πŸ”₯1