#Google just announced "TensorStore"
Novel open-source C++ / #Python library for storage and manipulation of high-dimensional data (minimal usage sketch below)
Review: https://bit.ly/3DLwbha
Project: https://bit.ly/3C4T2TR
Code: github.com/google/tensorstore
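A minimal Python sketch, assuming the Zarr driver and a local file path (path, dtype, and shape here are placeholders; see the official docs for the full spec options):
```python
# Create an on-disk Zarr array through TensorStore, write a block, read a slice.
import numpy as np
import tensorstore as ts

dataset = ts.open({
    "driver": "zarr",
    "kvstore": {"driver": "file", "path": "/tmp/tensorstore_demo/"},
}, create=True, dtype=ts.uint16, shape=[1000, 1000, 1000]).result()

# Reads and writes are asynchronous; .result() blocks until completion.
dataset[0:100, 0:100, 0:100].write(
    np.ones((100, 100, 100), dtype=np.uint16)
).result()
block = dataset[40:60, 40:60, 40:60].read().result()  # returns a NumPy array
print(block.shape)  # (20, 20, 20)
```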
Motion Transformer for #selfdriving
The 1st-place solution of the 2022 #waymo "motion prediction" challenge
Review: https://bit.ly/3f8G4LD
Paper: arxiv.org/pdf/2209.10033.pdf
Code: github.com/sshaoshuai/MTR
Image Synthesis @ 160+ FPS!
Super-fast, 3D-aware image synthesis with sparse voxels: up to 167 FPS!
Review: https://bit.ly/3r3ZNij
Paper: arxiv.org/pdf/2206.07695.pdf
Project: katjaschwarz.github.io/voxgraf
#Nvidia GET3D: #3D generative #AI
AI-generated textured 3D meshes with complex topology, rich geometry, and high-fidelity textures
Review: https://bit.ly/3SgnT5h
Code: github.com/nv-tlabs/GET3D
Project: nv-tlabs.github.io/GET3D/
Paper: nv-tlabs.github.io/GET3D/assets/paper.pdf
IDE-3D: source code is out!
Novel, photorealistic, 3D-aware facial generator: source code just released!
Review: https://bit.ly/3BNrO2C
Project: mrtornado24.github.io/IDE-3D/
Code: github.com/MrTornado24/IDE-3D
Paper: arxiv.org/pdf/2205.15517.pdf
Diffusion Model of Neural Checkpoints
Conditional diffusion model trained on millions of checkpoints of a given task/architecture
Review: https://bit.ly/3SBR4Qb
Project: www.wpeebles.com/Gpt
Code: github.com/wpeebles/G.pt
Paper: arxiv.org/pdf/2209.12892.pdf
Semantic VISOR dataset is out!
Segmenting hands and active objects in egocentric video (millions of masks)
Review: https://bit.ly/3LOBLBv
Project: epic-kitchens.github.io/VISOR/
Paper: arxiv.org/pdf/2209.13064.pdf
Olympic Games in 2028?
In a few years, the fastest runner on Earth will not be a human
Review: https://bit.ly/3Rme3O3
SOTA ALERT: new Text-to-Video #AI
#META unveils a novel Text-to-Video (T2V) generation #AI
Review: https://bit.ly/3E1ZDzG
Project: https://makeavideo.studio/
Paper: makeavideo.studio/Make-A-Video.pdf
DreamFusion: Text-to-3D via Diffusion
A DeepDream-like procedure that creates #3D assets from a given text prompt (conceptual sketch of the core loss below)
Review: https://bit.ly/3BYY5nu
Paper: arxiv.org/pdf/2209.14988.pdf
Project: dreamfusion3d.github.io/gallery.html
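A conceptual PyTorch sketch of the Score Distillation Sampling (SDS) update at the heart of DreamFusion. Here `render` and `diffusion_eps` are hypothetical stand-ins for a differentiable NeRF renderer and a frozen text-conditioned diffusion denoiser, and the timestep weighting w(t) is omitted:
```python
# Conceptual SDS sketch (not the official implementation).
import torch

def sds_step(nerf_params, render, diffusion_eps, text_emb, camera,
             alphas_cumprod, optimizer):
    image = render(nerf_params, camera)      # differentiable render, (1, 3, H, W)

    t = torch.randint(20, 980, (1,))         # random diffusion timestep
    a_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(image)
    noisy = a_bar.sqrt() * image + (1 - a_bar).sqrt() * noise

    with torch.no_grad():                    # the diffusion model stays frozen
        eps_pred = diffusion_eps(noisy, t, text_emb)

    # SDS gradient: push the rendered image toward what the text-conditioned
    # denoiser expects, backpropagating only through the renderer / NeRF.
    grad = eps_pred - noise
    loss = (grad * image).sum()              # d(loss)/d(image) == grad
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```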
Light Field Neural Rendering
Two-stage transformer that handles non-Lambertian effects (reflection, refraction, translucency)
Review: https://bit.ly/3CpIFdm
Paper: arxiv.org/pdf/2112.09687.pdf
Project: light-field-neural-rendering.github.io
Code: github.com/google-research/google-research/tree/master/light_field_neural_rendering
Phenaki: Text-to-(LOOONG)-Video generation
Phenaki is an #AI capable of realistic long-video synthesis from a sequence of open textual prompts
Review: https://bit.ly/3RwUvXx
Project: phenaki.video/index.h
Paper: openreview.net/pdf?id=vOEXS39nOF
VToonify: Neural Portrait Style Transfer
VToonify for portrait style transfer, powered by a DualStyleGAN backbone, now with #stablediffusion!
Review: https://bit.ly/3M9wgNP
Demo: https://t.co/8gXzF3IrpB
Paper: arxiv.org/pdf/2209.11224.pdf
Project: mmlab-ntu.com/project/vtoonify
Code: github.com/williamyang1991/VToonify
Stable Diffusion for #Pokemon
Fine-tuning Stable Diffusion to build a text-to-Pokémon generation model (inference sketch below)
Review: https://bit.ly/3C9qBTw
Tutorial: https://lambdalabs.com/blog/how-to-fine-tune-stable-diffusion-how-we-made-the-text-to-pokemon-model-at-lambda/
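A minimal inference sketch with Hugging Face diffusers, assuming the fine-tuned weights are published on the Hub as "lambdalabs/sd-pokemon-diffusers" (check the tutorial above for the exact checkpoint name and the full fine-tuning recipe):
```python
# Generate a Pokémon-style image from text with an (assumed) fine-tuned
# Stable Diffusion checkpoint. Requires a GPU and the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/sd-pokemon-diffusers",   # assumed Hub id from the Lambda tutorial
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("Yoda", guidance_scale=7.5, num_inference_steps=50).images[0]
image.save("yoda_pokemon.png")
```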
Imagen Video by #Google. SICK!
Novel text-conditional video generation via a cascade of video diffusion models
Review: https://bit.ly/3SH2TVH
Project: imagen.research.google/video/
Paper: imagen.research.google/video/paper.pdf
Human MDM: source code is out!
A classifier-free, diffusion-based generative model for the human motion domain (guidance sketch below)
Review: https://bit.ly/3rFhR2G
Project: guytevet.github.io/mdm-page
Paper: arxiv.org/pdf/2209.14916.pdf
Code: github.com/GuyTevet/motion-diffusion-model
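A conceptual sketch of the classifier-free guidance step used at sampling time by diffusion models such as MDM; `model` is a hypothetical denoiser that predicts the clean motion from a noisy motion, a timestep, and an optional text condition:
```python
# Classifier-free guidance sketch (illustrative, not the official MDM code).
import torch

def guided_prediction(model, x_t, t, text_emb, guidance_scale=2.5):
    # Unconditional pass: the condition is dropped (e.g., a null/empty embedding).
    pred_uncond = model(x_t, t, cond=None)
    # Conditional pass on the text embedding.
    pred_cond = model(x_t, t, cond=text_emb)
    # Extrapolate away from the unconditional prediction toward the conditional one.
    return pred_uncond + guidance_scale * (pred_cond - pred_uncond)
```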
SOTA ALERT! Particle Tracking
The new SOTA in video particle tracking: an "old school" taste with a neural flavor
Review: https://bit.ly/3CaU5Ai
Project: particle-video-revisited.github.io/
Paper: arxiv.org/pdf/2204.04153.pdf
Code: github.com/aharley/pips
#AIwithPapers: we are 4,500+!
Someone has been putting the smiling 💩 under a few recent posts. But I still love you!
Invite your friends: https://t.me/AI_DeepLearning
Long Video via Transformers
TECO performs vector-quantized latent dynamics prediction for long-video generation
Review: https://bit.ly/3Ch0tWD
Project: wilson1yan.github.io/teco/
Paper: arxiv.org/pdf/2210.02396.pdf
Code: github.com/wilson1yan/teco
SIMPLI: lightweight novel-view synthesis
Lightweight novel-view synthesis by #Samsung for arbitrary forward-facing scenes
Review: https://bit.ly/3CivSYZ
Project: samsunglabs.github.io/MLI
Code: github.com/SamsungLabs/MLI
Paper: samsunglabs.github.io/MLI/paper/paper.pdf