GPT-4 is now able to write its own code and execute Python scripts
Detroit: Become Human incoming
https://techfundingnews.com/openai-backs-1x-technologies-in-23-5m-funding-to-commercialise-humanoid-robotics/
It seems like StabilityAI has found a new founder. Let's hope they won't turn into another ClosedAI.
P.S. Sorry for not posting for so long; my job has been keeping me very busy lately.
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
This is not a recorded video: it is fully rendered from a neural model.
arxiv
page
Stability AI announces Stable Diffusion XL beta for API and DreamStudio
You no longer need depth maps to fix hands in ControlNet; it can now be done with pose2img in ControlNet 1.1.
Harmony-Rhythm Disentanglement audio remixer plugin
Sadly, only for macOS
“Generative Agents: Interactive Simulacra of Human Behavior”
The paper introduces a new approach to creating believable proxies of human behavior for interactive applications such as immersive environments and prototyping tools. The authors propose generative agents that imitate human actions and interactions in a virtual environment: these agents can form opinions, socialize, and plan activities. The evaluation shows that the generative agents produce believable individual and emergent social behaviors, such as forming relationships and coordinating group activities.
arxiv
project page
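A rough, illustrative sketch of the retrieval idea behind these agents: each memory in the agent's memory stream is scored by recency, importance, and relevance, and the top-scoring memories feed the agent's next prompt. The decay constant, equal weighting, and keyword-overlap relevance below are stand-ins for illustration, not the paper's exact formulation (which uses embedding similarity and LLM-rated importance).

```python
# Hedged sketch of memory retrieval as described in "Generative Agents":
# score = recency + importance + relevance, take the top-k memories.
from dataclasses import dataclass


@dataclass
class Memory:
    text: str
    timestamp: float   # when the memory was created
    importance: float  # 0..1; in the paper, rated by the LLM itself


def retrieve(memories, query_keywords, now, k=3, decay=0.99):
    """Return the k memories with the highest combined score."""
    def score(m):
        recency = decay ** (now - m.timestamp)  # exponential time decay
        # Toy relevance: fraction of query keywords found in the memory text.
        relevance = sum(w in m.text.lower() for w in query_keywords) / max(len(query_keywords), 1)
        return recency + m.importance + relevance  # equal weights, for illustration
    return sorted(memories, key=score, reverse=True)[:k]


memories = [
    Memory("had coffee with Klaus", timestamp=1, importance=0.2),
    Memory("planning a Valentine's Day party", timestamp=5, importance=0.9),
    Memory("watered the plants", timestamp=9, importance=0.1),
]
top = retrieve(memories, ["party", "valentine's"], now=10, k=1)
```

The retrieved memories would then be inserted into the prompt that drives the agent's next action, which is how the paper's agents stay consistent over long simulations.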
BabyAGI as a plugin for ChatGPT.
BabyAGI is an open-source platform that draws inspiration from the cognitive development of human infants to facilitate research in various fields, including reinforcement learning, language learning, and cognitive development. It is a simplified AI-powered task management system that leverages OpenAI and Pinecone APIs to create, prioritize, and execute tasks.
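The create/prioritize/execute loop described above can be sketched as follows. The real project calls the OpenAI API for task execution and creation and stores results in Pinecone; here those calls are replaced with placeholder functions so that only the control flow (a task queue that is executed, enriched, and reprioritized each iteration) is shown, runnable offline.

```python
# Minimal, illustrative sketch of a BabyAGI-style task loop.
# All three step functions are stand-ins for LLM / vector-store calls.
from collections import deque


def execute_task(task):
    # Stand-in for an OpenAI API call; returns a fake result string.
    return f"result of: {task}"


def create_new_tasks(task, result):
    # Stand-in for the LLM-driven task-creation step.
    return [f"follow up on {task}"] if "research" in task else []


def prioritize(tasks, objective):
    # Stand-in for LLM reprioritization; here: shorter tasks first.
    return deque(sorted(tasks, key=len))


def run(objective, initial_tasks, max_iters=3):
    tasks = deque(initial_tasks)
    results = []
    for _ in range(max_iters):
        if not tasks:
            break
        task = tasks.popleft()
        result = execute_task(task)
        results.append((task, result))
        tasks.extend(create_new_tasks(task, result))
        tasks = prioritize(tasks, objective)
    return results


log = run("write a report", ["research the topic", "draft outline"])
```

Note the `max_iters` cap: without it, the loop can keep spawning follow-up tasks indefinitely, which is a known failure mode of autonomous-agent loops like this.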
A follower provided some valuable reading material.
Jascha Sohl-Dickstein has written a blog post titled “The hot mess theory of AI misalignment: More intelligent agents behave less coherently”. In it, he argues that misalignment risk is not about expecting a system to “inflexibly” or “monomaniacally” pursue a simple objective, but about expecting systems to pursue objectives at all. The objectives don’t need to be simple or easy to understand.
Stability AI released their language model
Stability AI
Stability AI Launches the First of its Stable LM Suite of Language Models – Stability AI
Stability AI's open-source Alpha version of StableLM showcases the power of small, efficient models that can generate high-performing text and code locally on personal devices. Discover how StableLM can drive innovation and open up new economic opportunities…