Can AI play hide-and-seek?
Let's kick off with something called Reinforcement Learning. Here's a cool demo from OpenAI back in 2019: multi-agent interaction in a simulated hide-and-seek environment.
Full article https://openai.com/index/emergent-tool-use/
https://youtu.be/kopoLzvh5jY?feature=shared
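The core idea behind RL is simple: an agent improves its behavior purely from reward signals, by trial and error. Here's a toy sketch of that loop, tabular Q-learning on a hypothetical 5-cell corridor with a reward at the rightmost cell (a made-up minimal environment, nothing like OpenAI's actual hide-and-seek setup):

```python
# Toy RL sketch: tabular Q-learning on a 5-cell corridor.
# Goal (reward +1) is the rightmost cell. Hypothetical toy setup,
# not OpenAI's hide-and-seek environment.
import random

N_STATES = 5          # cells 0..4, goal at cell 4
ACTIONS = [-1, +1]    # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(500):                      # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = 0.0 if done else max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every non-goal cell
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The hide-and-seek agents in the demo learn the same way, just with neural networks instead of a table and a far richer environment, which is how the surprising tool-use strategies emerge.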
Did you know OpenAI has a Superalignment team that works on teaching powerful AI to be supervised by less powerful models? Fun fact: both co-leads of the team, Ilya Sutskever and Jan Leike, have left OpenAI. Could it be that Sam wanted it to be uncontrolled?
AI word of the day
AI Alignment: aims to make AI systems behave in line with human intentions and values.
Read more here: https://openai.com/superalignment/
ChatGPT got more rizz than @beka_cru and basically y'all
Kevin and Nando are notable figures in AI, specializing in Bayesian machine learning, neural networks, RL, and more. They have praised this book highly. I haven't read it yet, but I trust it's excellent.
I've met Kevin once, in person, and he's such an amazing guy. He's also the author of the famous book series on Probabilistic Machine Learning.
What If We Run Out of Data to Train LLMs?
Ever wondered if we could actually run out of data to train our massive language models?
Tomorrow, I'll be hosting Niklas Muennighoff (Research Engineer at Contextual AI) to present an amazing paper called "Scaling Data-Constrained Language Models".
Add this to your calendar: https://cohere.com/events/niklas-muennighoff-2024-06
Chapa is looking for an AI/ML Engineer - R&D.
Apply here if you are interested: https://apply.workable.com/chapa/j/F70D15208F/
When @baydis uses it it's a feature, when OpenAI provides the models it's a product
Attention is all you need (maybe, at least for now).
This is the paper that introduced the transformer architecture, the backbone of modern LLMs. To understand the architecture, there's no better resource than Jay Alammar's blog post. It's the best thing ever written about transformers.
The Illustrated Transformer
https://jalammar.github.io/illustrated-transformer/
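If you want the gist before diving into the blog: the paper's core building block is scaled dot-product attention, where each token mixes information from the others according to query-key similarity. A minimal NumPy sketch (single head, no masking, no learned projections, made-up shapes):

```python
# Minimal sketch of scaled dot-product attention from
# "Attention Is All You Need". Illustrative shapes only:
# 3 tokens, d_k = 4 dimensions; single head, no masking.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights          # output = weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = attention(Q, K, V)
print(out.shape, w.shape)  # (3, 4) (3, 3)
```

A full transformer stacks this with learned Q/K/V projections, multiple heads, and feed-forward layers; Jay's post walks through all of it with great visuals.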
Our paper is out!
I was part of this paper called "CVQA - Culturally-diverse Multilingual Visual Question Answering Benchmark"
Go ahead and check out the paper on arXiv:
https://arxiv.org/abs/2406.05967