mogn
The nerve of these people!
Let's say I update the source code of the software one line at a time. At what point does the project stop belonging to the college? 🤔😂
Forwarded from et/acc
$4.15B invested in open-source generates $8.8T of value for companies (aka $1 invested in open-source = $2,000 of value created)
- Companies would need to spend 3.5 times more on software than they currently do if OSS did not exist
(Harvard research on open-source)
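Quick sanity check on that ratio (my own back-of-the-envelope arithmetic, just plugging in the figures quoted above):

# Back-of-the-envelope check of the quoted open-source value ratio.
invested = 4.15e9        # $4.15B invested in open-source
value_created = 8.8e12   # $8.8T of value created for companies

print(value_created / invested)  # ~2120, i.e. roughly $2,000 of value per $1 invested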
The man who doesn't read has no advantage over the man who cannot read.
– Mark Twain
I let Claude play with a Rubik's Cube
mogn
https://github.blog/ai-and-ml/generative-ai/what-are-ai-agents-and-why-do-they-matter/
Anthropic
Building Effective AI Agents
Discover how Anthropic approaches the development of reliable AI agents. Learn about our research on agent capabilities, safety considerations, and technical framework for building trustworthy AI.
mogn
https://youtu.be/3oCFHE9x0As?feature=shared
Who knew ChatGPT started as a Reddit chatbot 😂
How accurate is this? 😂
https://hoodmaps.com/addis-ababa-neighborhood-map
Hoodmaps
Addis Ababa Neighborhood Map
Addis Ababa Neighborhood Map: Normie 2.0, Wannabe 2, Spoiled rich kids, Rickety museum, Families live here, You'll get stoned, Sleep here, Wannabe hippies, Big Uni with 1 million kids, Rich folks, Richie rich. A neighborhood map of Addis Ababa by locals.
The holy grail of mechanistic interpretability is:
- Fully understanding how a neural network makes its decisions—step by step, neuron by neuron—kind of like being able to read a computer program that the AI "wrote" inside itself.
But why would we need to fully understand how a neural network makes its decisions? 🤔
- Control: It helps us make models do what we want and only what we want.
What things do we want or not want?
We don't want bad behaviors. And if we understand what the model is doing internally, we can detect and fix bad behaviors (like bias, deception, or unexpected actions).
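To make "reading the network neuron by neuron" a bit more concrete, here's a toy sketch (my own illustration, not from any particular interpretability paper): it builds a tiny PyTorch model and uses a forward hook to read out the hidden activations, which is the raw signal interpretability work then tries to explain.

# Toy sketch: peek at the hidden activations of a small network.
# Mechanistic interpretability asks what concept, if any, each of these
# numbers encodes and how they combine into the final decision.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),   # input features -> 8 hidden neurons
    nn.ReLU(),
    nn.Linear(8, 2),   # hidden neurons -> 2 output logits
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on the hidden layer so we can read it neuron by neuron.
model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(1, 4)
logits = model(x)

print(activations["hidden_relu"])  # one value per hidden neuron
print(logits)                      # the model's "decision"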
mogn
The question then becomes, "Do we have a universally agreed upon set of bad behaviors?"
I don't think so, but if that's the case who gets to define these "bad behaviors"?