et/acc
Ethiopian Acceleration.
Growth unlocks choices.



Discussion at: https://t.me/+Prr0V8-VR0c4NTA0
Forwarded from ALX Ethiopia
ALX Ethiopia, in partnership with OpenAI and the Africa Fintech Summit, is hosting the Addis AI Forum in Addis Ababa on 17 February 2026 during AU Summit Week.

The forum brings together policymakers, innovators, startups, and educators to discuss Africa’s readiness for an AI-driven future — from talent and infrastructure to opportunity and policy.

Join us in shaping Africa’s AI future. Apply to attend in person or virtually via:
https://africafintechsummit.com

#AI #AddisAbaba #ALX #OpenAI #AFTS #AfricaRising
API cost 🤯

(per-second API hits)
🤯4
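A back-of-envelope sketch of where the shock comes from. Every number below is a hypothetical placeholder, not real pricing:

# Rough per-second API cost; all figures are assumed for illustration.
requests_per_second = 50            # sustained API hits per second (assumed)
input_tokens_per_request = 2_000    # prompt size (assumed)
output_tokens_per_request = 500     # completion size (assumed)
price_per_1m_input = 3.00           # USD per 1M input tokens (assumed)
price_per_1m_output = 15.00         # USD per 1M output tokens (assumed)

cost_per_request = (
    input_tokens_per_request / 1_000_000 * price_per_1m_input
    + output_tokens_per_request / 1_000_000 * price_per_1m_output
)
cost_per_second = requests_per_second * cost_per_request
print(f"${cost_per_second:.2f}/s -> ${cost_per_second * 86_400:,.0f}/day")

With these made-up numbers that is already about $0.68 per second, roughly $58k per day, which is why per-second traffic is the scary unit.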
Good engineers have mechanical sympathy

"You don't have to be an engineer to be be a racing driver, but you do have to have Mechanical Sympathy" - Jackie Stewart, racing legend.

Mechanical sympathy is when you use a tool or system with an understanding of how it operates best.
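A toy illustration (the NumPy example and numbers are mine, not from the post): the same sum runs faster when the loop order matches how the array is laid out in memory, which is what mechanical sympathy looks like in code.

import time
import numpy as np

a = np.random.rand(5_000, 5_000)  # row-major (C order) by default

# Sympathetic: walk memory contiguously, row by row.
t0 = time.perf_counter()
s1 = sum(row.sum() for row in a)
t1 = time.perf_counter()

# Unsympathetic: stride across rows, column by column.
s2 = sum(a[:, j].sum() for j in range(a.shape[1]))
t2 = time.perf_counter()

print(f"row-wise:    {t1 - t0:.2f}s")
print(f"column-wise: {t2 - t1:.2f}s  (same result, worse cache behavior)")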
Never believed in AI boyfriend / AI girlfriend use cases.

I mocked the AI-BF and AI-GF trend… until Ethio Telecom Cloud docs randomly switched to Chinese.

This Valentine's Day, maybe I should be building an AI BF/GF for Ethiopian devs: emotional support + English/Amharic translation via AddisAI.
😁31
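Half joking, but the mechanics are simple. A minimal sketch assuming an OpenAI-compatible chat endpoint; the base URL, model name, and AddisAI behavior here are hypothetical placeholders, not a documented API:

# Hypothetical sketch: base_url and model are placeholders, not a real AddisAI API.
from openai import OpenAI

client = OpenAI(base_url="https://api.addisai.example/v1",  # placeholder URL
                api_key="YOUR_KEY")

SYSTEM = ("You are a supportive companion for an Ethiopian developer. "
          "Reply warmly, and translate between English and Amharic on request.")

history = [{"role": "system", "content": SYSTEM}]
while True:
    user = input("you> ")
    history.append({"role": "user", "content": user})
    reply = client.chat.completions.create(model="addis-chat",  # placeholder
                                           messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    print("bf/gf>", text)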
Advice for beginners in AI: How to learn and what to build

https://www.youtube.com/watch?v=dGqhTpsu_5Y

Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch).
1
🔥💰
🔥1
et/acc
🔥💰
At the core of it, the architectures didn't change; the way they get tweaked has changed to get us where we are. Same transformer DNA with multiple lineages.
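That shared DNA is small enough to write down. A minimal sketch of the block every lineage keeps tweaking (attention, MLP, residuals, normalization); dimensions are arbitrary and this is illustrative NumPy, not any particular model:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2):
    # Self-attention sublayer (pre-norm + residual).
    h = layer_norm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    x = x + softmax(scores) @ v @ Wo
    # Position-wise MLP sublayer (pre-norm + residual).
    h = layer_norm(x)
    x = x + np.maximum(h @ W1, 0.0) @ W2   # ReLU MLP
    return x

d, seq = 64, 10
rng = np.random.default_rng(0)
params = [rng.normal(0, 0.02, s) for s in
          [(d, d), (d, d), (d, d), (d, d), (d, 4 * d), (4 * d, d)]]
x = rng.normal(size=(seq, d))
print(transformer_block(x, *params).shape)  # (10, 64)

Swap the norm placement, the activation, the attention pattern, or the MLP width and you get most of the "new" architectures; the skeleton stays the same.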
et/acc
Nathan Lambert and Sebastian Raschka
When the smartest people say $2,000 subscriptions are coming,

that is a bullish sign for AI.

Better to invest in $NVDA and the likes,
the merchants of
token ingress
token egress
et/acc
Nathan Lambert
One piece of his advice that university teachers could also give their students:

If your goal is career momentum as a researcher working on evaluation, the “frontier labs” picking up your evaluation is the real leverage. You don’t need every project to achieve that. But imagine: you’re at a small university with no compute, you find a problem Claude struggles with, and the next Claude model mentions it in a blog post—that’s a career rocket ship. It’s narrow work, but it maximizes impact with minimal compute. You need to anticipate where the models will struggle months in advance.

There’s still a lot of upward mobility if you focus on language models, but from an academic perspective, the transformative impact (like becoming the next Yann LeCun) doesn’t come from caring about every detail of language model development.
1
Forwarded from Rust-Script
Creator of openclaw
How the world moved in 10 years from the Vitalik Buterin era of "Code is Law" to the Andrej Karpathy era of "Code is Slop". 🤣
will to code
Ideas are infinite
Ideas may be infinite, but Good Ideas with Better Explanations are rare.
🔥1
et/acc
Good Ideas with Better Explanations
Today we lack good ideas in AI.

Historical leaps like AlexNet (2012) or Transformers used modest compute but unlocked eras; today, even vast resources chase incremental gains.

Novel ideas, e.g., self-play among agents or value-aligned superintelligence, could scale dramatically, but investors lack proven alternatives, creating a "research taste" bottleneck.

Money abounds, yet without ideas that generalize, AGI remains elusive. The world has capital but needs vision to bet on architectures that adapt, align, and outperform Transformers at a fraction of the cost. Trillions in projected future spending risk stalling without new paradigms, as current approaches hit walls in reasoning, alignment, and efficiency.

Ilya Sutskever – We're moving from the age of scaling to the age of research.


Sutskever, in his recent Dwarkesh Podcast interview, describes the "age of scaling" (roughly 2020-2025) as ending, with simple compute increases no longer yielding proportional gains. Models excel on benchmarks yet fail at reliable real-world generalization, like fixing code without introducing new bugs, due to overfitting and poor adaptability. He calls for foundational research in areas like continual learning and dynamic architectures to bridge this gap toward AGI.
1🔥1
Forwarded from Rust-Script
Imagine the CEO of Vercel covering your $46k bill when the entire reply section told you to migrate to Cloudflare.
😁2
What AI can't do today.
Awakenings?
🇨🇳🇺🇸🇪🇺 “Over the past 20 years, China has grown by around 8% a year, the U.S. by 2%, and the EU on average only by 1%.

We must close this gap.”
— Germany's Merz
"It’s why we have git squash and send dignified PRs instead of streaming every compile error to our entire team."