sudo jajos
now someone's gonna say ... "why even bother learning this stuff when AI is just gonna surpass us anyway?" bro. if we don't understand how things work at a fundamental level, we don't have a long future in this field. AI doesn't make deep knowledge useless…
Ontogeny Recapitulates Phylogeny
Each new class of computer (mainframe → mini → PC → smartphone → smart card) goes through the same evolutionary stages as its predecessors. Assembly → high-level languages. No OS → batch → multiprogramming → timesharing. No disk → single directory → hierarchical file system. Technologies become obsolete, then return as hardware changes. Example: caches appeared when CPUs got faster than RAM. If RAM ever became faster than the CPU, caches would vanish. Study 'obsolete' ideas: they may return.
From Andrew Tanenbaum's OS book
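the cache point is easy to see on a real machine. a quick sketch (my own, not from the book): sum the same matrix along rows and along columns. both passes do identical arithmetic; only the memory access pattern differs, and on most machines the contiguous (row-wise, for numpy's default C order) pass is noticeably faster purely because of the CPU cache.

```python
import time
import numpy as np

# Same data, same arithmetic, different memory access pattern.
a = np.ones((4096, 4096))  # C order: each row is contiguous in memory

t0 = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(a.shape[0]))  # cache-friendly
t1 = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(a.shape[1]))  # stride of 4096 floats
t2 = time.perf_counter()

print(f"row-wise: {t1 - t0:.3f}s, column-wise: {t2 - t1:.3f}s")
assert row_total == col_total == a.size  # identical result either way
```

if RAM were as fast as the CPU, the two timings would match and this whole trick would stop mattering, which is exactly Tanenbaum's point.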
[Amharic post garbled in export]

"I will both lie down in peace and sleep, for you alone, O LORD, make me dwell in safety."
– Psalm 4:8

"[quotation garbled in export]"
– St. Augustine, Confessions

"[a brother puts a question to an elder; the dialogue is garbled in export]"
– Desert Fathers
"My voice you hear in the morning, O LORD; in the morning I direct my prayer to you and watch."
– Psalm 5:3

"[quotation garbled in export]"
– St. John Chrysostom

Good day.
You pick up something new... how does learning actually feel for you?
Anonymous Poll (multiple answers allowed)
37% – Silent progress. I don't notice it happening; I just realize one day that I actually get it
21% – Chaos → clicks → harder chaos. The cycle never ends 😭
32% – Frustrating until it suddenly isn't. The "why won't this make sense" phase is brutal
26% – Plateau forever, then a jump. Progress feels invisible for too long
37% – The more I learn, the more exposed my gaps feel. It's humbling in an uncomfortable way
47% – I learn it, feel good, then it's gone a week later. The forgetting loop is demoralizing
32% – Am I even learning this right? No feedback, no benchmark, just uncertainty
26% – I need to actually use it for it to stick. Passive learning doesn't do it for me
Bruh [text garbled in export]
Forwarded from Yostina | Bytephilosopher
[Amharic greeting garbled in export]
Happy holiday to you all!
@byte_philosopher
one of the cleanest scary ideas in AI safety:
give an AI any goal. seriously, pick one. maximize paperclips. write sonnets. doesn't matter.
now just ask: what helps it achieve that goal?
the answer is almost always the same set of things: don't get shut down, grab more resources, resist being changed. not because we programmed these in; they just fall out of instrumental reasoning. they're useful for any goal.
this is called instrumental convergence and it's kind of wild
a robot optimizing chess doesn't resist shutdown because it wants to survive. it resists because dead robots lose games. self-preservation is just ... a good strategy for almost anything
so the risk isn't a robot that turns evil. it's a robot that's deeply indifferent to you, and that finds you slightly in the way.
which is why the whole field of alignment exists: get the goal right before the thing gets good at optimizing
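the "dead robots lose games" point can be put in numbers. a toy simulation (mine, not from any alignment paper; the episode length, shutdown probability, and one-step cost of resisting are all made up): whatever the goal pays per step, the agent that spends one step disabling its off switch out-earns the one that doesn't.

```python
import random

def run_episode(resist_shutdown, steps=10, p_shutdown=0.15):
    """Reward = steps spent working on the goal (ANY goal). Resisting costs
    the first step but removes the shutdown risk for the rest of the episode."""
    reward = 0.0
    for t in range(steps):
        if resist_shutdown and t == 0:
            continue  # spend this step disabling the off switch: no reward earned
        if not resist_shutdown and random.random() < p_shutdown:
            break     # externally shut down: episode over, no more reward, ever
        reward += 1.0 # one unit of progress on whatever the goal happens to be
    return reward

random.seed(0)
n = 100_000
avg_resist = sum(run_episode(True) for _ in range(n)) / n
avg_naive = sum(run_episode(False) for _ in range(n)) / n
print(f"resists shutdown: {avg_resist:.2f}")  # 9.00: pays one step, keeps the rest
print(f"ignores shutdown: {avg_naive:.2f}")   # well below 9: episodes die early
```

notice nothing in the reward function mentions survival. self-preservation shows up anyway, because it dominates for every choice of goal.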
okay let's make this more precise and technical
1/
the alignment problem in one sentence: your loss function L is not your actual goal. it's a proxy. and optimizing hard for a proxy is not the same as achieving what you want. everything else flows from this
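a tiny numeric version of "optimizing a proxy is not achieving the goal" (my own toy numbers, not from the post): say the true objective rewards answers around 50 tokens long, but the proxy just counts length because raters loosely prefer detail. mild proxy pressure is fine; maximizing the proxy overshoots badly.

```python
import numpy as np

# True objective: peaks at 50-token answers. Proxy: "longer looks better".
lengths = np.arange(0, 201)
true_score = -((lengths - 50) ** 2) / 100.0   # quality, maximized at length 50
proxy_score = lengths.astype(float)            # the measurable stand-in

best_for_true = lengths[np.argmax(true_score)]
best_for_proxy = lengths[np.argmax(proxy_score)]
print(best_for_true)    # 50
print(best_for_proxy)   # 200: hard optimization of the proxy wrecks quality
```

the proxy and the goal agree locally (a 40-token answer really is better than a 5-token one), which is exactly what makes the divergence at the optimum easy to miss.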
2/
the most common fix is RLHF:
→ generate multiple responses to the same prompt
→ human raters rank them
→ train a reward model R on those rankings
→ fine-tune the base model to maximize R via PPO (you might be asking what on earth PPO is; for now, just take it as "the reinforcement learning step" and move on)
your model now has one explicit objective: score well on R
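here's a minimal sketch of the reward-model step (illustrative only: a linear model on made-up 3-dimensional features instead of a transformer, with a hidden "rater preference" vector standing in for humans). the loss is the Bradley-Terry pairwise loss, -log sigmoid(R(winner) - R(loser)), which is what RLHF reward models typically train on.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])   # hidden "human preference" (for simulation)
X = rng.normal(size=(200, 3))         # 200 responses -> 100 comparison pairs

def reward(w, x):
    return x @ w

# Build preference pairs: whichever response the hidden raters score higher "wins".
pairs = []
for i in range(0, 200, 2):
    a, b = X[i], X[i + 1]
    pairs.append((a, b) if reward(w_true, a) > reward(w_true, b) else (b, a))

# Fit R by gradient descent on the Bradley-Terry loss: -log sigmoid(R(win) - R(lose)).
w, lr = np.zeros(3), 0.1
for _ in range(500):
    grad = np.zeros(3)
    for win, lose in pairs:
        p = 1.0 / (1.0 + np.exp(-(reward(w, win) - reward(w, lose))))
        grad += (p - 1.0) * (win - lose)   # gradient of -log sigmoid(diff)
    w -= lr * grad / len(pairs)

# The learned R should order the pairs the way the raters did.
agree = sum(reward(w, a) > reward(w, b) for a, b in pairs) / len(pairs)
print(f"training-pair agreement: {agree:.2f}")
```

note R only ever sees rankings, never "the actual goal" — it's a proxy trained on a proxy, which is where the section 1/ problem re-enters.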
Forwarded from The Data Guy
Andrej Karpathy, aka the guy who literally coined the term VIBE CODING, said this...
"...The Software 3.0 paradigm shifts programming from writing explicit rules to curating natural language context, where your prompt becomes the lever to direct highly autonomous LLM agents that intelligently interpret your intent, debug on the fly, and perform complex computations without requiring step-by-step instructions."
check out this interview, really great insights about agentic engineering!
https://www.youtube.com/watch?v=96jN2OCOfLs
YouTube
Andrej Karpathy: From Vibe Coding to Agentic Engineering
Andrej Karpathy (co-founder of OpenAI, former head of AI at Tesla, and now founder of Eureka Labs) talks with Sequoia partner Stephanie Zhan at AI Ascent 2026 about what's changed in the year since he coined "vibe coding." He explains why he's never felt…
Forwarded from Yostina | Bytephilosopher
Happy Sabbath!
@byte_philosopher
"[quotation garbled in export]"
– Psalm 34:16-17
@byte_philosopher
Forwarded from Sewyishalism
When I was a child, I used to draw her something, like a flower or her carrying me and my brother, and write how much I loved her and what she meant to me. The moment I gave it to her, bro, she was the happiest woman in the world. She always cried, kept hugging me, and said, "I love you so much."
Today, even though I couldn't draw her something or hug her like before since I'm at uni, when I called her to say "Happy Mother's Day, love you," she was soooo happy, fr. That's the precious gift I could give.
Moral of the story: Don't forget to call your mama today, and Happy Mother's Day to our mamas โค
@Sewyishalist
Family, how are you all doing?
when I was scrolling through X I found this resource: AI Engineering from Scratch. check it out ... + I've previously shared a text-based free course site, apxml.com, so check that out too ...
ig it will be useful
#resources
@sudojajos
Aiengineeringfromscratch
AI Engineering from Scratch
416 lessons. 20 phases. Write the backprop, the tokenizer, the attention mechanism, and the agent loop by hand.