sudo jajos
$ sudo jajos --verbose

Junior AI Engineer @ iCogLabs | Oriental Orthodox | Just fascinated by anything fascinating 😁

Will share my thoughts and journey here
I have been using Opus 4.6 (and sometimes Codex), and you know I can't transition from that to Gemini 3 Flash 👀 It was just the rate limit ... till it resets, I also rest and learn other new stuff 😁
๐Ÿคฃ2๐Ÿ˜1
I am currently in a project pitch at the company I work at, and the project being pitched now is a maths AI: an AI project for ATP (automated theorem proving). It is quite interesting, you know 😌😌
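For anyone new to the term: automated theorem proving is software that searches for or checks formal, machine-verifiable proofs. As a toy illustration (my own example in Lean 4, nothing from the actual pitch), a machine-checkable theorem looks like this:

```lean
-- A tiny machine-checkable proof: addition on naturals is commutative.
-- `Nat.add_comm` is a lemma from Lean's standard library; an ATP system's
-- job is to find proof terms like this automatically instead of a human
-- writing them by hand.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```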

Before that, a music AI was pitched, built around music-generating AI models. What I learned there (jargon alert 😁) was that diffusion-based models can be seen as a special Schrödinger bridge problem where the end distribution is a standard Gaussian ... I didn't know that till now ... (I will post in the future about exactly what I jargoned 😁 now 😁)
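My attempt to unpack that jargon (my own notation, not from the pitch): the Schrödinger bridge problem looks for the stochastic process closest, in KL divergence, to a reference diffusion while being pinned to fixed distributions at both endpoints:

```latex
% Schrodinger bridge: among all path measures P on [0,1] whose endpoint
% marginals are pinned to mu_0 (the data) and mu_1 (the target), pick the
% one closest in KL divergence to a reference diffusion Q:
P^\star \;=\; \operatorname*{arg\,min}_{P \,:\, P_0 = \mu_0,\; P_1 = \mu_1} \mathrm{KL}(P \,\|\, Q),
\qquad \mu_1 = \mathcal{N}(0, I)
```

Score-based diffusion models then correspond to the special case where the terminal marginal is a standard Gaussian and the reference process is the forward noising SDE, so training only has to learn the reverse-time drift.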
โค2๐Ÿ”ฅ1
If you don't know the jargon ... don't worry, the terms I mentioned here will also be included in the AI playground being built, and you will get to know them when the time comes. But if anyone can't rest and is curious (which I am, for example), I can write a simple post this evening or tomorrow ... 😁
Forwarded from Dagmawi Babi
Moltbook got acquired by Meta (along with the team, who aren't really engineers). It's good to start vibe-coding even if you're not in tech. You'd be surprised by how many creative ideas you can come up with :)
๐Ÿ˜1
Afternoon guys .... 🙌
https://youtu.be/_Ux13UEqIYo?si=hHAXBrWQLdrbSzZz
But really, AI at the aggregate level is not looking good ...
https://blog.mlreview.com/making-sense-of-the-bias-variance-trade-off-in-deep-reinforcement-learning-79cf1e83d565

This is a very nice blog about the bias-variance tradeoff in RL ... I think most of you would be familiar with the bias-variance tradeoff in supervised ML, but in RL there is more to it ... I invite anyone interested to read it ...
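A quick taste of what the blog is about, as a toy simulation I put together (the chain MDP and the numbers are my own illustration, not from the post): Monte Carlo returns sum many sampled rewards, so they are unbiased but high-variance; one-step TD bootstraps from a value estimate, so it has low variance but inherits whatever bias that estimate carries.

```python
import random
import statistics

# Toy chain MDP: H steps, each step gives reward ~ N(1, 1); true value V = H.
H = 10
TRUE_V = float(H)
random.seed(0)

def mc_return():
    # Monte Carlo target: sum all H sampled rewards.
    # Unbiased, but the noise of every step adds up (variance ~ H).
    return sum(random.gauss(1.0, 1.0) for _ in range(H))

def td_estimate(bootstrap_error=0.5):
    # One-step TD target: one sampled reward plus a learned value estimate
    # for the remaining H-1 steps. Only one noisy sample (variance ~ 1),
    # but the target is shifted by the bootstrap's error (bias).
    v_hat = (H - 1) + bootstrap_error
    return random.gauss(1.0, 1.0) + v_hat

mc = [mc_return() for _ in range(20000)]
td = [td_estimate() for _ in range(20000)]

print(f"true value: {TRUE_V}")
print(f"MC  mean={statistics.mean(mc):.2f} var={statistics.variance(mc):.2f}")
print(f"TD  mean={statistics.mean(td):.2f} var={statistics.variance(td):.2f}")
```

Running it, the MC estimator centers on the true value with variance around 10, while the TD estimator's variance is around 1 but its mean sits at the bootstrap's bias. N-step returns interpolate between the two extremes.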
๐Ÿ‘1๐Ÿ”ฅ1
Forwarded from GTB
I donโ€™t think I even need to say who did this, right?


Babi is one of the kindest, most supportive, and most humble people I have ever been blessed to meet in my life.


I have a deep respect for a man who can sincerely say from his heart that everything he has ... and everything he will ever have ... is simply God working through him. That kind of humility is rare, and you truly embody it.

You are one of those people, man.


All I can say is this ... may God fill every empty place in your life. May He continue to show you even more goodness, blessings, and purpose along your path.


Just like you always do, today you inspired another person to try to be better. You reminded me of the kind of person I want to become.


May God protect you, guide you, and keep blessing your life. And I pray that one day I can be like you ... so that I can help someone else the way you have helped and inspired me.
Forwarded from Sirack's Universe
โค2
https://youtu.be/byQmJ9x0RWA

These AI safety videos really are freaking me out ... 😱
Forwarded from ChaosCreator
Hey guys, I hope you didn't miss me too much! Today won't have too many exciting new things, but I think it can be considered a good take.
Today I was watching a Crash Course video about how to navigate the internet, and it mentioned two things: vertical reading and lateral reading. I want to explain these concepts and also ramble about what I was thinking at the time.
In short, vertical reading is when you read a piece of content directly from top to bottom. Lateral reading is when you first read about the source from other sources โ€” to understand its intentions and agenda โ€” and then read the original content with that context in mind.
Although this is a good habit to have, I don't think you or I, who are being bombarded with new information left and right, can realistically do this in-depth research for everything we hear. Just today I watched things about the central limit theorem, a method for building curves from moving straight lines, diffusion algorithms, CNNs, and graph neural networks โ€” a lot of things just from morning to evening.
So what I think we should do, or at least the most efficient approach to lateral reading, is to only apply it when you encounter a contradictory claim. Until then, hold your current view as authoritative. But when you do see a contradiction, instead of trying to make the new claim fit your existing belief, first start by uncovering the hidden assumptions behind your own belief through lateral reading, before evaluating the new one. Moreover, I wonder whether an algorithm of this kind would beat the efficiency of A* and greedy algorithms. I asked ChatGPT and it said it looks similar to lazy algorithms such as D* (Dynamic A*), so if you guys know anything about this algorithm, or this topic, feel free to comment 🙂‍↔️🙂‍↔️
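The strategy above can be sketched as a tiny "lazy verification" loop (a hypothetical illustration, with a stand-in `verify` function; nothing here is a real fact-checking API): accept claims cheaply, and only pay the expensive lateral-reading cost when a new claim contradicts a held belief.

```python
# Hypothetical sketch of "lazy lateral reading": accept claims cheaply and
# only pay the verification cost when a new claim contradicts a held belief.

def verify(claim):
    # Stand-in for expensive lateral reading (checking the source's agenda,
    # cross-referencing, etc.). Here it just returns a canned trust score.
    return {"water boils at 100C at sea level": 0.99,
            "water boils at 90C at sea level": 0.05}.get(claim, 0.5)

class LazyBeliefs:
    def __init__(self):
        self.held = {}  # topic -> claim, accepted without verification

    def ingest(self, topic, claim):
        current = self.held.get(topic)
        if current is None or current == claim:
            self.held[topic] = claim      # no contradiction: accept lazily
            return "accepted"
        # Contradiction: NOW do the expensive check on both claims,
        # starting with our own belief's hidden assumptions.
        if verify(claim) > verify(current):
            self.held[topic] = claim
            return "revised"
        return "kept"

b = LazyBeliefs()
print(b.ingest("boiling", "water boils at 100C at sea level"))  # accepted
print(b.ingest("boiling", "water boils at 90C at sea level"))   # kept
```

The resemblance to lazy search algorithms is that work (edge evaluation there, source evaluation here) is deferred until the current solution is actually challenged.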
โค2
Morning fam 🙌🙌🙌
Forwarded from Gracious ✝
"แ‹จแˆแ‰ตแ‹ˆแ‹ฑแ‰ต แˆฐแ‹Žแ‰ฝ แˆแŒฝแˆž แŠ แˆแ‰…แˆฑแˆˆแ‰ตแกแก

แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แŠ แˆ›แŠ‘แŠคแˆ แŠ แˆแˆ‹แŠซแ‰ฝแŠ•แกแก

แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แˆ˜แ‹ตแŠƒแŠ’แ‰ณแ‰ฝแŠ• แŠขแ‹จแˆฑแˆตแกแก


แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แŠ•แŒ‰แˆฃแ‰ฝแŠ• แŠญแˆญแˆตแ‰ถแˆตแกแก

แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แ‹ˆแ‹จแ‹ แŒปแ‹ตแ‰ƒแŠ• แŠจแ‹•แŠ•แŒจแ‰ต แŠ แ‹ˆแˆจแ‹ฑแ‰ตแกแก

แˆฅแŒ‹แ‹แŠ•แˆ แˆˆแˆ˜แŒˆแАแ‹ แŠจแˆญแ‰ค แ‹จแˆšแ‰ฃแˆ แ‹จแŒฃแˆแŒ  แˆฝแ‰ฑแŠ•แŠ“ แŠ•แŒนแˆ• แ‰ แแ‰ณแŠ• แŠ แˆ˜แŒกแกแก

แ‰…แ‹ฑแˆต แŠฅแŒแ‹šแŠ แ‰ฅแˆ”แˆญ แ‰…แ‹ฑแˆต แŠƒแ‹ซแˆ แ‰…แ‹ฑแˆต แˆ•แ‹ซแ‹ แ‹จแˆ›แ‹ญแˆžแ‰ต แ‹จแˆ›แ‹ญแˆˆแ‹ˆแŒฅแกแก

แŠจแ‹ตแŠ•แŒแˆ แˆ›แˆญแ‹ซแˆ แ‹จแ‰ฐแ‹ˆแˆˆแ‹ฐ แŠ แ‰คแ‰ฑ แ‹ญแ‰…แˆญ แ‰ แˆˆแŠ•แกแก

แ‰…แ‹ฑแˆต แŠฅแŒแ‹šแŠ แ‰ฅแˆ”แˆญ แ‰…แ‹ฑแˆต แŠƒแ‹ซแˆ แ‰…แ‹ฑแˆต แ‹จแˆ›แ‹ญแˆžแ‰ต แ‹จแˆ›แ‹ญแˆˆแ‹ˆแŒฅ แ‰ แ‹ฎแˆญแ‹ณแŠ–แˆต แ‹จแ‰ฐแŒ แˆ˜แ‰€ แ‰ แˆ˜แˆตแ‰€แˆแˆ แ‹จแ‰ฐแˆฐแ‰€แˆˆ แŠ แ‰คแ‰ฑ แ‹ญแ‰…แˆญ แ‰ แˆˆแŠ•แกแก"

แ‰…แ‹ฑแˆต แ‹ฎแˆแŠ•แˆต แŠ แˆแ‹ˆแˆญแ‰…
https://github.com/danielgatis/rembg?tab=readme-ov-file

This is an awesome CLI tool - you don't have to search for "free AI background remover" 😁 just use it like this:
rembg i path.png
# or open the HTTP server:
rembg s --host 0.0.0.0 --port 7000 --log_level info
๐Ÿ‘1๐Ÿ‘€1
So today I was reading this paper: https://arxiv.org/html/2511.22570v1 -> it is DeepSeekMath-V2 and, you know, it is insane: it scored gold level on IMO 2025, while my brain here still glitches understanding the architecture 😁

It was a good read -> obviously I used Claude to study it deeply, and I made a compressed version of the whole paper in the Telegraph post below ... anyone interested can read it 🙂‍↕️🙂‍↕️

https://telegra.ph/DEEPSEEKMath-V2-03-13
๐Ÿ‘3