Machine, are you learning?
841 subscribers
33 photos
5 videos
22 files
102 links
Insights in recent Machine Learning topics, approaches, models and papers.
Interested in collaboration, DM @infatum
Researchers at the Flatiron Institute developed Simulation-Based Inference of Galaxies (SimBIG), which predicts major cosmological parameters with unprecedented precision:

https://scitechdaily.com/rewriting-cosmic-calculations-new-ai-unlocks-the-universes-settings/
When you outplayed and outsmarted yourself
🔥 The code of DynOMo is out 🔥

👉 DynOMo is a novel model that can track any point in a dynamic scene over time via 3D reconstruction from monocular video: 2D and 3D point tracking from unposed monocular camera input

👉Review https://t.ly/t5pCf
👉Paper https://lnkd.in/dwhzz4_t
👉Repo github.com/dvl-tum/DynOMo
👉Project https://lnkd.in/dMyku2HW
https://arxiv.org/pdf/2501.12948

DeepSeek-R1-Zero is a pure RL model, trained without any supervised data or fine-tuning, which achieved remarkable reasoning capabilities. It was trained on top of the DeepSeek-V3-Base model using the GRPO (Group Relative Policy Optimisation) approach. This is a truly amazing result that shows how undervalued RL's potential is. As I foresaw, the next big leap in AI will be achieved by massive adoption of RL and its incorporation with pre-trained DL models.

Is RL mass-adoption coming?


#DeepSeek #reinforcementlearning #LLM #GRPO #RL
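To make the GRPO mention concrete, here is a minimal sketch (my own illustration, not code from the paper) of the group-relative advantage at the heart of GRPO: sample a group of completions per prompt, score them, and normalise each reward against the group's own mean and standard deviation, so no separate learned critic is needed.

```python
# Hypothetical sketch of GRPO's group-relative advantage.
# For a group of completions sampled from one prompt:
#   A_i = (r_i - mean(r)) / std(r)
import statistics

def grpo_advantages(rewards):
    """Return group-relative advantages for one group of sampled completions."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Example: 4 sampled answers to one prompt, scored by a rule-based reward
# (1.0 = correct, 0.0 = wrong). Correct answers get positive advantage,
# wrong ones negative, relative to the group itself.
rewards = [1.0, 0.0, 1.0, 0.0]
print(grpo_advantages(rewards))
```

These per-sample advantages then weight the policy-gradient update for each completion's tokens; the paper adds a clipped PPO-style objective and a KL penalty on top, which this sketch omits.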
Andrej Karpathy: “I don't have too too much to add on top of this earlier post on V3 and I think it applies to R1 too (which is the more recent, thinking equivalent).

I will say that Deep Learning has a legendary ravenous appetite for compute, like no other algorithm that has ever been developed in AI. You may not always be utilizing it fully but I would never bet against compute as the upper bound for achievable intelligence in the long run. Not just for an individual final training run, but also for the entire innovation / experimentation engine that silently underlies all the algorithmic innovations.

Data has historically been seen as a separate category from compute, but even data is downstream of compute to a large extent - you can spend compute to create data. Tons of it. You've heard this called synthetic data generation, but less obviously, there is a very deep connection (equivalence even) between "synthetic data generation" and "reinforcement learning". In the trial-and-error learning process in RL, the "trial" is model generating (synthetic) data, which it then learns from based on the "error" (/reward). Conversely, when you generate synthetic data and then rank or filter it in any way, your filter is straight up equivalent to a 0-1 advantage function - congrats you're doing crappy RL.
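A toy illustration (my sketch, not from the post) of Karpathy's equivalence: keeping or discarding synthetic samples with a filter is the same training signal as giving every sample a 0-1 advantage, where zero-advantage samples contribute nothing to the gradient.

```python
def generate(n):
    # stand-in for a model sampling n candidate outputs
    return [f"sample_{i}" for i in range(n)]

def quality_filter(sample):
    # stand-in for any ranker/verifier; here: keep even-numbered samples
    return int(sample.split("_")[1]) % 2 == 0

candidates = generate(6)

# View 1: "synthetic data generation" -- filter, then train on survivors.
kept = [s for s in candidates if quality_filter(s)]

# View 2: "RL" -- every sample gets a 0-1 advantage; zero-advantage
# samples drop out of the update entirely.
weighted = [(s, 1 if quality_filter(s) else 0) for s in candidates]
trained_on = [s for s, a in weighted if a == 1]

assert kept == trained_on  # both views produce the same training set
```

The filter and the 0-1 advantage function are literally the same object; the only difference is vocabulary.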

Last thought. Not sure if this is obvious. There are two major types of learning, in both children and in deep learning. There is 1) imitation learning (watch and repeat, i.e. pretraining, supervised finetuning), and 2) trial-and-error learning (reinforcement learning). My favorite simple example is AlphaGo - 1) is learning by imitating expert players, 2) is reinforcement learning to win the game. Almost every single shocking result of deep learning, and the source of all *magic* is always 2. 2 is significantly significantly more powerful. 2 is what surprises you. 2 is when the paddle learns to hit the ball behind the blocks in Breakout. 2 is when AlphaGo beats even Lee Sedol. And 2 is the "aha moment" when the DeepSeek (or o1 etc.) discovers that it works well to re-evaluate your assumptions, backtrack, try something else, etc. It's the solving strategies you see this model use in its chain of thought. It's how it goes back and forth thinking to itself. These thoughts are *emergent* (!!!) and this is actually seriously incredible, impressive and new (as in publicly available and documented etc.). The model could never learn this with 1 (by imitation), because the cognition of the model and the cognition of the human labeler is different. The human would never know to correctly annotate these kinds of solving strategies and what they should even look like. They have to be discovered during reinforcement learning as empirically and statistically useful towards a final outcome.

(Last last thought/reference this time for real is that RL is powerful but RLHF is not. RLHF is not RL. I have a separate rant on that in an earlier tweet)”

https://x.com/karpathy/status/1883941452738355376
https://thehealthcareinsights.com/swedish-scientists-unveil-the-worlds-first-living-computer-built-from-human-brain-tissue/

Swedish scientists have created the world’s first ‘living computer’, and it is made out of human brain tissue. It is composed of 16 organoids, i.e. clumps of brain cells. Organoids are tiny, self-organized three-dimensional tissue cultures made from stem cells.

This is an alternative (out of the box) solution proposed to the statistical, algorithmic AI approach. And maybe, just maybe, it can one day surpass silicon-based technology with much better cost and energy efficiency. Harness a billion years of evolution to build a thinking machine, or harness its byproduct to imitate it? The choice is yours.

#biology #biologicalAI #ArtificialIntelligence #brain #tissue
MLOps in one pic
🔥 Become an artificial intelligence expert with Google!

Google is launching an updated “Machine Learning Crash Course” in Ukrainian. In 15 hours you can master AI for free and even learn to build your own “smart” programs.

What’s inside the course:
• 12 modules with hands-on topics
• 9 explainer videos
• Over 100 exercises and quizzes
• Badges for successful completion
• Real-world examples and interactive visualisations

The course is available online, so you can study from anywhere at a convenient time. You can also pick only the topics that interest you, since the modules are independent of each other.

👉 Start learning via the link.
Forwarded from Anastasiia;P
⚡️ Java meetup from Levi9: Java x AI, the future of your code

How does a Java engineer fit into the new reality where AI is changing the rules of development?

Using live examples, we will show how to integrate AI into production code, and talk about AI agents, tooling, and common mistakes.

Speakers:

Sebastian Daschner, Java Champion, author of “Architecting Modern Java EE Applications”, tech evangelist.
👉 AI Tools and Agents That Make You a More Efficient Developer (in English, with a live demo)

Polina Serhiienko, Senior Java Engineer at Levi9, team lead on her project.
👉 How to build AI features in Java: a case study, integration, and pitfalls

Of interest to Java developers, architects, team leads, and anyone who wants to keep a finger on the pulse of AI.

🗓 25 June, online
🔗 Registration is open: https://meetup.levi9.com.ua/java-event — see you there!