Henok | Neural Nets
We're a Dr. now 😎, for your information.


Some people don't even do a basic background check😂
Anyone here? I want to hear your takes on the conf.


Also, say hi👋 if you're around
πŸ’°Nobel prize,

This year's Nobel Prize is probably the first one I really followed, since there were some familiar names. Usually, I just don't go deep into the winners' work or even check their names.


I thought the Nobel Prize was the perfect award, but this year is the year I looked back and noticed some weird choices.



Also, this should be a good reminder that you should quit doing CRUD-based stuff etc😂. If your job doesn't require you to come up with some logic, either new or old, then you are probably not going to sneak into the Nobel Prize... jk btw
HelloooπŸ‘‹,

I was pissed off by the recent "AI" hype and didn't want to engage with it for the past few days. Anyway, what did I miss? Only some cool things, please
A recent paper from Apple builds benchmark datasets that may be free from distribution shift and data leakage, and thinks through failure test cases for reasoning.

And it says there's ZERO evidence that LLMs show any signs of logical reasoning; rather, they replicate reasoning steps learned from their training data through pattern recognition. 😳


Paper: https://arxiv.org/abs/2410.05229


This is a highly readable and straightforward paper: no math, and no LLM or ML prerequisites beyond the basics (a curious reader without them can still follow along). You can check this blog too
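The paper's core idea, as I understand it, is templating: take a grade-school math problem, turn the names and numbers into variables, and sample many variants. A model that truly reasons should score the same on every variant; the paper reports accuracy drops. Here's a minimal sketch of that idea — the template, names, and function are my own illustrative stand-ins, not the paper's actual code.

```python
import random

# Hypothetical GSM-Symbolic-style templating sketch: one problem becomes
# many surface-level variants with the same underlying reasoning step.
TEMPLATE = ("{name} has {a} apples and buys {b} more. "
            "How many apples does {name} have now?")

def make_variant(rng):
    # Sample a name and two numbers; the ground-truth answer tracks them.
    name = rng.choice(["Sophia", "Liam", "Ava", "Noah"])
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    question = TEMPLATE.format(name=name, a=a, b=b)
    return question, a + b

rng = random.Random(0)
for q, ans in (make_variant(rng) for _ in range(3)):
    print(q, "->", ans)
```

Evaluating a model across many such variants (instead of one fixed test set) is what lets the paper separate pattern matching from reasoning.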
Ev Fedorenko's Keynote at COLM

Some key points:
* In the human brain, the language system is distinct from the modules that activate when doing knowledge/cognition-related things like math. It can lead to activations in the others, but that's not necessary.

* LLMs are interesting to study as an in-silico model organism of language, as their representations somewhat resemble those of humans. It also looked like better models had better alignment, but it's an open question whether that continues.

* Baby LLMs


https://youtu.be/8xS7tjy92Ws?feature=shared
Some insights from this videoπŸ‘†
The next big thing with today's LLMs is applications and integrations. Here is one by Anthropic, called Computer Use.

It allows Claude to control your computer screen based on a prompt and take actions on your behalf. You can even let it control your apps and play games too, which sounds fun.

It works by taking screenshots that are continually sent back to the API in real time.

Then Claude can move your cursor, click, and type text.
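The screenshot → model → action loop described above can be sketched roughly like this. Everything here (`take_screenshot`, `query_model`, the action dicts) is an illustrative stand-in I made up, not Anthropic's actual Computer Use API.

```python
# Hedged sketch of a computer-use agent loop. Real implementations would
# capture actual screen pixels, call the model API, and drive the OS
# cursor/keyboard; these stubs just show the control flow.

def take_screenshot():
    # Stand-in: a real agent would capture the screen as image bytes.
    return b"<png bytes>"

def query_model(screenshot):
    # Stand-in for the API call; the real model returns tool-use actions
    # such as mouse moves, clicks, and typed text.
    return {"type": "type", "text": "hello"}

def execute(action, log):
    # Stand-in executor; returns True when the task is finished.
    log.append(action)
    return action.get("type") == "done"

def agent_loop(max_steps=5):
    log = []
    for _ in range(max_steps):
        shot = take_screenshot()
        action = query_model(shot)
        if execute(action, log):
            break
    return log

print(agent_loop(max_steps=2))
```

The key design point is that the model only ever sees static screenshots, so it re-observes the screen after every action rather than streaming video.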



https://x.com/rowancheung/status/1848743700702130474?t=qMcwmDJTT4UFkox4pOmx-g&s=09
Forwarded from Samson Endale πŸ‡ͺπŸ‡Ή
Focus on solving real problems. Don't waste time on hypothetical issues. If a problem needs validation, it's probably not important. Tackle the biggest challenges first.
- Samson Endale
Samson Endale πŸ‡ͺπŸ‡Ή
validation
Don't listen to Sam😂, our work (ML) is nothing without a validation set.
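For anyone new here, the joke lands because a held-out validation set is how we tell learning from memorization. A minimal sketch of the split (plain Python, my own toy helper):

```python
import random

def train_val_split(data, val_frac=0.2, seed=0):
    # Shuffle indices deterministically, then carve off a held-out slice.
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_val = int(len(data) * val_frac)
    val = [data[i] for i in idx[:n_val]]
    train = [data[i] for i in idx[n_val:]]
    return train, val

train, val = train_val_split(list(range(100)))
print(len(train), len(val))  # 80 20
```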
Hugging Face dataset🤗 stats, and guess what: neither is even my dataset. I worked on the Aya one, but for the other, I just made it instruction-style and nothing much. Still, it seems like people are using them. Has anyone come across these two datasets on HF?

Aya and Amharic QA
πŸ‘12πŸ‘3❀1πŸ”₯1
Aya Expanse from Cohere for AI beats most open-source models like Llama and Gemma at the same parameter count. This time it can also support speech and images.

You can try it here: https://huggingface.co/spaces/CohereForAI/aya_expanse