Forwarded from Techኢት (Alpha)
Techኢት Podcast S02E14 is out! 🎙
We had an amazing conversation with Dr. Wondwossen Mulugeta, VP for Institutional Development at Addis Ababa University, on the latest episode of the Techኢት Podcast. 🚀 We discussed his background, Natural Language Processing (especially in local languages), trending AI topics, and institutional matters at Addis Ababa University. Dr. Wondwossen also shared advice for young students in tech. Don't miss this fascinating discussion about Ethiopia's tech future! 🌍✨
🎧 Watch now: YouTube link
@Techinethio
#TechPodcast #AI #DigitalTransformation #NLP #EthiopianTech #Innovation #MachineLearning
YouTube
Techኢት Podcast S02 Ep14 [Guest: Dr. Wondwossen Mulugeta]
Dr. Wondwossen Mulugeta, Vice President for Institutional Development at Addis Ababa University, joins us on this episode of the Techኢት Podcast. With over 21 years in academic leadership, he's driving the university's digital transformation. As an Assistant…
🇪🇹 ChatGPT can now speak Ethiopian languages.
Obviously it's not the best, but check out when she speaks Amharic, Afan Oromo, and Tigrigna.
The accent is a bit funny, but I'm surprised 😁
If you have worked on vision-language models, you have probably faced visual perception issues, as they tend to focus on the wrong areas of an image.
In the paper Attention Prompting on Image for Large Vision-Language Models, the authors introduce Attention Prompting on Image (API), where applying a text-query-guided mask to the image enhances LVLMs' visual understanding. It's a very clever and simple approach.
The code, paper, demo, and Hugging Face Spaces can all be found on their website:
https://yu-rp.github.io/api-prompting/
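For intuition, here is a minimal sketch of the masking idea, assuming a CLIP-style auxiliary scorer (the paper's actual pipeline differs in details; see their code): score each image patch against the text query, then dim low-relevance regions before handing the image to the LVLM.

```python
# Sketch of the API idea: use CLIP (an assumption; the paper has its own
# setup) to score image patches against the query, then dim low-scoring
# regions so the LVLM attends to the relevant parts.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def api_mask(image: Image.Image, query: str, floor: float = 0.4) -> Image.Image:
    image = image.convert("RGB")
    inputs = processor(text=[query], images=image, return_tensors="pt")
    with torch.no_grad():
        vision = model.vision_model(pixel_values=inputs["pixel_values"])
        # Project patch tokens (CLS dropped) into CLIP's shared space.
        # The projection is trained on the pooled token, so this is only
        # a rough patch-relevance heuristic.
        patches = model.visual_projection(vision.last_hidden_state[:, 1:, :])
        text = model.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
        )
    patches = patches / patches.norm(dim=-1, keepdim=True)
    text = text / text.norm(dim=-1, keepdim=True)
    sim = (patches @ text.T).squeeze()              # one score per patch
    side = int(sim.numel() ** 0.5)                  # 7x7 grid for ViT-B/32
    heat = sim.reshape(side, side).numpy()
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    # Upsample the heatmap to image size and darken low-relevance pixels,
    # keeping a floor so nothing is fully blacked out.
    heat_img = Image.fromarray((heat * 255).astype(np.uint8)).resize(
        image.size, Image.BILINEAR
    )
    mask = floor + (1 - floor) * (np.asarray(heat_img, np.float32) / 255.0)
    out = np.asarray(image, np.float32) * mask[..., None]
    return Image.fromarray(out.astype(np.uint8))
```

The masked image is then passed to the LVLM together with the original question.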
Forwarded from Dagmawi Babi
Great paper choices @Dagmawi_Babi, but some, if not most, are not easy for beginners. E.g. the scaling-law paper (or any scaling laws for LLMs) is only useful for people who want to pretrain models, and even that is not enough tbh. If people just want a high-level understanding of how LLMs work, they should read blogs instead.
💰 Nobel Prize
This year's Nobel Prize is probably the first one I really followed, since there were some familiar names. Usually, I just don't go deep into the winners' work or even check their names.
I used to think the Nobel Prize was the perfect award, but this year is the year I looked back and saw some weird choices.
Also, this should be a good reminder that you should quit doing CRUD-based stuff 😂. If your job doesn't require you to come up with some logic, either new or old, then you are probably not going to sneak into a Nobel Prize... jk btw
Hellooo👋,
I was pissed off by the recent "AI" hype and didn't want to engage with it for the past few days. Anyway, what did I miss? Only the cool things, please.
A recent paper from Apple builds benchmark datasets that may be free from distribution shift and data leakage, and thinks through failure test cases for reasoning.
It says there's ZERO evidence that LLMs show any signs of logical reasoning; rather, they replicate reasoning steps learned from their training data through pattern recognition. 😳
Paper: https://arxiv.org/abs/2410.05229
This is a highly readable and straightforward paper: no math, and no LLM or ML prerequisites beyond the basics (a curious reader without them can also follow along). You can check this blog too.
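The core trick, as I understand it, is templating: turn a fixed GSM8K-style problem into a symbolic template and sample many variants, so a model can't lean on memorized surface forms. A rough sketch (my own toy template, not from the paper):

```python
# Toy GSM-Symbolic-style generator (my own example, not from the paper):
# names and numbers vary per sample, while the ground truth is computed
# symbolically, so surface-form memorization stops helping.
import random

TEMPLATE = (
    "{name} picked {x} apples on Monday and {y} apples on Tuesday. "
    "{name} then gave away {z} apples. How many apples are left?"
)

def sample_variant(rng: random.Random) -> tuple[str, int]:
    name = rng.choice(["Sophia", "Liam", "Abebe", "Sara"])
    x, y = rng.randint(5, 40), rng.randint(5, 40)
    z = rng.randint(1, x + y)       # keep the answer non-negative
    return TEMPLATE.format(name=name, x=x, y=y, z=z), x + y - z

rng = random.Random(0)
for _ in range(3):
    question, answer = sample_variant(rng)
    print(question, "->", answer)
```

If a model truly reasons, its accuracy should be stable across such variants; the paper reports large drops from exactly this kind of perturbation.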
Ev Fedorenko's Keynote at COLM
Some key points:
* In the human brain, the language system is distinct from the modules that activate when doing knowledge/cognition-related things like maths. It can trigger activations in the others, but that is not necessary.
* LLMs are interesting to study as an in-silico model organism of language, as their representations somewhat resemble those of humans. It also looked like better models had better alignment (see the sketch below), though it is an open question whether that trend continues.
* Baby LLMs
https://youtu.be/8xS7tjy92Ws?feature=shared
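On the alignment point: a common way to measure it (my assumed setup here, with synthetic stand-in data; the keynote doesn't spell out one method) is a linear encoding model, i.e. ridge regression from the LLM's hidden states to recorded brain responses, scored by held-out correlation.

```python
# Sketch of a model-to-brain alignment analysis (assumed setup, with
# synthetic stand-in data): fit a ridge regression from LLM hidden states
# to fMRI voxel responses and score prediction on held-out sentences.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, hidden_dim, n_voxels = 200, 768, 50

# Stand-ins for real data: per-sentence LLM states and voxel responses.
llm_states = rng.normal(size=(n_sentences, hidden_dim))
true_map = 0.1 * rng.normal(size=(hidden_dim, n_voxels))
brain = llm_states @ true_map + rng.normal(size=(n_sentences, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(llm_states, brain, random_state=0)
enc = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = enc.predict(X_te)

# Alignment score: mean per-voxel correlation on held-out sentences.
corr = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel correlation: {np.mean(corr):.3f}")
```

A "better aligned" model is then one whose features predict the language network's responses more accurately.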
Forwarded from Frectonz
YouTube
Nobel Prize in Physics (& Computer Science?) - Computerphile
The 2024 Nobel Prize in Physics is awarded to John Hopfield and Geoffrey Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks". This video features Juan Garrahan, Phil Moriarty and Mike Pound…