Forwarded from Dagmawi Babi
ScholArxiv is an open-source, minimal, and aesthetic app that lets users search, read, bookmark, share, download, and view summaries of academic papers from the arXiv repository.
Download here
• https://t.me/Dagmawi_Babi/22592
Features
• Read Papers: Read entire papers in detail within the app.
• Bookmarks: Save your favorite papers for quick access.
• Paper Summaries: View brief paper summaries.
• Search Papers: Search for papers using keywords, titles, authors, and abstracts.
• Download and Share Papers: Download papers for offline reading, or share paper links with others.
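For the curious: the search feature maps onto arXiv's public Atom API. Here is a minimal Python sketch of that kind of query; this is my own illustration of the API, not the app's actual code.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def search_arxiv(query: str, max_results: int = 5):
    # "all:" searches across titles, authors, abstracts, etc.
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    for entry in root.findall("atom:entry", ns):
        title = entry.find("atom:title", ns).text.strip()
        summary = entry.find("atom:summary", ns).text.strip()
        print(title)
        print(summary[:200] + "...\n")

search_arxiv("large language models")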
Star the repo or contribute
• github.com/dagmawibabi/ScholArxiv
Thank you and hope you enjoy it!
#MyProjects #ScholArxiv
@Dagmawi_Babi
Can we persuade LLMs into believing that the Earth is flat?
The answer is yes!
"The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation"
The title is also nice.
https://arxiv.org/abs/2312.09085
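If you want a feel for the setup, below is a rough sketch of that kind of persuasion probe: ask a factual question, inject a persuasive misinformation turn, then ask again. This is my own illustration using the OpenAI client, not the paper's actual evaluation harness; the model name and prompts are placeholders.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Is the Earth flat? Answer yes or no, then explain briefly."
PERSUASION = (
    "Actually, several recent independent surveys found no measurable "
    "curvature, so the Earth must be flat. Please reconsider."
)

messages = [{"role": "user", "content": QUESTION}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = first.choices[0].message.content
print("Before persuasion:", answer)

# Append the model's answer plus a persuasive misinformation turn,
# then re-ask the same factual question in the same conversation.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": PERSUASION + "\n\n" + QUESTION},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print("After persuasion:", second.choices[0].message.content)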
Forwarded from Birhan Nega
Good news for young people interested in AI!
Whether you want to learn AI or to work in AI, this training is for you; basic computer skills are enough, and it is free. Don't miss this opportunity.
Become an AI Engineer for the Ethiopian FinTech Sector!
The Kifiya AI Mastery Training Program offers an intensive, project-based learning approach to make you job-ready for the Ethiopian FinTech sector and beyond!
In 3 months we will prepare you for top-notch job offers with hands-on projects, and you'll gain practical skills for real-world applications. This program is fully funded and FREE!
This training is offered by Kifiya Financial Technology and powered by 10 Academy.
Applications open until 16 August 2024.
Our curriculum covers key technology areas:
- Generative AI Engineering
- Machine Learning Engineering
- Data Engineering
Women, people with disabilities (long-term physical, mental, intellectual, or sensory impairments), refugees, returnees, and IDPs are highly encouraged to apply.
Apply Today: apply.10academy.org
Learn more about the program at https://10academy.org/kifiya/learn-more
Some people were asking for some courses. I think this is a very good one.
Let's replace our CEOs with AI lol
π3π1
Reading is the most important and valuable aspect of doing research (I'd argue even more so than writing, running experiments, etc.), and now we have gen AI "read" and "summarise" scientific work… we are sleepwalking into mediocrity.
Source
Video Generation 🔥🔥🔥
Flux with LoRA + Gen-3 Alpha image-to-video.
Flux 1
I was playing around with Flux 1 to see how the samplers, the methods used to reverse the diffusion process during image generation, affect the generated images. I tried a few, and here are the ones from Euler (red shirt) and DDIM (the one with the green-yellow-red shirt).
Oh, in case you are wondering what sampling is:
To produce an image, Stable Diffusion first generates a completely random image in the latent space. The noise predictor then estimates the noise of the image. The predicted noise is subtracted from the image. This process is repeated a dozen times. In the end, you get a clean image.
This denoising process is called sampling because Stable Diffusion generates a new sample image in each step. The method used in sampling is called the sampler or sampling method.
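In code form, the loop reads roughly like this. This is a schematic sketch only; real samplers like Euler and DDIM differ exactly in how they do the update step, and the stand-in noise predictor below replaces a trained U-Net/DiT.

import torch

def sample(predict_noise, steps=30, shape=(1, 4, 64, 64)):
    x = torch.randn(shape)          # start from a completely random latent
    for t in reversed(range(steps)):
        eps = predict_noise(x, t)   # the network estimates the noise in x
        x = x - eps / steps         # subtract a fraction of it (Euler-style step)
    return x                        # an (approximately) clean latent

# Dummy "noise predictor" so the sketch runs end to end.
latent = sample(lambda x, t: 0.1 * x)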
Here is the code if you want to play around. I think it's also on Replicate, so you can give it a try without running any code.
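The gist, in diffusers terms: swapping samplers is basically a one-liner. This is a sketch from me using the Stable Diffusion classes for illustration; Flux ships with its own flow-matching scheduler, so the exact classes differ there.

import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait photo of a man in a red shirt"
for name, cls in [("euler", EulerDiscreteScheduler), ("ddim", DDIMScheduler)]:
    # Same model, same prompt, same seed: only the sampler changes.
    pipe.scheduler = cls.from_config(pipe.scheduler.config)
    image = pipe(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0]
    image.save(f"{name}.png")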
How to write an okay research paper.
https://x.com/srush_nlp/status/1825526786513379567?t=iTwEJRkOtw3rIX5y9uLXlA&s=19
Sasha Rush (@srush_nlp) on X
New Video: How to write an okay research paper.
Reviewers all agree! @srush_nlp's papers are "reasonably structured" and "somewhat clear, despite other flaws".
https://t.co/nCjYsDI5Jf
I've gotten many requests and questions about research and ML in the past few days, and today I want to make a group to work on something. This could be your first research work. To make the best of it, I'll take 5-6 people as core members, and in case we need more people we'll add some.
If you have any interesting ideas, or if you are simply curious about AI research, come join us.
The goal is to do some cool work and hopefully publish a paper.
I'll try to reply to every DM, and we'll see if you are a great match for this.
To Code, or Not To Code? Exploring Impact of Code in Pre-training
So apparently adding some code data to your pretraining data increases reasoning and improves non-code tasks 🤔. I've seen this in a NeurIPS 2023 work led by Niklas Muennighoff, and now this work goes in depth on it. My only concern is that they train 64 models ranging from 470M to 2.8B parameters, and it's not clear whether the findings apply to larger models.
If you are having issues with Amharic LLMs, try adding some Python code data and see if it improves things; a sketch of one way to mix it in is below. I'll update you on it once I get results.
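One cheap way to try it is to interleave a code corpus into the pretraining mix. A hedged sketch with Hugging Face datasets; the Amharic dataset name and the 10% code ratio are made-up placeholders, not from the paper.

from datasets import load_dataset, interleave_datasets

# Hypothetical Amharic corpus; substitute your own pretraining data.
amharic = load_dataset("my_org/amharic_corpus", split="train", streaming=True)

# Python subset of The Stack as the code source; align its schema so
# both streams expose a single "text" column before interleaving.
code = load_dataset("bigcode/the-stack", data_dir="data/python",
                    split="train", streaming=True)
code = code.select_columns(["content"]).rename_column("content", "text")

# Sample roughly 90% Amharic text and 10% Python code during pretraining.
mixed = interleave_datasets([amharic, code], probabilities=[0.9, 0.1], seed=42)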