Aya Expanse from Cohere For AI beats most open-source models like Llama and Gemma at comparable parameter counts. This release also supports speech and images.
You can try it here: https://huggingface.co/spaces/CohereForAI/aya_expanse
Nonsense. What does this even mean? Are people just coining terms so they can claim to be the father of them? Also, the "decentralized" part is the funny one: what was centralized in the first place that needed to be decentralized?
I see this as: "Hey, our company is called Singularity, where we do decentralized AGI." Translation: if the robots take over, at least it'll be a team effort.
Forwarded from Dagmawi Babi
On November 15, we are going to have an incredible conversation with Guillermo Rauch, the founder and CEO of Vercel, the platform most of us love and use!
This is going to be a first for Ethiopia and our tech community; it's going to be fascinating and influential. I am excited for this, and I hope you are too!
Originally from Lanús, Buenos Aires, Argentina, he was involved in creating numerous influential open-source projects, and his first company, Cloudup, was acquired by Automattic. He later created Next.js, the most popular React framework, and then Vercel, the cloud infrastructure platform focused on DX.
Among many other things, he also wrote one of the first books on Node.js and authored Mongoose, Socket.IO, Hyper, and much more.
This is a long conversation about his early childhood, family and friends, relationships and beliefs, tech and hobbies, thoughts on sensitive and philosophical topics, and much more. We're all going to learn a lot from him.
The session will be a video podcast, and it will be recorded and uploaded. Can't wait! Until then, think of questions you'd like to ask him!
#GuillermoRauch #Vercel
@DagmawiBabiPodcasts
ChatGPT Search
I just tried ChatGPT search, and it's actually nice to see good summaries for search results. I've tried it a few times, including for CodeNight, and it actually gets the Telegram and GitHub links right, and the member count too.
I think Google search is going to become obsolete in the long run because of this.
Darwin Was a Slacker and You Should Be Too
https://nautil.us/darwin-was-a-slacker-and-you-should-be-too-236532/?utm_source=tw-naut&utm_medium=organic-social
People with access to A100s really don't know how good they have it. Meanwhile, I'm out here rationing compute like it's wartime.
π6
TransformerRanker is a library that quickly finds the best-suited language model for a given NLP classification task.
All you need to do is select a dataset and a list of pre-trained language models (LMs) from the Hugging Face Hub. TransformerRanker will quickly estimate which of these LMs will perform best on the given task!
https://github.com/flairNLP/transformer-ranker
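For context, here's a minimal usage sketch based on the project's README. The dataset and model choices are illustrative, and exact parameter names may differ between versions, so treat this as a rough guide rather than the definitive API:

```python
# Sketch of ranking candidate LMs for a classification task with
# transformer-ranker (pip install transformer-ranker); details may
# vary across library versions.
from datasets import load_dataset
from transformer_ranker import TransformerRanker

# Any classification dataset from the Hugging Face Hub
dataset = load_dataset("trec")

# Candidate language models to rank (illustrative choices)
language_models = [
    "bert-base-uncased",
    "roberta-base",
    "microsoft/deberta-v3-base",
]

# Downsample for a quick estimate, then rank the candidates
ranker = TransformerRanker(dataset, dataset_downsample=0.2)
results = ranker.run(language_models, batch_size=64)
print(results)  # models sorted by estimated transferability
```

The key idea is that no model is fine-tuned; the library estimates transferability from frozen embeddings, which is why the ranking is fast.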
Check out our paper that uses proverbs to evaluate LLMs. We saw a few things that could be studied further. Even though we weren't the first to find it, the order of answer choices matters, among other things.
https://arxiv.org/abs/2411.05049
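To illustrate why choice order matters, here is a small self-contained sketch (the question and "model" are made up for illustration): a purely position-biased model scored over every ordering of the choices is right only when the correct answer happens to come first.

```python
# Sketch: measuring answer-order sensitivity in multiple-choice eval.
# The "model" is a toy stand-in; in practice you would prompt an LLM
# with each permuted choice list and check whether its pick tracks
# content or position.
from itertools import permutations

choices = [
    "Act early to avoid bigger problems",   # correct reading of the proverb
    "Sewing is a valuable skill",
    "Nine is a lucky number",
]
correct = choices[0]

def position_biased_model(opts):
    """Toy model that always picks the first option (pure position bias)."""
    return opts[0]

orders = list(permutations(choices))
hits = sum(1 for opts in orders if position_biased_model(list(opts)) == correct)

# The correct answer is listed first in 2 of the 6 orderings,
# so a position-biased model scores ~0.33 instead of 1.0.
accuracy = hits / len(orders)
print(f"accuracy over all orderings: {accuracy:.2f}")
```

Averaging accuracy over permuted choice orders like this separates genuine understanding from position bias.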
Forwarded from Dagmawi Babi
• YouTube • Spotify • Apple Podcasts • Pocket Casts • Goodpods • Castbox • RSS Feed • TerakiApp •
Enjoy Everywhere!
I'm thinking of renaming this channel so that I can write about anything, not limited to AI only.
Maybe I'll add my name to it. What do you guys think?
Henok | Neural Nets
Forwarded from Chapi Dev Talks
Story of My Recent Days
I was working with very large CSV data. I wanted to merge four very large CSV files on one column, and pandas couldn't handle it, so I decided to change my approach and process the files separately.
The thing is, there are two tasks that have to be done:
1. Process the data and add it to the DB, based on all the files [CPU-bound]
2. Download each file, upload it to S3, and update the column with the S3 link [IO-bound]
The first task is really fast since it all depends on the CPU, so I already got a good speedup there, but the second task was taking more than a day to finish. Here's the bummer: the task has to run every day, and it was taking more than a day to complete.
But I came up with a solution: use multiple machines and split out the IO-bound work, like downloading and uploading files.
When I say downloading files, I'm talking about millions of files. Don't ask me why; the bottom line is I have to download them and upload them to S3.
Anyway, I split the file processing across multiple machines, and I'm using asyncio to its peak while being careful not to get blocked by the websites.
Now it's going to take half the time to process the files, and I'm happy with that.
Moral of the story: if you are dealing with an IO-bound task, maybe try multiple machines to handle it.
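The IO-bound half of that setup can be sketched roughly like this. The URL shape, bucket name, and concurrency limit are made up, and asyncio.sleep stands in for real HTTP/S3 calls (you'd swap in something like aiohttp and boto3 in practice):

```python
# Sketch: many downloads/uploads running concurrently with asyncio,
# capped by a semaphore so the source sites aren't hammered.
# Each machine would run this over its own shard of the URL list.
import asyncio

MAX_CONCURRENT = 100  # tune per machine and per site rate limits

async def process_file(url: str, sem: asyncio.Semaphore) -> str:
    async with sem:
        await asyncio.sleep(0.01)  # download (placeholder for aiohttp)
        await asyncio.sleep(0.01)  # upload to S3 (placeholder for boto3)
        return f"s3://bucket/{url.rsplit('/', 1)[-1]}"

async def main(urls):
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    # gather preserves input order, so results line up with the DB rows
    return await asyncio.gather(*(process_file(u, sem) for u in urls))

urls = [f"https://example.com/file_{i}.csv" for i in range(500)]
s3_links = asyncio.run(main(urls))
print(len(s3_links), s3_links[0])
```

Because each task spends almost all its time waiting on the network, hundreds of them can overlap on one core; sharding the URL list across machines then multiplies that further.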
I have a couple more stories to share, but I'm too lazy to write them down.