Henok | Neural Nets
Some people were asking for courses. I think this is a very good one.
Let's replace our CEOs with AI lol
๐Ÿ˜3๐Ÿ‘1
Reading is the most important and valuable aspect of doing research (I'd argue even more than writing, running experiments, etc.), and now we have gen AI "read" and "summarise" scientific work… we are sleepwalking into mediocrity.

Source
๐Ÿ‘3โšก1๐Ÿ˜1๐Ÿคฏ1
Video Generation 🔥🔥🔥

Flux with LoRA + Gen-3 Alpha image-to-video.
There's always something in winning at the Olympics.
โค9โšก3๐Ÿ”ฅ2
Flux 1

I was playing around with Flux 1 to see how samplers, the methods used to reverse the diffusion process during image generation, affect the generated images. I tried a few, and here are the ones from Euler (red shirt) and DDIM (the one with the green-yellow-red shirt).

Oh, in case you are wondering what sampling is:
To produce an image, Stable Diffusion first generates a completely random image in the latent space. The noise predictor then estimates the noise of the image. The predicted noise is subtracted from the image. This process is repeated a dozen times. In the end, you get a clean image.

This denoising process is called sampling because Stable Diffusion generates a new sample image in each step. The method used in sampling is called the sampler or sampling method.
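The loop described above can be sketched in a few lines of NumPy. This is a toy illustration, not a real diffusion model: the `predict_noise` function is a stand-in for the trained noise predictor, the "latent" is just a 4-number vector, and the Euler-style update (subtract a fraction of the predicted noise each step) is the simplest possible version of a sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" latent the model has implicitly learned.
target = np.array([1.0, -2.0, 0.5, 3.0])

def predict_noise(latent):
    # Stand-in for the trained noise predictor: it "knows" the
    # difference between the current latent and the clean target.
    return latent - target

# Step 1: start from a completely random latent.
latent = rng.normal(size=4)

# Steps 2-3, repeated a dozen times: predict the noise, then
# subtract a fraction of it (an Euler-style update rule).
num_steps = 12
step_size = 0.5
for _ in range(num_steps):
    noise = predict_noise(latent)
    latent = latent - step_size * noise

# After the loop the latent is close to the clean target.
print(np.abs(latent - target).max())
```

Different samplers (Euler, DDIM, and so on) differ in exactly how this per-step update is computed and scaled, which is why they produce visibly different images from the same prompt.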


Here is the code if you want to play around. I think it's also on Replicate, so you can give it a try without running any code.
๐Ÿ”ฅ3โค2๐Ÿ‘1
One step closer to the dream. I just touched a Cybertruck.
I guess I can finally call myself a researcher now lol! It's always been a dream to present my work.
๐Ÿ”ฅ25๐ŸŽ‰7โค5
I got many requests and questions about research and ML in the past few days, and today I want to make a group to work on something. This could probably be your first research work. To make the best of it, I'll take 5-6 people as core members, and in case we need more people we'll add some.

If you have any interesting ideas, or if you are curious about AI research, come join us.

The goal is to make a cool piece of work and hopefully publish a paper.

I'll try to reply to every DM, and we'll see if you are a great match for this. ✌️
โค9๐Ÿ”ฅ8
To Code, or Not To Code? Exploring Impact of Code in Pre-training

So apparently adding some code data to your pretraining data increases reasoning ability and improves performance on non-code tasks 🤔. I've seen this in a NeurIPS 2023 work led by Niklas Muennighoff, and now this work goes in depth into it. My only concern is that they train 64 models ranging from 470M to 2.8B parameters, so it's not clear whether this applies to models with more parameters.

If you are having issues with Amharic LLMs, try adding some Python code data and see if it improves. I'll update you on it once I have the results.
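If you want to try that, one simple way is to mix a fixed fraction of code documents into the text stream before shuffling. This is a hedged toy sketch: the `mix_pretraining_data` helper and the 25% fraction are my own illustrative choices, not the paper's recommended recipe.

```python
import random

def mix_pretraining_data(text_docs, code_docs, code_fraction=0.25, seed=0):
    """Build a shuffled pretraining stream where roughly
    `code_fraction` of the documents are code.

    The fraction and helper name are illustrative assumptions,
    not taken from the paper."""
    rng = random.Random(seed)
    # How many code docs are needed so that code makes up
    # `code_fraction` of the final mix.
    n_code = int(len(text_docs) * code_fraction / (1 - code_fraction))
    # Cycle through the code corpus if it is smaller than needed.
    sampled_code = [code_docs[i % len(code_docs)] for i in range(n_code)]
    mixed = list(text_docs) + sampled_code
    rng.shuffle(mixed)
    return mixed

# Toy usage: 75 text docs + cycled code docs -> ~25% code overall.
text = [f"text_{i}" for i in range(75)]
code = [f"code_{i}" for i in range(10)]
stream = mix_pretraining_data(text, code, code_fraction=0.25)
```

In a real run you would of course deduplicate and tokenize the documents, but the ratio bookkeeping is the part worth getting right first.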
โคโ€๐Ÿ”ฅ7๐Ÿ˜1
buildspace is closing 😞
Programming is changing so fast... I'm trying VS Code Cursor + Sonnet 3.5 instead of GitHub Copilot again and I think it's now a net win. Just empirically, over the last few days most of my "programming" is now writing English (prompting and then reviewing and editing the generated diffs), and doing a bit of "half-coding" where you write the first chunk of the code you'd like, maybe comment it a bit so the LLM knows what the plan is, and then tab tab tab through completions. Sometimes you get a 100-line diff to your code that nails it, which could have taken 10+ minutes before.

I still don't think I got sufficiently used to all the features. It's a bit like learning to code all over again but I basically can't imagine going back to "unassisted" coding at this point, which was the only possibility just ~3 years ago.



Source: Karpathy
๐Ÿ‘15๐Ÿคฎ3โค2
This is why @baydis and @beka_cru and many of you couldn't make it to YC lol
๐Ÿคฃ7๐Ÿ˜4
Forwarded from Frectonz
My nixpkgs PR got merged after 2 weeks. I packaged my Ethiopian calendar TUI app, mekuteriya, for Nix.

nix shell nixpkgs#mekuteriya


I'm officially a NixOS package maintainer now.

https://github.com/NixOS/nixpkgs/pull/333690

🎉🎉🎉
Alright alright, what's good people 😁
๐Ÿ˜ฑ5๐Ÿ˜4โšก1
Guess what, he loves Ethiopian food too. I'll get lunch with him some day.
โšก11โค3