Chapi Dev Talks
My name is Chapi and I am a Developer.

I post my thoughts about tech here.

Message our assistant to give us feedback: @sophiservebot

Join https://t.me/chapidevtalks_group

Urgent? Contact me: @chapimenge (don't just say hi or ask meta questions)
Really loved the new AWS look and the discount for DynamoDB.

Amazon DynamoDB reduces prices for on-demand throughput by 50% and global tables by up to 67%.



We use DynamoDB a lot.
๐Ÿ‘10
You can only know what makes your program slow after first getting the program to give correct results, then running it to see if the correct program is slow. When found to be slow, profiling can show what parts of the program are consuming most of the time. A comprehensive but quick-to-run test suite can then ensure that future optimizations don't change the correctness of your program. In short:

1. Get it right.
2. Test it's right.
3. Profile if slow.
4. Optimise.
5. Repeat from 2.

Source
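For the profiling step, a minimal sketch using Python's built-in cProfile (the workload function is just a placeholder):

import cProfile
import pstats

def work():
    # stand-in for the real workload
    return sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# show the 10 functions where the most time is spent
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)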
๐Ÿ‘31๐Ÿค”1
IMG_20241118_172629_090.jpg
10.8 KB
Story of My Recent Days

I was working with some very large CSV data: I wanted to merge 4 very large CSV files based on one column, and pandas wasn't able to handle it, so I decided to change my approach and process the files separately.

The thing is, there are 2 tasks that have to be done on it:

1. Process it and add it to the DB based on all the files [CPU bound]
2. Download each file, upload it to S3, and update the column with the S3 link [IO bound]

The first task is really fast since it all depends on the CPU, so I already get a good speedup there, but the second task takes more than one day to finish. Here is the bummer: the task has to run every day 😂 and it takes more than a day to complete.

But I came up with the solution of using multiple machines and splitting out the IO-bound work, like downloading and uploading files.

When I say downloading files, I am talking about millions of files. Don't ask me why; the bottom line is I have to download them and upload them to S3.

Anyway, I split the file processing across multiple machines, and I am using asyncio to its peak while being careful not to get blocked by the websites either. The pattern looks roughly like the sketch below.
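A minimal sketch of that download-then-upload pattern, assuming aiohttp for the downloads and boto3 for S3 (the bucket name, key scheme, and concurrency limit are made up for illustration):

import asyncio
import io

import aiohttp
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # hypothetical bucket name

async def transfer(session, sem, url, key):
    # cap concurrency so the source sites don't block us
    async with sem:
        async with session.get(url) as resp:
            resp.raise_for_status()
            data = await resp.read()
    # boto3 is blocking, so run the upload in a worker thread
    await asyncio.to_thread(s3.upload_fileobj, io.BytesIO(data), BUCKET, key)
    return f"s3://{BUCKET}/{key}"

async def main(urls):
    sem = asyncio.Semaphore(50)  # made-up limit; tune per site
    async with aiohttp.ClientSession() as session:
        tasks = [transfer(session, sem, u, f"files/{i}") for i, u in enumerate(urls)]
        return await asyncio.gather(*tasks)

# s3_links = asyncio.run(main(list_of_urls))

Each machine gets its own slice of the URL list, so the semaphore only has to protect one machine's share of the load.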

Now it's going to cut the processing time down by half, and I am happy with that.

Moral of the story: if you are dealing with an IO-bound task, maybe try multiple machines to handle it.

I have got a couple more stories to share but I'm too lazy to write them down 😂.
1๐Ÿ‘9โค1๐Ÿ”ฅ1
Forwarded from Pavel Durov (Paul Du Rove)
🙃 This is the biggest update in the history of Mini Apps: full-screen mode, home screen shortcuts, geolocation, motion tracking, media sharing, document creation, gift sending, subscription tiers, emoji statuses, and much more 😎

🕺 To explore some of these new features, update Telegram and check out these early examples:

Tiny Verse opens in full-screen, which looks great on desktops and tablets. Make sure to swipe and zoom to admire the 3D effects 😊

Playdeck's task section now features an "Add to Home Screen" option, and a flying Yeti that moves based on your device's orientation 😙

Major has added a custom loading screen and the new Major Maze mini-game, where you can guide a rolling ball by tilting your phone 😎

๐Ÿ˜ This is just the beginning โ€” all discovered within the first day of Mini Apps 2.0's launch! ๐Ÿ˜ฎ
๐Ÿ‘3
Pavel Durov
Video
This is insane 🤯
Chapi Dev Talks
IMG_20241118_172629_090.jpg
To shed some light on how big the CSV files are.

🤯
🤯19👍1
Chapi Dev Talks
To shed some light on how big the CSV files are. 🤯
Based on @frectonz's recommendation to change it to SQLite, I might get a huge performance win on the processing part of the big CSV; file size doesn't matter for us, but the speed is a huge gain.

Plus, there was a lot of duplication in the CSV rows, and we didn't notice that until today.

After a bit of experimenting, even though the SQLite file size increased, I think the query time is much faster than the normal looping, so I am changing the approach a bit.

So the idea is to merge and process 4 CSV files. Converting 3 of them to SQLite, then looping through the remaining CSV and looking up rows from the other 3, might be the best approach I have at the moment.
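For the conversion step, a minimal sketch of loading one CSV into SQLite (file names and the column layout are made up; it assumes the join column "id" comes first):

import csv
import sqlite3

conn = sqlite3.connect("file2.sqlite")  # hypothetical file name
conn.execute("CREATE TABLE rows (id TEXT PRIMARY KEY, payload TEXT)")

with open("file2.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    # INSERT OR IGNORE also drops the duplicate rows mentioned above
    conn.executemany(
        "INSERT OR IGNORE INTO rows VALUES (?, ?)",
        ((r[0], ",".join(r[1:])) for r in reader),
    )
conn.commit()

The PRIMARY KEY gives every lookup an index instead of a full scan, which is where the win over plain looping comes from.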

Just like this:

for i in big_csv:
    # one lookup per converted SQLite file; parameterized queries
    # avoid SQL injection and let SQLite reuse the statement
    result_1 = cursor_1.execute("SELECT * FROM rows WHERE id = ?", (i,)).fetchone()
    result_2 = cursor_2.execute("SELECT * FROM rows WHERE id = ?", (i,)).fetchone()
    result_3 = cursor_3.execute("SELECT * FROM rows WHERE id = ?", (i,)).fetchone()
    # do something with the results


Though this doesn't help much with the file-downloading part, I think it's a good start for the processing part.

Anyway, thanks @frectonz for the recommendation. It's super cool to have such a community.
⚡21👍1
Forwarded from Hacker News
Show HN: Embed an SQLite database in your PostgreSQL table (Score: 150+ in 11 hours)

Link: https://readhacker.news/s/6icWC
Comments: https://readhacker.news/c/6icWC

pglite-fusion is a PostgreSQL extension that allows you to embed SQLite databases into your PostgreSQL tables by enabling the creation of columns with the `SQLITE` type. This means every row in the table can have an embedded SQLite database.
In addition to the PostgreSQL `SQLITE` type, pglite-fusion provides the `query_sqlite` function for querying SQLite databases and the `execute_sqlite` function for updating them. Additional functions are listed in the project's README.
The pglite-fusion extension is written in Rust using the pgrx framework [1].
----
Implementation Details
The PostgreSQL `SQLITE` type is stored as a CBOR-encoded `Vec<u8>`. When a query is made, this `Vec<u8>` is written to a random file in the `/tmp` directory. SQLite then loads the file, performs the query, and returns the result as a table containing a single row with an array of JSON-encoded values.
The `execute_sqlite` function follows a similar process. However, instead of returning query results, it returns the contents of the SQLite file (stored in `/tmp`) as a new `SQLITE` instance.
[1] https://github.com/pgcentralfoundation/pgrx
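A purely hypothetical usage sketch of the API described above, driven from Python via psycopg2; `empty_sqlite()` and the argument order of both functions are guesses from the description, so treat the project's README as authoritative:

import psycopg2

conn = psycopg2.connect(dbname="mydb")  # placeholder connection settings
cur = conn.cursor()

# one embedded SQLite database per row, seeded with a todos table
# (empty_sqlite() and the call shapes below are assumptions)
cur.execute("""
    CREATE TABLE people (
        name TEXT NOT NULL,
        database SQLITE DEFAULT execute_sqlite(
            empty_sqlite(), 'CREATE TABLE todos (task TEXT)'
        )
    )
""")
cur.execute("INSERT INTO people (name) VALUES ('frectonz')")

# query the embedded database stored in one row
cur.execute("SELECT query_sqlite(database, 'SELECT task FROM todos') FROM people")
print(cur.fetchall())
conn.commit()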
⚡20🔥6❤2👍1
This is not related to tech at all, but I urge you to find a couple of minutes and read the story below.

https://www-bbc-com.cdn.ampproject.org/c/s/www.bbc.com/amharic/articles/c2e714vekk1o.amp

I really am out of words at this point. May God help us to follow the right path.

Our kids might live in this country, at least for most people. Do we really want to give this to our kids?

Hopefully our generation will bring out the good in humanity.

Thank you to all of you who read this. 🙏
😭24
Me going out to the office from home (aka around semit fiyel bet)

Me arrived at the office (aka 22)

Me going up 4 floors (fyi, no lift)

Me realising, in front of the door, that I forgot my office key 😂

Me now drinking coffee even though I have to work, trying to decide what to do 😂😂 (aka anqi coffee)
๐Ÿ˜41๐Ÿ‘4๐Ÿ”ฅ1