Daily
hard one, wasn't able to solve it on my own. The solution is not that intuitive, so if you don't have much time - skip it
https://leetcode.com/problems/trapping-rain-water-ii/submissions/1790207518/
#daily
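For reference, a sketch of the standard approach (min-heap flood from the border, Dijkstra-style). This is my own reconstruction, not the linked submission:

```python
import heapq

def trapRainWater(heightMap):
    # Water above any cell is bounded by the lowest wall on some path to
    # the border, so flood inward from the border, always expanding from
    # the lowest boundary cell currently on the heap.
    m, n = len(heightMap), len(heightMap[0])
    heap, seen = [], [[False] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if i in (0, m - 1) or j in (0, n - 1):
                heapq.heappush(heap, (heightMap[i][j], i, j))
                seen[i][j] = True
    water = 0
    while heap:
        h, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = i + di, j + dj
            if 0 <= x < m and 0 <= y < n and not seen[x][y]:
                seen[x][y] = True
                # Trapped water here is limited by the current boundary level h.
                water += max(0, h - heightMap[x][y])
                heapq.heappush(heap, (max(h, heightMap[x][y]), x, y))
    return water
```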
Daily
The most classic one; most probably all of you have solved it many times, as it is problem 11:
https://leetcode.com/problems/container-with-most-water/?envType=daily-question&envId=2025-10-04
#daily
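If you want a refresher, the two-pointer sketch:

```python
def maxArea(height):
    # Move the shorter side inward: the area is limited by the shorter
    # line, so keeping it while narrowing the gap can never help.
    l, r = 0, len(height) - 1
    best = 0
    while l < r:
        best = max(best, (r - l) * min(height[l], height[r]))
        if height[l] < height[r]:
            l += 1
        else:
            r -= 1
    return best
```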
https://www.youtube.com/watch?v=21EYKqUsPfg&t=2411s
Highly recommended, I like this guy (the older one), what he says makes a lot of sense
YouTube
Richard Sutton - Father of RL thinks LLMs are a dead end
Richard Sutton is the father of reinforcement learning, winner of the 2024 Turing Award, and author of The Bitter Lesson. And he thinks LLMs are a dead end. After interviewing him, my steel man of Richard's position is this: LLMs aren't capable of learning…
Daily
Personally enjoyed it, even though it wasn't 10 mins for me.
Hint to understand the description: you need to return ALL cells from which rain water will flow to BOTH the Atlantic and the Pacific (not the paths, just the fact that it will pour into both).
This means the left column and the top row will always pour into the Pacific, just as the right column and the bottom row will always pour into the Atlantic.
https://leetcode.com/problems/pacific-atlantic-water-flow/description/?envType=daily-question&envId=2025-10-05
#daily
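A sketch of the usual trick: flood uphill from each ocean's border and intersect the two reachable sets (my reconstruction, not an official solution):

```python
def pacificAtlantic(heights):
    # Instead of asking where each cell drains, ask which cells each
    # ocean can "climb" to from its border, then intersect.
    m, n = len(heights), len(heights[0])

    def flood(starts):
        reach, stack = set(starts), list(starts)
        while stack:
            i, j = stack.pop()
            for x, y in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
                if 0 <= x < m and 0 <= y < n and (x, y) not in reach \
                        and heights[x][y] >= heights[i][j]:
                    reach.add((x, y))
                    stack.append((x, y))
        return reach

    pacific = flood([(i, 0) for i in range(m)] + [(0, j) for j in range(n)])
    atlantic = flood([(i, n-1) for i in range(m)] + [(m-1, j) for j in range(n)])
    return [list(c) for c in pacific & atlantic]
```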
Daily
Don't be scared, it is not that hard, just a very stupidly worded question. It basically asks you to find the path from (0,0) to (n-1,m-1) that minimizes the maximum value along the path.
https://leetcode.com/problems/swim-in-rising-water/?envType=daily-question&envId=2025-10-06
#daily
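With that reading, it is a Dijkstra variant where the path "cost" is the maximum cell value seen so far. A sketch (the grid in this problem is n x n):

```python
import heapq

def swimInWater(grid):
    # Pop the frontier cell with the smallest "water level" needed so far;
    # the first time we pop the target, that level is the answer.
    n = len(grid)
    heap, seen = [(grid[0][0], 0, 0)], {(0, 0)}
    while heap:
        t, i, j = heapq.heappop(heap)
        if (i, j) == (n - 1, n - 1):
            return t
        for x, y in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)):
            if 0 <= x < n and 0 <= y < n and (x, y) not in seen:
                seen.add((x, y))
                heapq.heappush(heap, (max(t, grid[x][y]), x, y))
```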
120 czk, worth every single fucking crown
The only gun I can afford with MS salary(
I bought it only cause Vlad paid me for system design lectures
Daily
ANNOYING SHIT.
When you see a 33% acceptance rate on a mid problem, it will always be like this, but what I hate the most is, again, the fucking misalignment in complexities (explanation in a comment along with the solution).
There is also an understandable rephrasing of the problem in a comment. The description is FUCKING SHIT, I don't know what motherfuckers are sitting at leetcode creating these descriptions. Literally every single person complained about the description.
#daily
Yes, a reminder for guys like me: Python does have a data structure that has
- O(log n) for search (including index search)
- O(log n) for insert
- O(log n) for deletion by index
This DS is called SortedList (from the third-party sortedcontainers package).
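A quick usage sketch (pip install sortedcontainers):

```python
from sortedcontainers import SortedList

sl = SortedList([5, 1, 3])   # kept sorted: [1, 3, 5]
sl.add(2)                    # O(log n) insert      -> [1, 2, 3, 5]
print(sl.index(3))           # O(log n) value search -> 2
print(sl[1])                 # O(log n) index lookup -> 2
sl.pop(0)                    # O(log n) delete by index -> [2, 3, 5]
print(sl.bisect_left(3))     # O(log n) insertion point for a value -> 1
```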
https://www.hellointerview.com/learn/system-design/problem-breakdowns/bitly
omg, check it. I have to create my own version, because in general people propose SUCH BAD solutions for the core problem here: "generate distributed unique values"
Hellointerview
Design a URL Shortener Like Bit.ly | Hello Interview System Design in a Hurry
System design answer key for designing a URL shortener like Bit.ly, built by FAANG managers and staff engineers.
I will explain: there are 3 solutions proposed, and among them there are 0 good ones.
First: just bad, let's skip it.
Second: hash function => basically take the hash of the input URL and base62-encode it: base62(hash(input_url))[:8], i.e. take the first 8 characters of the encoded value. Of course there will be collisions, and the proposal is: just retry until you succeed (add a UNIQUE constraint).
Like wtf. How many database retries will there be once we approach our non-functional requirement of 1B urls?????
ALSO THE GUY THINKS THIS IS THE SAME AS THE SNOWFLAKE ID GENERATION PATTERN. I know he is staff at Meta, but man, snowflake and this hashing are TWO COMPLETELY DIFFERENT approaches.
Third: a unique counter via Redis. They proposed Redis because it is fast. It will not be fast, because you will do an fsync for each operation, as you don't want to lose data, otherwise "uniqueness" is not guaranteed.
Second thing: they propose default Redis replication, which is async replication, which again introduces data loss.
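For context, roughly what that second approach looks like. This is my sketch; the exact hash function and alphabet order in the article may differ:

```python
import hashlib
import string

# Assumed base62 alphabet order; the article may use a different one.
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def base62(n: int) -> str:
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def short_code(url: str) -> str:
    # Hash the URL, base62-encode, truncate to 8 chars. The truncation is
    # exactly why collisions appear and DB retries become necessary.
    h = int.from_bytes(hashlib.sha256(url.encode()).digest(), "big")
    return base62(h)[:8]
```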
I mean, why is it considered "THE BEST" resource? I honestly don't know. Some of the system designs there are great, I personally learnt a lot when I was prepping, but this one is very bad.
What do I consider a good approach?
1) Snowflake ID: stateless, infinitely scalable, cheap, applied in real production (sketched below).
2) Shards with prefixes: build multiple counters that each count from 0 to N, add 1-2 digits for a "shard id", and round-robin across them. This ensures both uniqueness and scalability.
These are the 2 "BEST" choices here.
You could also precompute all the ids and then just reserve them with optimistic concurrency (+ apply sharding). But that would be so-so.
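A minimal sketch of option 1, assuming the classic Twitter layout (41-bit ms timestamp, 10-bit machine id, 12-bit sequence); the field widths and epoch here are the standard ones, not anything from the article:

```python
import time
import threading

class SnowflakeId:
    # Each machine gets a unique machine_id, so ids are unique across the
    # fleet with zero coordination at generation time.
    def __init__(self, machine_id: int, epoch_ms: int = 1288834974657):
        assert 0 <= machine_id < 1024      # 10 bits
        self.machine_id = machine_id
        self.epoch_ms = epoch_ms
        self.last_ms = -1
        self.seq = 0
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now = int(time.time() * 1000)
            if now == self.last_ms:
                self.seq = (self.seq + 1) & 0xFFF  # 12-bit sequence
                if self.seq == 0:                  # exhausted this millisecond
                    while now <= self.last_ms:
                        now = int(time.time() * 1000)
            else:
                self.seq = 0
            self.last_ms = now
            return ((now - self.epoch_ms) << 22) | (self.machine_id << 12) | self.seq
```

Base62-encode the resulting integer and you have a short code, no DB retries needed.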
andreyka26_se
Photo
especially the reply to the comment, this disappoints me the most....
Daily
This task was in either the top 75 or the top 100 liked list, so it is a good mid question.
https://leetcode.com/problems/successful-pairs-of-spells-and-potions/description/?envType=daily-question&envId=2025-10-08
#daily
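The standard sort + binary search sketch, if you want to compare:

```python
from bisect import bisect_left

def successfulPairs(spells, potions, success):
    # Sort potions once; for each spell, binary-search the first potion
    # whose product reaches `success` - everything to its right works too.
    potions.sort()
    n = len(potions)
    res = []
    for s in spells:
        need = (success + s - 1) // s  # smallest potion p with s * p >= success
        res.append(n - bisect_left(potions, need))
    return res
```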
How frequently do you solve the Daily leetcode challenge?
Anonymous Poll
- (almost) every day: 20%
- a couple of times per week: 15%
- a couple of times per month: 24%
- never: 41%
andreyka26_se
Eventually exclusive content!
It is only for you people. Instagram and TikTok won't see it.
My friend asked me "how would you scale a game matchmaking system". This is actually a very good system design question.
I have no idea of the requirements, but this is something I came up with in 15 mins.
A lot of stuff here is out of scope or missing.
The concept is a bit similar to the "Virtual queue" in the Ticketmaster system design, where you have a shared sorted set and need to pop N out of it (toy sketch below).
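A toy sketch of that "pop N from a shared sorted set" idea with redis-py; the key name, rating window, and match size are all made up, and a real matchmaker would wrap the claim in a Lua script or transaction:

```python
import redis

r = redis.Redis()  # hypothetical local instance

QUEUE = "mm:queue"  # sorted set: member = player_id, score = rating

def enqueue(player_id: str, rating: int) -> None:
    r.zadd(QUEUE, {player_id: rating})

def try_match(rating: int, window: int = 100, match_size: int = 2):
    # Grab up to `match_size` queued players whose rating falls within
    # `window` of the target, then remove them from the shared set.
    # Without an atomic Lua script / WATCH-MULTI, two matchmaker workers
    # could claim the same players - this sketch ignores that race.
    players = r.zrangebyscore(QUEUE, rating - window, rating + window,
                              start=0, num=match_size)
    if len(players) < match_size:
        return None  # not enough players in range yet
    r.zrem(QUEUE, *players)
    return players
```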