📥 Received from: Alex Xu
-------------
How does Netflix scale push messaging for millions of devices?
.
.
This post draws from an article published on Netflix’s engineering blog. Here’s my understanding of how the online streaming giant’s system works.
𝐑𝐞𝐪𝐮𝐢𝐫𝐞𝐦𝐞𝐧𝐭𝐬 & 𝐬𝐜𝐚𝐥𝐞
- 220 million users
- Near real-time
- Backend systems need to send notifications to various clients
- Supported clients: iOS, Android, smart TVs, Roku, Amazon FireStick, web browser
𝐓𝐡𝐞 𝐥𝐢𝐟𝐞 𝐨𝐟 𝐚 𝐩𝐮𝐬𝐡 𝐧𝐨𝐭𝐢𝐟𝐢𝐜𝐚𝐭𝐢𝐨𝐧
1. Push notification events are triggered by the clock (scheduled jobs), by user actions, or by other backend systems.
2. Events are sent to the event management engine.
3. The event management engine listens for specific events and forwards them to different queues. Priority-based forwarding rules determine which queue each event goes to (a rough sketch follows this list).
4. The “event priority-based processing cluster” processes events and generates push notification data for devices.
5. A Cassandra database is used to store the notification data.
6. A push notification is sent to outbound messaging systems.
7. For Android, FCM is used to send push notifications. For Apple devices, APNs is used. For web, TV, and other streaming devices, Netflix’s homegrown solution called ‘Zuul Push’ is used.
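Below is a minimal Go sketch of the priority-based routing in step 3. The event types, priorities, and queue names are hypothetical, not Netflix’s actual configuration; the real engine would publish to durable queues rather than print.

package main

import "fmt"

// Event is a simplified push-notification event.
type Event struct {
    Type   string // e.g. "billing_alert", "new_episode"
    UserID string
}

// Hypothetical priority-based forwarding rules: event type -> queue.
var forwardingRules = map[string]string{
    "billing_alert":  "queue:high",
    "new_episode":    "queue:medium",
    "recommendation": "queue:low",
}

// route picks a queue for an event, falling back to the low-priority
// queue for unknown event types.
func route(e Event) string {
    if q, ok := forwardingRules[e.Type]; ok {
        return q
    }
    return "queue:low"
}

func main() {
    events := []Event{
        {Type: "billing_alert", UserID: "u1"},
        {Type: "new_episode", UserID: "u2"},
    }
    for _, e := range events {
        fmt.Printf("event %q for %s -> %s\n", e.Type, e.UserID, route(e))
    }
}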
Over to you: if you wanted to support every kind of device, which delivery model would work better, push or pull-based notifications?
–
Subscribe to our weekly newsletter to learn something new every week ⇩:
https://bit.ly/3FEGliw
#systemdesign #coding #interviewtips
➖➖➖➖➖➖➖➖➖
🔰 @gopher_academy
📥 Received from: Alex Xu
-------------
𝐖𝐡𝐚𝐭 𝐢𝐬 𝐭𝐡𝐞 𝐩𝐫𝐨𝐜𝐞𝐬𝐬 𝐟𝐨𝐫 𝐝𝐞𝐩𝐥𝐨𝐲𝐢𝐧𝐠 𝐜𝐡𝐚𝐧𝐠𝐞𝐬 𝐭𝐨 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧?
The diagram below shows several common 𝐝𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭 𝐬𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐞𝐬.
𝐁𝐢𝐠 𝐁𝐚𝐧𝐠 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭
Big Bang Deployment is straightforward: we roll out the new version in one go and accept service downtime during the rollout. Preparation is essential for this strategy, and we roll back to the previous version if the deployment fails.
💡 No downtime ❌
💡 Targeted users ❌
𝐑𝐨𝐥𝐥𝐢𝐧𝐠 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭
Rolling Deployment is a phased approach compared with big bang deployment: the fleet is upgraded instance by instance over a period of time.
💡 No downtime ✅
💡 Targeted users ❌
𝐁𝐥𝐮𝐞-𝐆𝐫𝐞𝐞𝐧 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭
In blue-green deployment, two environments are deployed in production simultaneously. The QA team performs various tests on the green environment. Once the green environment passes the tests, the load balancer switches users to it.
💡 No downtime ✅
💡 Targeted users ❌
𝐂𝐚𝐧𝐚𝐫𝐲 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭
With canary deployment, only a small portion of the instances are upgraded to the new version. Once all the tests pass, a small portion of users is routed to the canary instances (a rough sketch follows).
💡 No downtime ✅
💡 Targeted users ❌
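A rough Go sketch of how a small share of users could be routed to canary instances, by hashing the user ID so the assignment stays sticky across requests. The 5% split and pool names are made up for illustration; in practice the split usually lives in the load balancer.

package main

import (
    "fmt"
    "hash/fnv"
)

// chooseBackend sends roughly canaryPercent of users to the canary
// pool and everyone else to the stable pool.
func chooseBackend(userID string, canaryPercent uint32) string {
    h := fnv.New32a()
    h.Write([]byte(userID))
    if h.Sum32()%100 < canaryPercent {
        return "canary-pool"
    }
    return "stable-pool"
}

func main() {
    for _, u := range []string{"alice", "bob", "carol"} {
        fmt.Println(u, "->", chooseBackend(u, 5)) // ~5% of users hit the canary
    }
}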
𝐅𝐞𝐚𝐭𝐮𝐫𝐞 𝐓𝐨𝐠𝐠𝐥𝐞
With a feature toggle, a small portion of users with a specific flag go through the new feature’s code path, while other users go through the normal path (see the sketch after this section). This can be combined with other strategies: either the new code is rolled out in one go, or only a few instances are upgraded with it.
💡 No downtime ✅
💡 Targeted users ✅
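A minimal Go sketch of a feature toggle check. The flag store and feature name are hypothetical; real systems usually read flags from a config service and also support percentage-based rollouts.

package main

import "fmt"

// flags maps a feature name to the users it is enabled for.
var flags = map[string]map[string]bool{
    "new-player-ui": {"alice": true},
}

func isEnabled(feature, userID string) bool {
    return flags[feature][userID]
}

func renderPlayer(userID string) string {
    if isEnabled("new-player-ui", userID) {
        return "new player UI" // new code path behind the toggle
    }
    return "old player UI" // default path for everyone else
}

func main() {
    fmt.Println("alice:", renderPlayer("alice"))
    fmt.Println("bob:", renderPlayer("bob"))
}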
#systemdesign #coding #interviewtips
➖➖➖➖➖➖➖➖➖
🔰 @gopher_academy
How can Redis be used?
There is more to Redis than just caching.
.
Redis can be used in a variety of scenarios as shown in the diagram.
🔹Session
We can use Redis to share user session data among different services.
🔹Cache
We can use Redis to cache objects or pages, especially for hotspot data.
🔹Distributed lock
We can use a Redis string (SET with the NX option and an expiry) to acquire locks among distributed services.
🔹Counter
We can count how many likes or reads an article has with an atomic increment (INCR).
🔹Rate limiter
We can apply a rate limiter to requests from certain user IPs.
🔹Global ID generator
We can use a Redis integer with INCR to generate global IDs.
🔹Shopping cart
We can use Redis Hash to represent key-value pairs in a shopping cart.
🔹Calculate user retention
We can use a Bitmap to record which users log in each day and calculate user retention.
🔹Message queue
We can use a List as a simple message queue.
🔹Ranking
We can use a ZSet (sorted set) to rank articles by score. A short Go sketch of a few of these patterns follows this list.
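A minimal Go sketch of a few of the patterns above, assuming the go-redis client (github.com/redis/go-redis/v9) and a Redis server at localhost:6379; the key names are made up for illustration and errors are ignored for brevity.

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Counter / global ID generator: atomic increment.
    reads, _ := rdb.Incr(ctx, "article:42:reads").Result()
    fmt.Println("read count:", reads)

    // Distributed lock: SET with NX and an expiry.
    locked, _ := rdb.SetNX(ctx, "lock:report-job", "worker-1", 10*time.Second).Result()
    fmt.Println("lock acquired:", locked)

    // Shopping cart: hash of item -> quantity.
    rdb.HSet(ctx, "cart:user:7", "sku-123", 2)

    // Daily retention: one bit per user per day.
    rdb.SetBit(ctx, "login:2024-05-01", 7, 1)

    // Ranking: sorted set ordered by score.
    rdb.ZAdd(ctx, "article:rank", redis.Z{Score: 99, Member: "article:42"})
    top, _ := rdb.ZRevRangeWithScores(ctx, "article:rank", 0, 2).Result()
    fmt.Println("top articles:", top)
}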
#systemdesign #coding #interviewtips
➖➖➖➖➖➖➖➖➖
🔰 @gopher_academy
Why is Nginx called a “𝐫𝐞𝐯𝐞𝐫𝐬𝐞” proxy?
.
.
The diagram below shows the differences between a 𝐟𝐨𝐫𝐰𝐚𝐫𝐝 𝐩𝐫𝐨𝐱𝐲 and a 𝐫𝐞𝐯𝐞𝐫𝐬𝐞 𝐩𝐫𝐨𝐱𝐲.
🔹 A forward proxy is a server that sits between user devices and the internet.
A forward proxy is commonly used for:
1️⃣ Protect clients
2️⃣ Avoid browsing restrictions
3️⃣ Block access to certain content
🔹 A reverse proxy is a server that accepts a request from the client, forwards the request to web servers, and returns the results to the client as if the proxy server itself had processed the request (a tiny Go sketch follows the list below).
A reverse proxy is good for:
1️⃣ Protect servers
2️⃣ Load balancing
3️⃣ Cache static content
4️⃣ Encrypt and decrypt SSL communications
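A tiny Go sketch of a reverse proxy using the standard library’s net/http/httputil. The backend address is hypothetical; a production proxy such as Nginx would also add load balancing, caching, and TLS termination.

package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

func main() {
    // Clients only ever see :8080; the backend web server stays hidden.
    backend, err := url.Parse("http://127.0.0.1:9000")
    if err != nil {
        log.Fatal(err)
    }

    // NewSingleHostReverseProxy forwards each request to the backend
    // and relays the response back as if the proxy had handled it.
    proxy := httputil.NewSingleHostReverseProxy(backend)
    log.Fatal(http.ListenAndServe(":8080", proxy))
}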
—
#systemdesign #coding #interviewtips
➖➖➖➖➖➖➖➖➖
🔰 @gopher_academy
❎8 Data Structures That Power Your Databases. Which one should we pick?
The answer will vary depending on your use case. Data can be indexed in memory or on disk. Similarly, data formats vary, such as numbers, strings, geographic coordinates, etc. The system might be write-heavy or read-heavy. All of these factors affect your choice of database index format.
The following are some of the most popular data structures used for indexing data:
🔹Skiplist: a common in-memory index type. Used in Redis
🔹Hash index: a very common implementation of the “Map” data structure (or “Collection”)
🔹SSTable: immutable on-disk “Map” implementation
🔹LSM tree: Skiplist + SSTable. High write throughput
🔹B-tree: disk-based solution. Consistent read/write performance
🔹Inverted index: used for document indexing. Used in Lucene (a tiny sketch follows this list)
🔹Suffix tree: for string pattern search
🔹R-tree: multi-dimensional search, such as finding the nearest neighbor
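A tiny Go sketch of an inverted index, the structure behind document search engines like Lucene: each word maps to the IDs of the documents that contain it. The documents here are made up for illustration.

package main

import (
    "fmt"
    "strings"
)

// InvertedIndex maps a lowercased word to the IDs of documents containing it.
type InvertedIndex map[string][]int

func (idx InvertedIndex) Add(docID int, text string) {
    for _, word := range strings.Fields(strings.ToLower(text)) {
        idx[word] = append(idx[word], docID)
    }
}

func main() {
    idx := InvertedIndex{}
    idx.Add(1, "netflix streams movies")
    idx.Add(2, "redis stores data in memory")
    idx.Add(3, "netflix uses redis")

    fmt.Println("netflix ->", idx["netflix"]) // [1 3]
    fmt.Println("redis   ->", idx["redis"])   // [2 3]
}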
#systemdesign #coding #interviewtips
➖➖➖➖➖➖➖➖➖
🔰 @gopher_academy
🔵Netflix Tech Stack - (CI/CD Pipeline)
🔴Planning: Netflix Engineering uses JIRA for planning and Confluence for documentation.
🔴Coding: Java is the primary programming language for the backend service, while other languages are used for different use cases.
🔴Build: Gradle is mainly used for building, and Gradle plugins are built to support various use cases.
🔴Packaging: Package and dependencies are packed into an Amazon Machine Image (AMI) for release.
🔴Testing: Testing emphasizes Netflix's production-first culture and its focus on building chaos engineering tools.
🔴Deployment: Netflix uses its self-built Spinnaker for canary rollout deployment.
🔴Monitoring: The monitoring metrics are centralized in Atlas, and Kayenta is used to detect anomalies.
🔴Incident report: Incidents are dispatched according to priority, and PagerDuty is used for incident handling.
#systemdesign #coding #interviewtips
.
➖➖➖➖➖➖➖➖➖
🕊 @gopher_academy
𝗥𝗮𝗯𝗯𝗶𝘁𝗠𝗤 𝘃𝘀. 𝗞𝗮𝗳𝗸𝗮 𝘃𝘀. 𝗔𝗰𝘁𝗶𝘃𝗲𝗠𝗤: 𝟳 𝐊𝐞𝐲 𝗗𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝘀
🔹𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 𝗮𝗻𝗱 𝗦𝗰𝗮𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Kafka is designed for high throughput and horizontal scalability, making it well-suited for handling large volumes of data. RabbitMQ and ActiveMQ both offer high performance, but Kafka generally outperforms them in terms of throughput, particularly in scenarios with high data volume.
🔹𝗠𝗲𝘀𝘀𝗮𝗴𝗲 𝗣𝗿𝗶𝗼𝗿𝗶𝘁𝘆: RabbitMQ and ActiveMQ support message prioritization, allowing messages with higher priority to be processed before those with lower priority. Kafka does not have built-in message priority support.
🔹𝗠𝗲𝘀𝘀𝗮𝗴𝗲 𝗢𝗿𝗱𝗲𝗿𝗶𝗻𝗴: RabbitMQ and ActiveMQ guarantee message ordering within a single queue or topic, respectively. Kafka ensures message ordering within a partition but not across partitions within a topic (see the sketch after this list).
🔹𝗠𝗲𝘀𝘀𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹: RabbitMQ uses a queue-based message model following the Advanced Message Queuing Protocol (AMQP), while Kafka utilizes a distributed log-based model. ActiveMQ is built on the Java Message Service (JMS) standard and also uses a queue-based message model.
🔹𝗗𝘂𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆: All three message brokers support durable messaging, ensuring that messages are not lost in case of failures. However, the mechanisms for achieving durability differ among the three, with RabbitMQ and ActiveMQ offering configurable durability options and Kafka providing built-in durability through log replication.
🔹𝗥𝗲𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻: RabbitMQ supports replication through Mirrored Queues, while Kafka features built-in partition replication. ActiveMQ uses a Primary-Replica replication mechanism.
🔹𝗦𝘁𝗿𝗲𝗮𝗺 𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴: Kafka provides native stream processing capabilities through Kafka Streams; RabbitMQ also offers stream processing, while ActiveMQ relies on third-party libraries for it.
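A small Go sketch of the message-ordering point above: a producer-style partitioner that hashes the message key, so all messages with the same key land in the same partition and keep their relative order there, while there is no ordering guarantee across partitions. The FNV hash and three partitions are stand-ins for Kafka's real default partitioner (which uses murmur2).

package main

import (
    "fmt"
    "hash/fnv"
)

// partitionFor assigns a record to a partition by hashing its key.
func partitionFor(key string, numPartitions uint32) uint32 {
    h := fnv.New32a()
    h.Write([]byte(key))
    return h.Sum32() % numPartitions
}

func main() {
    const partitions = 3
    // Keying by user ID keeps each user's events ordered within one partition.
    for _, key := range []string{"user-1", "user-2", "user-1", "user-3"} {
        fmt.Printf("key %s -> partition %d\n", key, partitionFor(key, partitions))
    }
}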
Ref:
✨ RabbitMQ vs. Kafka vs. ActiveMQ: A Battle of Messaging Brokers: https://lnkd.in/g4N7UCPE
✨ 𝗚𝗿𝗼𝗸𝗸𝗶𝗻𝗴 𝗦𝘆𝘀𝘁𝗲𝗺 𝗗𝗲𝘀𝗶𝗴𝗻 𝗙𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹𝘀: https://lnkd.in/gtcCT-dJ
✨𝗚𝗿𝗼𝗸𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 𝗦𝘆𝘀𝘁𝗲𝗺 𝗗𝗲𝘀𝗶𝗴𝗻 𝗜𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 for #systemdesign #interview questions - https://lnkd.in/giwyzfkT
✨𝗚𝗿𝗼𝗸𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗦𝘆𝘀𝘁𝗲𝗺 𝗗𝗲𝘀𝗶𝗴𝗻 𝗜𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 - https://lnkd.in/grPz6meZ
#distributedmessaging #kafka #rabbitmq #activemq #systemdesign #interviewpreparation #interviewtips #softwarearchitecture
👇👇👇👇
➖➖➖➖➖➖➖➖
🕊 @gopher_academy