Reddit Programming
I will send you the newest posts from the subreddit /r/programming
A systematic framework to eliminate all UB from C++
https://www.reddit.com/r/programming/comments/1ps0m3w/a_systematic_framework_to_eliminate_all_ub_from_c/

<!-- SC_OFF -->This is an interesting, ongoing high-level paper about how C++ plans to improve safety. It covers the following strategies:
- feature removal
- refined behaviour
- erroneous behaviour
- insertion of runtime checks
- language subsetting (via profiles, probably)
- the introduction of annotations
- the introduction of entirely new language features

The paper takes into account that C++ should keep compiling older code, but should compile newer code in a safer way (via opt-ins/opt-outs). <!-- SC_ON --> submitted by /u/germandiago (https://www.reddit.com/user/germandiago)
[link] (https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3100r5.pdf) [comments] (https://www.reddit.com/r/programming/comments/1ps0m3w/a_systematic_framework_to_eliminate_all_ub_from_c/)
PatchworkOS Is a From-Scratch OS That Follows 'Everything Is a File' More Strictly than UNIX: An Overview of Sockets, Spawning Processes, and Notes (Signals)
https://www.reddit.com/r/programming/comments/1ps3he5/patchworkos_is_a_fromscratch_os_that_follows/

<!-- SC_OFF -->PatchworkOS (https://github.com/KaiNorberg/PatchworkOS) strictly follows the "everything is a file" philosophy in a way inspired by Plan 9. This can often result in unorthodox APIs that seem overcomplicated at first, but the goal is to provide a simple, consistent and, most importantly, composable interface for all kernel subsystems (more on this later). Included below are some examples to familiarize yourself with the concept. We cannot, of course, cover everything, so the concepts presented here are the ones believed to provide the greatest insight into the philosophy.

Sockets

The first example is sockets, specifically how to create and use local seqpacket sockets. To create a local seqpacket socket, you open the /net/local/seqpacket file. This is equivalent to calling socket(AF_LOCAL, SOCK_SEQPACKET, 0) on POSIX systems. The opened file can be read to return the "ID" of the newly created socket, which is a string that uniquely identifies the socket (more on this later). PatchworkOS provides several helper functions to make file operations easier, but first we will show how to do it without any helpers:

    fd_t fd = open("/net/local/seqpacket");
    char id[32] = {0};
    read(fd, id, 31);
    // ... do stuff ...
    close(fd);

Using the sread() helper, which reads a null-terminated string from a file descriptor, we can simplify this to:

    fd_t fd = open("/net/local/seqpacket");
    char* id = sread(fd);
    close(fd);
    // ... do stuff ...
    free(id);

Finally, using the sreadfile() helper, which reads a null-terminated string from a file given its path, we can simplify this even further to:

    char* id = sreadfile("/net/local/seqpacket");
    // ... do stuff ...
    free(id);

Note that the socket will persist until the process that created it and all its children have exited. Additionally, for error handling, all functions return either NULL or ERR on failure, depending on whether they return a pointer or an integer type respectively. The per-thread errno variable is used to indicate the specific error that occurred, both in user space and kernel space (however, the actual variable is implemented differently in kernel space).

Now that we have the ID, we can discuss what it actually is. The ID is the name of a directory in the /net/local directory, in which the following files exist:
- data: used to send and retrieve data
- ctl: used to send commands
- accept: used to accept incoming connections

So, for example, the socket's data file is located at /net/local/[id]/data. Say we want to make our socket into a server; we would then use the ctl file to send the bind and listen commands, similar to calling bind() and listen() on POSIX systems. In this case, we want to bind the server to the name myserver. Once again, we provide several helper functions to make this easier. First, without any helpers:

    char ctlPath[MAX_PATH] = {0};
    snprintf(ctlPath, MAX_PATH, "/net/local/%s/ctl", id);
    fd_t ctl = open(ctlPath);
    const char* str = "bind myserver && listen"; // Note the use of && to send multiple commands.
    write(ctl, str, strlen(str));
    close(ctl);

Using the F() macro, which allocates formatted strings on the stack, and the swrite() helper, which writes a null-terminated string to a file descriptor:

    fd_t ctl = open(F("/net/local/%s/ctl", id));
    swrite(ctl, "bind myserver && listen");
    close(ctl);

Finally, using the swritefile() helper, which writes a null-terminated string to a file given its path:

    swritefile(F("/net/local/%s/ctl", id), "bind myserver && listen");

If we wanted to accept a connection using our newly created server, we just open its accept file:

    fd_t fd = open(F("/net/local/%s/accept", id));
    // ... do stuff ...
    close(fd);

The file descriptor returned when the accept file is opened can be used to send and receive data, just like the one returned by accept() on POSIX systems. For the sake of completeness, to connect to the server we just create a new socket and use the connect command:

    char* id = sreadfile("/net/local/seqpacket");
    swritefile(F("/net/local/%s/ctl", id), "connect myserver");
    free(id);

Documentation (https://kainorberg.github.io/PatchworkOS/html/df/d65/group__module__net.html)

File Flags?

You may have noticed that in the sections above the open() function does not take a flags argument. This is because flags are directly part of the file path, so to create a non-blocking socket:

    open("/net/local/seqpacket:nonblock");

Multiple flags are allowed; just separate them with the : character, which means flags can be easily appended to a path using the F() macro. Each flag also has a shorthand version for which the extra : characters are omitted; for example, to open a file as create and exclusive, you can do either of

    open("/some/path:create:exclusive");
    open("/some/path:ce");

For a full list of available flags, check the Documentation (https://kainorberg.github.io/PatchworkOS/html/dd/de3/group__kernel__fs__path.html).

Permissions?

Permissions are also specified using file paths. There are three possible permissions: read, write and execute. For example, to open a file as read and write, you can do either of

    open("/some/path:read:write");
    open("/some/path:rw");

Permissions are inherited: you can't use a file with lower permissions to get a file with higher permissions. Consider the namespace section: if a directory was opened using only read permissions and that same directory was bound, then it would be impossible to open any files within that directory with any permissions other than read. For a full list of available permissions, check the Documentation (https://kainorberg.github.io/PatchworkOS/html/dd/de3/group__kernel__fs__path.html).

Spawning Processes

Another example of the "everything is a file" philosophy is the spawn() syscall used to create new processes.
We will skip the usual debate on fork() vs spawn() and just focus on how spawn() works in PatchworkOS, as there are enough discussions about that online. The spawn() syscall takes two arguments:
- const char** argv: the argument vector, similar to POSIX systems except that the first argument is always the path to the executable.
- spawn_flags_t flags: flags controlling the creation of the new process, primarily what to inherit from the parent process.

The system call may seem very small in comparison to, for example, posix_spawn() or CreateProcess(). This is intentional; trying to squeeze every possible combination of things one might want to do when creating a new process into a single syscall would be highly impractical, as those familiar with CreateProcess() may know. PatchworkOS instead allows the creation of processes in a suspended state, allowing the parent process to modify the child process before it starts executing.

As an example, let's say we wish to create a child whose stdio is redirected to some file descriptors in the parent, with an environment variable MY_VAR=my_value. First, let's pretend we have some set of file descriptors and spawn the new process in a suspended state using the SPAWN_SUSPENDED flag:

    fd_t stdin = ...;
    fd_t stdout = ...;
    fd_t stderr = ...;
    const char* argv[] = {"/bin/shell", NULL};
    pid_t child = spawn(argv, SPAWN_SUSPENDED);

At this point, the process exists but is stuck blocking before it can load its executable. Additionally, the child process has inherited all file descriptors and environment variables from the parent process. Now we can redirect the stdio file descriptors in the child process using the /proc/[pid]/ctl file, which, just like the socket ctl file, allows us to send commands to control the process. In this case, we want to use two commands: dup2 to redirect the stdio file descriptors and close to close the unneeded file descriptors.

    swritefile(F("/proc/%d/ctl", child),
               F("dup2 %d 0 && dup2 %d 1 && dup2 %d 2 && close 3 -1", stdin, stdout, stderr));

Note that close can take either one or two arguments. When two arguments are provided, it closes all file descriptors in the specified range. In our case, -1 underflows to the maximum file descriptor value, closing all file descriptors greater than or equal to the first argument. Next, we create the environment variable by creating a file in the child's /proc/[pid]/env/ directory:

    swritefile(F("/proc/%d/env/MY_VAR:create", child), "my_value");

Finally, we can start the child process using the start command:

    swritefile(F("/proc/%d/ctl", child), "start");

At this point the child process will begin executing with its stdio redirected to the specified file descriptors and the environment variable set as expected. The advantages of this approach are numerous: we avoid COW issues with fork(), weirdness with vfork(), and system-call bloat with CreateProcess(), and we get a very flexible and powerful process creation system that can use any of the other file-based APIs to modify the child process. In exchange, the only real price we pay is overhead from additional context switches, string parsing and path traversals; how much this matters in practice is debatable. For more on spawn(), check the Userspace Process API Documentation (https://kainorberg.github.io/PatchworkOS/html/d1/d10/group__libstd__sys__proc.html#gae41c1cb67e3bc823c6d0018e043022eb) and, for more information on the /proc filesystem, the Kernel Process Documentation (https://kainorberg.github.io/PatchworkOS/html/da/d0f/group__kernel__proc.html).

Notes (Signals)

The next feature to discuss is the "notes" system. Notes are PatchworkOS's equivalent to POSIX signals: they asynchronously send strings to processes. We will skip how to send and receive notes, along with details like process groups (check the docs for that), instead focusing on the biggest advantage of the notes system: additional information. Let's take an example.
Say we are debugging a segmentation fault in a program, which is a rather common scenario. In a usual POSIX environment, we might be told "Segmentation fault (core dumped)" or, even worse, just "SIGSEGV", which is not very helpful. The core limitation is that signals are just integers, so they can't carry any additional information. In PatchworkOS, a note is a string whose first word is the note type and whose remainder is arbitrary data. So in our segmentation fault example, the shell might produce output like:

    shell: pagefault at 0x40013b due to stack overflow at 0x7ffffff9af18

Note that the output shown is from the "stackoverflow" program, which intentionally causes a stack overflow through recursion. All that happened is that the shell printed the exit status of the process, which is also a string and in this case is set to the note that killed the process. This is much more useful: we know the exact address and the reason for the fault. For more details, see the Notes Documentation (https://kainorberg.github.io/PatchworkOS/html/d8/db1/group__kernel__ipc__note.html), the Standard Library Process Documentation (https://kainorberg.github.io/PatchworkOS/html/d1/d10/group__libstd__sys__proc.html) and the Kernel Process Documentation (https://kainorberg.github.io/PatchworkOS/html/da/d0f/group__kernel__proc.html).

But why?

I'm sure you have heard many an argument for and against the "everything is a file" philosophy, so I won't go over everything, but the primary reason for using it in PatchworkOS is "emergent behavior" or "composability", whichever term you prefer. Take the spawn() example: notice how there is no specialized system for setting up a child after it's been created? Instead, we have a set of small, simple building blocks that, when added together, form a more complex whole. That is emergent behavior: by keeping things simple and, most importantly, composable, we can create very complex behavior without needing to explicitly design it.

Let's take another example: say you wanted to wait on multiple processes with a waitpid() syscall. Well, that's not possible, so now we suddenly need a new system call. Meanwhile, in an "everything is a file" system we just have a pollable /proc/[pid]/wait file that blocks until the process dies and returns the exit status. Now any behavior that can be implemented with poll() can be used while waiting on processes, including waiting on multiple processes at once, waiting on a keyboard and a process, waiting with a timeout, or any weird combination you can think of. Plus it's fun. <!-- SC_ON --> submitted by /u/KN_9296 (https://www.reddit.com/user/KN_9296)
[link] (https://github.com/KaiNorberg/PatchworkOS) [comments] (https://www.reddit.com/r/programming/comments/1ps3he5/patchworkos_is_a_fromscratch_os_that_follows/)
Follow-up: Load testing my polyglot microservices game - Results and what I learned with k6 [Case Study, Open Source]
https://www.reddit.com/r/programming/comments/1ps7duq/followup_load_testing_my_polyglot_microservices/

<!-- SC_OFF -->Some time ago, I shared my polyglot Codenames custom version here - a multiplayer game built with Java (Spring Boot), Rust (Actix), and C# (ASP.NET Core SignalR). Some asked about performance characteristics across the different stacks. I finally added proper load testing with k6. Here are the results.

The Setup

Services tested (Docker containers, local machine):
- Account Service - Java 25 + Spring Boot 4 + WebFlux
- Game Service - Rust + Actix-web
- Chat Service - .NET 10 + SignalR

Test scenarios:
- Smoke tests (baseline, 1 VU)
- Load tests (10 concurrent users, 6m30s ramp)
- SignalR real-time chat (2 concurrent sessions)
- Game WebSocket (3 concurrent sessions)

Results

    Service         | Endpoint    | p95 Latency
    Account (Java)  | Login       | 64ms
    Account (Java)  | Register    | 138ms
    Game (Rust)     | Create game | 15ms
    Game (Rust)     | Join game   | 4ms
    Game (Rust)     | WS Connect  | 4ms
    Chat (.NET)     | WS Connect  | 37ms

Load test (10 VUs sustained):
- 1,411 complete user flows
- 8,469 HTTP requests
- 21.68 req/s throughput
- 63ms p95 response time
- 0% error rate

SignalR Chat test (.NET):
- 84 messages sent, 178 received
- 37ms p95 connection time
- 100% message delivery

Game WebSocket test (Rust/Actix):
- 90 messages sent, 75 received
- 4ms p95 connection time
- 45 WebSocket sessions
- 100% success rate

What I learned

Rust is fast, but the gap is smaller than expected. The Game service (Rust) responds in 4-15ms, while Account (Java with WebFlux) sits at 64-138ms. That's about a 10x difference, but both are well under any reasonable SLA. For a hobby project, Java's developer experience wins.

SignalR just works. I expected WebSocket testing to be painful. The k6 implementation required a custom SignalR client, but once it was working the .NET service handled real-time messaging flawlessly.

WebFlux handles the load. Spring Boot 4 + WebFlux on Java 25 handles concurrent requests efficiently with its reactive/non-blocking model.

The polyglot tax is real but manageable. Three different build systems, three deployment configs, three ways to handle JSON. But each service plays to its language's strengths. The SignalR client implements the JSON protocol handshake, message framing and hub invocation (basically what the official client does, but for k6). The Game WebSocket client is simpler: native WebSocket with JSON messages for join/leave/gameplay actions.

What's next

- Test against GCP Cloud Run (cold starts, auto-scaling)
- Stress testing to find breaking points
- Add Gatling for comparison

<!-- SC_ON --> submitted by /u/Lightforce_ (https://www.reddit.com/user/Lightforce_)
[link] (https://gitlab.com/RobinTrassard/codenames-microservices/-/tree/account-java-version) [comments] (https://www.reddit.com/r/programming/comments/1ps7duq/followup_load_testing_my_polyglot_microservices/)
Constvector: Log-structured std::vector alternative – 30-40% faster push/pop
https://www.reddit.com/r/programming/comments/1ps8s9e/constvector_logstructured_stdvector_alternative/

<!-- SC_OFF -->Usually std::vector starts with capacity N and grows to capacity 2 * N once its size reaches N; at that point, it also copies the data from the old array to the new array. That has a few problems:
1. Copy cost.
2. The OS/allocator needs to manage the small capacity-N array that's freed by the application.
3. The L1 and L2 caches need to invalidate the array items, since the array moved to a new location, and the CPU needs to fetch them into L1/L2 again as if they were new data, when in reality they're not.

std::vector's reallocations and recopies are amortised O(1), but at a low level they have a lot of negative impact. Here's a log-structured alternative (constvector) with power-of-2 blocks:
- Push: 3.5 ns/op (vs 5 ns std::vector)
- Pop: 3.4 ns/op (vs 5.3 ns)
- Index: minor slowdown (3.8 vs 3.4 ns)

Strict worst-case O(1); the Θ(N) space trade-off is only log(N) extra compared to std::vector. It reduces internal memory fragmentation, and it doesn't invalidate L1/L2 cache contents on growth, hence improving performance. In the GitHub repo I benchmarked vectors of 1K to 1B elements, and constvector consistently showed better performance for push and pop operations.

YouTube: https://youtu.be/ledS08GkD40

In practice we can use a meta array of size 64 (the log(N) extra space). I implemented the bare vector operations for the comparison, since actual std::vector implementations have a lot of iterator-validation code, causing extra overhead. <!-- SC_ON --> submitted by /u/pilotwavetheory (https://www.reddit.com/user/pilotwavetheory)
[link] (https://github.com/tendulkar/) [comments] (https://www.reddit.com/r/programming/comments/1ps8s9e/constvector_logstructured_stdvector_alternative/)
Crunch: A Message Definition and Serialization Protocol for Getting Things Right
https://www.reddit.com/r/programming/comments/1ps9y9k/crunch_a_message_definition_and_serialization/

<!-- SC_OFF -->Crunch is a tool I developed using modern C++ for defining, serializing, and deserializing messages. Think along the lines of protobuf, FlatBuffers, Bebop, and MAVLink. I developed Crunch to address some grievances I have with the interface design of these existing protocols. It has the following features:
1. Field- and message-level validation is required. What makes a field semantically correct in your program is baked into the C++ type system.
2. The serialization format is a plugin. You can choose read/write-speed-optimized serialization, a protobuf-esque tag-length-value plugin, or write your own.
3. Messages have integrity checks baked in. CRC-16 or parity are shipped with Crunch, or you can write your own.
4. No dynamic memory allocation. Using template magic, Crunch calculates the worst-case length for all message types, for all serialization protocols, and exposes a constexpr API to create a buffer for serialization and deserialization.

I'm very happy with how it has turned out so far. I tried to make it super easy to use by providing bazel and cmake targets and extensive documentation. Future work involves automating cross-platform integration tests via QEMU, registering with as many package managers as I can, and creating bindings in other languages. Hopefully Crunch can be useful in your project! I have written the first in a series of blog posts about the development of Crunch, linked in my profile, if you're interested! <!-- SC_ON --> submitted by /u/volatile-int (https://www.reddit.com/user/volatile-int)
[link] (https://github.com/sam-w-yellin/crunch) [comments] (https://www.reddit.com/r/programming/comments/1ps9y9k/crunch_a_message_definition_and_serialization/)
Load Balancing Sounds Simple Until Traffic Actually Spikes. Here’s What People Get Wrong
https://www.reddit.com/r/programming/comments/1psbwq0/load_balancing_sounds_simple_until_traffic/

<!-- SC_OFF -->Load balancing is often described as “just spread traffic across servers,” but that definition collapses the moment real traffic shows up. The real failures happen when a backend is technically “healthy” but painfully slow, when sticky sessions quietly break stateful apps, or when retries and timeouts double your traffic without you noticing. At scale, load balancing stops being about distribution and starts being about failure management—health checks can lie, round-robin falls apart under uneven load, and autoscaling without the right balancing strategy just multiplies problems. This breakdown explains where textbook load balancing diverges from production reality, including L4 vs L7 trade-offs and why “even traffic” is often the wrong goal: Load Balancing (https://www.netcomlearning.com/blog/what-is-load-balancing) <!-- SC_ON --> submitted by /u/netcommah (https://www.reddit.com/user/netcommah)
[link] (https://www.netcomlearning.com/blog/what-is-load-balancing) [comments] (https://www.reddit.com/r/programming/comments/1psbwq0/load_balancing_sounds_simple_until_traffic/)
Cloud Code Feels Magical Until You Realize What It’s Actually Abstracting Away
https://www.reddit.com/r/programming/comments/1pscjp2/cloud_code_feels_magical_until_you_realize_what/

<!-- SC_OFF -->Cloud Code looks like a productivity win on day one; deploy from your IDE, preview resources instantly, fewer YAML headaches. But the real value (and risk) is what it abstracts: IAM wiring, deployment context, environment drift, and the false sense that “local == prod.” Teams move faster, but without understanding what Cloud Code is generating and managing under the hood, debugging and scaling can get messy fast. This write-up breaks down where Cloud Code genuinely helps, where it can hide complexity, and how to use it without turning your IDE into a black box: Cloud Code (https://www.netcomlearning.com/blog/cloud-code) <!-- SC_ON --> submitted by /u/netcommah (https://www.reddit.com/user/netcommah)
[link] (https://www.netcomlearning.com/blog/cloud-code) [comments] (https://www.reddit.com/r/programming/comments/1pscjp2/cloud_code_feels_magical_until_you_realize_what/)
AlloyDB for PostgreSQL: Familiar SQL, Very Unfamiliar Performance Characteristics
https://www.reddit.com/r/programming/comments/1psclu3/alloydb_for_postgresql_familiar_sql_very/

<!-- SC_OFF -->AlloyDB looks like “just Postgres on GCP” until you actually run real workloads on it. The surprises show up fast: query performance that doesn’t behave like vanilla Postgres, storage and compute scaling that changes how you think about bottlenecks, and read pools that quietly reshape how apps should be architected. It’s powerful, but only if you understand what Google has modified under the hood and where it diverges from self-managed or Cloud SQL Postgres. This breakdown explains what AlloyDB optimizes, where it shines, and where assumptions from traditional Postgres can get you into trouble: AlloyDB (https://www.netcomlearning.com/blog/alloydb-for-postgresql) <!-- SC_ON --> submitted by /u/netcommah (https://www.reddit.com/user/netcommah)
[link] (https://www.netcomlearning.com/blog/alloydb-for-postgresql) [comments] (https://www.reddit.com/r/programming/comments/1psclu3/alloydb_for_postgresql_familiar_sql_very/)
A Git confusion I see a lot with junior devs: fetch vs pull
https://www.reddit.com/r/programming/comments/1psd3r3/a_git_confusion_i_see_a_lot_with_junior_devs/

<!-- SC_OFF -->I’ve seen quite a few junior devs get stuck when git pull suddenly throws conflicts, even though they “just wanted the latest code”. I wrote a short explanation aimed at juniors that breaks down:
- what git fetch actually does
- why git pull behaves differently when the branch isn’t clean
- where git pull --rebase fits in

No theory dump. Just real examples and mental models that helped my teams.
Sharing in case it helps someone avoid a confusing first Git conflict. <!-- SC_ON --> submitted by /u/sshetty03 (https://www.reddit.com/user/sshetty03)
[link] (https://medium.com/stackademic/the-real-difference-between-git-fetch-git-pull-and-git-pull-rebase-991514cb5bd6?sk=dd39ca5be91586de5ac83efe60075566) [comments] (https://www.reddit.com/r/programming/comments/1psd3r3/a_git_confusion_i_see_a_lot_with_junior_devs/)