Reddit Programming
I will send you the newest posts from subreddit /r/programming
Decrypting Encrypted files from Akira Ransomware (Linux/ESXI variant 2024) using a bunch of GPUs -- "I recently helped a company recover their data from the Akira ransomware without paying the ransom. I’m sharing how I did it, along with the full source code."
https://www.reddit.com/r/programming/comments/1jf4r4s/decrypting_encrypted_files_from_akira_ransomware/

submitted by /u/throwaway16830261 (https://www.reddit.com/user/throwaway16830261)
[link] (https://tinyhack.com/2025/03/13/decrypting-encrypted-files-from-akira-ransomware-linux-esxi-variant-2024-using-a-bunch-of-gpus/) [comments] (https://www.reddit.com/r/programming/comments/1jf4r4s/decrypting_encrypted_files_from_akira_ransomware/)
Code Positioning System (CPS): Giving LLMs a GPS for Navigating Large Codebases
https://www.reddit.com/r/programming/comments/1jf7gxv/code_positioning_system_cps_giving_llms_a_gps_for/

Hey everyone! I've been working on a concept to address a major challenge I've encountered when using AI coding assistants like GitHub Copilot, Cody, and others: their struggle to understand and work effectively with large codebases. I'm calling it the Code Positioning System (CPS), and I'd love to get your feedback! (Note: This post was co-authored with assistance from Claude to help articulate the concepts clearly and comprehensively.)

The Problem: LLMs Get Lost in Big Projects

We've all seen how powerful LLMs can be for generating code snippets, autocompleting lines, and even writing entire functions. But throw them into a sprawling, multi-project solution, and they quickly become disoriented. They:
- Lose Context: Even with extended context windows, LLMs can't hold the entire structure of a large codebase in memory.
- Struggle to Navigate: They lack a systematic way to find relevant code, often relying on simple text retrieval that misses crucial relationships.
- Make Inconsistent Changes: Modifications in one part of the code might contradict design patterns or introduce bugs elsewhere.
- Fail to "See the Big Picture": They can't easily grasp the overall architecture or the high-level interactions between components.

Existing tools try to mitigate this with techniques like retrieval-augmented generation, but they still treat code primarily as text, not as the interconnected, logical structure it truly is.

The Solution: A "GPS for Code"

Imagine if, instead of fumbling through files and folders, an LLM had a GPS system for navigating code. That's the core idea behind CPS. It provides:

Hierarchical Abstraction Layers: Like zooming in and out on a map, CPS presents the codebase at different levels of detail:
- L1: System Architecture: Projects, namespaces, assemblies, and their high-level dependencies. (Think: country view)
- L2: Component Interfaces: Public APIs, interfaces, service contracts, and how components interact. (Think: state/province view)
- L3: Behavioral Summaries: Method signatures with concise descriptions of what each method does (pre/post conditions, exceptions). (Think: city view)
- L4: Implementation Details: The actual source code, local variables, and control flow. (Think: street view)

Semantic Graph Representation: Code is stored not as text files, but as a graph of interconnected entities (classes, methods, properties, variables) and their relationships (calls, inheritance, implementation, usage). This is key to moving beyond text-based processing.

Navigation Engine: The LLM can use API calls to "move" through the code:
- drillDown: Go from L1 to L2, L2 to L3, etc.
- zoomOut: Go from L4 to L3, L3 to L2, etc.
- moveTo: Jump directly to a specific entity (e.g., a class or method).
- follow: Trace a relationship (e.g., find all callers of a method).
- findPath: Discover the relationship path between two entities.
- back: Return to the previous location in the navigation history.

Contextual Awareness: Like a GPS knows your current location, CPS maintains context:
- Current Focus: The entity (class, method, etc.) the LLM is currently examining.
- Current Layer: The abstraction level (L1-L4).
- Navigation History: A record of the LLM's exploration path.

Structured Responses: Information is presented to the LLM in structured JSON format, making it easy to parse and understand. No more struggling with raw code snippets!

Content Addressing: Every code entity has a unique, stable identifier based on its semantic content (type, namespace, name, signature). This means the ID remains the same even if the code is moved to a different file.
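To make the navigation model above a bit more concrete, here is a rough sketch of a session object that tracks the current focus, current layer, and navigation history. It is written in Java purely for illustration (the post itself targets C# and Roslyn), and every class and method name in it is hypothetical rather than taken from any existing CPS implementation.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Conceptual sketch only: names and behavior are illustrative, not a real CPS API.
    public class NavigationSession {

        public enum Layer { L1_ARCHITECTURE, L2_INTERFACES, L3_BEHAVIOR, L4_IMPLEMENTATION }

        private String currentFocus;                              // content-addressed id of the entity in focus
        private Layer currentLayer;                               // current abstraction level (L1-L4)
        private final Deque<String> history = new ArrayDeque<>(); // exploration path, used by back()

        public NavigationSession(String rootEntityId) {
            this.currentFocus = rootEntityId;
            this.currentLayer = Layer.L1_ARCHITECTURE;
        }

        // drillDown: move one abstraction level closer to the implementation (L1 -> L2 -> L3 -> L4)
        public void drillDown(String targetId) {
            history.push(currentFocus);
            currentFocus = targetId;
            if (currentLayer.ordinal() < Layer.L4_IMPLEMENTATION.ordinal()) {
                currentLayer = Layer.values()[currentLayer.ordinal() + 1];
            }
        }

        // zoomOut: move one abstraction level toward the system view (L4 -> L3 -> L2 -> L1)
        public void zoomOut() {
            if (currentLayer.ordinal() > 0) {
                currentLayer = Layer.values()[currentLayer.ordinal() - 1];
            }
        }

        // moveTo: jump directly to a specific entity without changing the layer
        public void moveTo(String targetId) {
            history.push(currentFocus);
            currentFocus = targetId;
        }

        // back: return to the previous location in the navigation history
        public void back() {
            if (!history.isEmpty()) {
                currentFocus = history.pop();
            }
        }

        public String getCurrentFocus() { return currentFocus; }
        public Layer getCurrentLayer() { return currentLayer; }
    }

In the design described in the post, a session like this would live server-side and be driven by structured JSON requests from the LLM rather than by direct method calls.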
How It Works (Technical Details)

I'm planning to build the initial proof of concept in C# using Roslyn, the .NET Compiler Platform. Here's a simplified breakdown:

Code Analysis (Roslyn): Roslyn's MSBuildWorkspace loads entire solutions. The code is parsed into syntax trees and semantic models. SymbolExtractor classes pull out information about classes, methods, properties, etc. Relationships (calls, inheritance, etc.) are identified.

Knowledge Graph Construction: A graph database (initially in-memory, later potentially Neo4j) stores the logical representation.
- Nodes: Represent code entities (classes, methods, etc.).
- Edges: Represent relationships (calls, inherits, implements, etc.).
- Properties: Store metadata (access modifiers, return types, documentation, etc.).

Abstraction Layer Generation: Separate IAbstractionLayerProvider implementations (one for each layer) generate the different views:
- SystemArchitectureProvider (L1) extracts project dependencies, namespaces, and key components.
- ComponentInterfaceProvider (L2) extracts public APIs and component interactions.
- BehaviorSummaryProvider (L3) extracts method signatures and generates concise summaries (potentially using an LLM!).
- ImplementationDetailProvider (L4) provides the full source code and control flow information.

Navigation Engine: A NavigationEngine class handles requests to move between layers and entities. It maintains session state (like a GPS remembers your route). It provides methods like DrillDown, ZoomOut, MoveTo, Follow, Back.

LLM Interface (REST API): An ASP.NET Core Web API exposes endpoints for the LLM to interact with CPS. Requests and responses are in structured JSON format.

Example Request:
    {
      "requestType": "navigation",
      "action": "drillDown",
      "target": "AuthService.Core.AuthenticationService.ValidateCredentials"
    }

Example Response:
    {
      "viewType": "implementationView",
      "id": "impl-001",
      "methodId": "method-001",
      "source": "public bool ValidateCredentials(string username, string password) { ... }",
      "navigationOptions": {
        "zoomOut": "method-001",
        "related": ["method-003", "method-004"]
      }
    }

Bidirectional Mapping: Changes made in the logical representation can be translated back into source code modifications, and vice versa.
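As a concrete picture of the node/edge model described above, here is a minimal, hypothetical sketch of such a semantic graph (again in Java for illustration, not the C#/Roslyn stack the post actually targets). The record and method names are invented, and a real implementation would sit on a graph database rather than in-memory lists.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Illustrative data model only; not taken from any existing CPS implementation.
    public class CodeGraph {

        public enum Relation { CALLS, INHERITS, IMPLEMENTS, USES }

        // A node is a code entity (class, method, property, ...) with a stable,
        // content-derived id and metadata (access modifiers, return type, docs, ...).
        public record Node(String id, String kind, String name, Map<String, String> metadata) {}

        // An edge is a typed relationship between two entities.
        public record Edge(String fromId, String toId, Relation relation) {}

        private final List<Node> nodes = new ArrayList<>();
        private final List<Edge> edges = new ArrayList<>();

        public void addNode(Node node) { nodes.add(node); }

        public void addEdge(Edge edge) { edges.add(edge); }

        // follow: all entities reachable from `id` over a given relation,
        // e.g. follow("method-001", Relation.CALLS) lists everything that method calls.
        public List<String> follow(String id, Relation relation) {
            return edges.stream()
                    .filter(e -> e.fromId().equals(id) && e.relation() == relation)
                    .map(Edge::toId)
                    .toList();
        }

        // callersOf: the reverse direction, e.g. "what methods call ValidateCredentials?"
        public List<String> callersOf(String id) {
            return edges.stream()
                    .filter(e -> e.toId().equals(id) && e.relation() == Relation.CALLS)
                    .map(Edge::fromId)
                    .toList();
        }
    }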
Example Interaction

Let's say an LLM is tasked with debugging a null reference exception in a login process. Here's how it might use CPS:
- LLM: "Show me the system architecture." (Request to CPS)
- CPS: (Responds with L1 view - projects, namespaces, dependencies)
- LLM: "Drill down into the AuthService project."
- CPS: (Responds with L2 view - classes and interfaces in AuthService)
- LLM: "Show me the AuthenticationService class."
- CPS: (Responds with L2 view - public API of AuthenticationService)
- LLM: "Show me the behavior of the ValidateCredentials method."
- CPS: (Responds with L3 view - signature, parameters, behavior summary)
- LLM: "Show me the implementation of ValidateCredentials."
- CPS: (Responds with L4 view - full source code)
- LLM: "What methods call ValidateCredentials?"
- CPS: (Responds with a list of callers and their context)
- LLM: "Follow the call from LoginController.Login."
- CPS: (Moves focus to the LoginController.Login method, maintaining context)

...and so on. The LLM can seamlessly navigate up and down the abstraction layers and follow relationships, all while CPS keeps track of its "location" and provides structured information.

Why This is Different (and Potentially Revolutionary):
- Logical vs. Textual: CPS treats code as a logical structure, not just a collection of text files. This is a fundamental shift.
- Abstraction Layers: The ability to "zoom in" and "zoom out" is crucial for managing complexity.
- Navigation, Not Just Retrieval: CPS provides active navigation, not just passive retrieval of related code.
- Context Preservation: The session-based approach maintains context, making multi-step reasoning possible.

Use Cases Beyond Debugging:
- Autonomous Code Generation: LLMs could build entire features across multiple components.
- Refactoring and Modernization: Large-scale code transformations become easier.
- Code Understanding and Documentation: CPS could be used by human developers, too!
- Security Audits: Tracing data flow and identifying vulnerabilities.

Questions for the Community:
- What are your initial thoughts on this concept? Does the "GPS for code" analogy resonate?
- What potential challenges or limitations do you foresee?
- Are there any existing tools or research projects I should be aware of that are similar?
- What features would be most valuable to you as a developer?
- Would anyone be interested in collaborating on this? I am planning on open-sourcing it.

Next Steps: I'll be starting on a basic proof of concept in C# with Roslyn soon. I'm going to have to take a break for about 6 weeks; after that, I plan to share the initial prototype on GitHub and continue development.

Thanks for reading this (very) long post! I'm excited to hear your feedback and discuss this further.

submitted by /u/n1c39uy (https://www.reddit.com/user/n1c39uy)
[link] (https://www.reddit.com/r/ChatGPTCoding/comments/1jf4mgo/comment/miod13l/?context=3) [comments] (https://www.reddit.com/r/programming/comments/1jf7gxv/code_positioning_system_cps_giving_llms_a_gps_for/)
This is “vibe coding”, right?
https://www.reddit.com/r/programming/comments/1jfezif/this_is_vibe_coding_right/

I created a project before vibe coding was a thing, and I think I was vibe coding. Is this what vibe coding is? I have strong programming, networking, system administration, data modeling, and authentication skills/knowledge, but my HTML/CSS/JS/PHP knowledge wasn't enough to complete the project to a satisfactory level in a reasonable time without using AI. While I enjoyed the process, it was a very different experience from normal programming, or even programming with some AI assistance. It required in-depth knowledge of all aspects of the project to know the pieces it required, to be able to direct the AI to create the right pieces, and to be able to put those pieces together into a working final product. Doing this allowed me to complete the project of ~1300 lines of code in about 15 hours of nonstop work. I only wrote a handful of those lines, made slight modifications when needed that weren't big or complicated enough to use AI for, provided the required data models (data modeling), provided design direction, and added the abstractions needed to make the code public (extracting config values and secrets to be loaded at runtime). I also needed adequate knowledge of Discord's API.

I have created a GitHub repository that hosts the code for this project. The readme was written by me and details the project. The full ChatGPT chat used to create the project is also linked at the bottom of the readme. Let me know your thoughts on whether this is "vibe coding" or not. Some of the TODO entries also show some of the issues with creating a project like this.

Below is more context around the creation of the project and a summary of what it does. Note that you won't be able to use the code exactly as-is, since it depends on my university's authentication method to assign nicknames; this would have to be removed before it would work properly.

Essentially, my college uses a Discord server as the main communication channel for a course. Each user would have to join the server and send a message with their full name and their professor's name. An instructor or teaching assistant would then change their nickname and apply the appropriate roles to give access to the rest of the server and their section's channels. The final product is a decent UI/UX experience that allows professors to create their own invite links that they can distribute to their section. The link sends students to a webpage that requires the college's authentication, retrieves their full name, joins them to the server, then changes their nickname and adds the appropriate roles defined when the invite was created. It also contains a link to add the application to the Discord server, as well as a page to manage and view active invite links.

submitted by /u/Acherons_ (https://www.reddit.com/user/Acherons_)
[link] (https://github.com/NeonixRIT/gcisdiscordverify) [comments] (https://www.reddit.com/r/programming/comments/1jfezif/this_is_vibe_coding_right/)
Mnemosyne: a Java cache library
https://www.reddit.com/r/programming/comments/1jfjyug/mnemosyne_a_java_cache_library/

Hello everyone! I had been working on a cache library for a while, and I wanted to share the results with you. Mnemosyne works with Spring-based applications so far, but a Quarkus integration is coming soon. There is one thing that makes this cache library somewhat special: it uses a Value Pool for all cached object types, so multiple caches can be updated at the same time by just a single update. Implementations of LRU and FIFO are provided, but users are able (and indeed encouraged) to implement their own domain-specific eviction algorithms by extending AbstractMnemosyneCache and implementing its abstract methods. I haven't yet crash-tested it by having e.g. hundreds of threads reading and writing on it concurrently, but it seems to work as intended for up to several threads. There are several TODOs before making Mnemosyne trustworthy for production environments, so feel welcome to contribute if you want to.

submitted by /u/lonew0lf-G (https://www.reddit.com/user/lonew0lf-G)
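As an illustration of the extension point the post describes, here is a minimal sketch of a domain-specific eviction policy. Mnemosyne's actual AbstractMnemosyneCache API isn't shown in the post, so the abstract class and method names below are hypothetical stand-ins defined locally, not the library's real interface.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical stand-in for an abstract cache with a pluggable eviction policy.
    abstract class SimpleCache<K, V> {
        protected final Map<K, V> store = new HashMap<>();
        protected final int capacity;

        protected SimpleCache(int capacity) { this.capacity = capacity; }

        public V get(K key) {
            V value = store.get(key);
            if (value != null) {
                onAccess(key);
            }
            return value;
        }

        public void put(K key, V value) {
            if (store.size() >= capacity && !store.containsKey(key)) {
                store.remove(chooseVictim());   // eviction policy decides what to drop
            }
            store.put(key, value);
            onAccess(key);
        }

        protected abstract void onAccess(K key);   // bookkeeping hook on every hit or insert
        protected abstract K chooseVictim();       // which key to evict when the cache is full
    }

    // One possible domain-specific policy: evict the least frequently used entry.
    class LfuCache<K, V> extends SimpleCache<K, V> {
        private final Map<K, Integer> hits = new HashMap<>();

        LfuCache(int capacity) { super(capacity); }

        @Override
        protected void onAccess(K key) {
            hits.merge(key, 1, Integer::sum);
        }

        @Override
        protected K chooseVictim() {
            K victim = hits.entrySet().stream()
                    .min(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .orElse(null);
            hits.remove(victim);
            return victim;
        }
    }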
[link] (https://github.com/malandrakisgeo/mnemosyne) [comments] (https://www.reddit.com/r/programming/comments/1jfjyug/mnemosyne_a_java_cache_library/)
FaunaDB is shutting down! Here are 3 open source alternatives to switch to
https://www.reddit.com/r/programming/comments/1jflorc/faunadb_is_shutting_down_here_are_3_open_source/

Hi! In their recent announcement (https://fauna.com/blog/the-future-of-fauna), the Fauna team revealed they'll be shutting down the service on May 30, 2025. The team is committed to open-sourcing the technology, so that's great. I love the recent trend where companies share the code after they've shut down the service (e.g. Maybe (https://openalternative.co/maybe), Campfire, and now Fauna).

If you're affected by this and don't want to wait for them to release the code, I've compiled some of the best open-source alternatives to FaunaDB: https://openalternative.co/alternatives/fauna

This is by no means a complete list, so if you know of any solid alternatives that aren't included, please let me know. Thanks!

submitted by /u/piotrkulpinski (https://www.reddit.com/user/piotrkulpinski)
[link] (https://openalternative.co/alternatives/fauna) [comments] (https://www.reddit.com/r/programming/comments/1jflorc/faunadb_is_shutting_down_here_are_3_open_source/)
A comparison of ecosystems in Big Tech vs The Real World
https://www.reddit.com/r/programming/comments/1jfq5x3/a_comparison_of_ecosystems_in_big_tech_vs_the/

I wrote up a post about my experience returning to the "real world" after a long, long journey as an engineer in Big Tech (Google, specifically). What's it like developing after almost two decades away? How does the freedom of the real world compare with the control and mandatory migrations of Big Tech, and what are the outcomes of that? This is perhaps the first of several posts looking at different aspects of the developer experience in and out of Big Tech. If there's anything you'd like to hear more about, I'm happy to write it, either here or in a subsequent article!

submitted by /u/ahyatt (https://www.reddit.com/user/ahyatt)
[link] (https://substack.com/home/post/p-159443385) [comments] (https://www.reddit.com/r/programming/comments/1jfq5x3/a_comparison_of_ecosystems_in_big_tech_vs_the/)