Show HN: AnuDB - Built on RocksDB, 279x Faster Than SQLite in Parallel Workloads
4 by hashmak_jsn | 0 comments on Hacker News.
We recently benchmarked AnuDB, a lightweight embedded database built on top of RocksDB, against SQLite on a Raspberry Pi. The performance difference, especially for parallel operations, was dramatic.

GitHub Links:
- AnuDBBenchmark: https://ift.tt/8QsezbB
- AnuDB (Core): https://ift.tt/OebkDd5

Why Compare AnuDB and SQLite?
SQLite is excellent for many embedded use cases: it's simple, battle-tested, and extremely reliable. But it doesn't scale well when parallelism or concurrent writes are required. AnuDB, built over RocksDB, offers better concurrency out of the box. We wanted to measure the practical differences using real benchmarks on a Raspberry Pi.

Benchmark Setup
- Platform: Raspberry Pi 2 (ARMv7)
- Benchmarked operations: Insert, Query, Update, Delete, Parallel
- AnuDB uses RocksDB and MsgPack serialization
- SQLite stores the raw data directly, with WAL mode enabled for fairness

Key Results
- Insert: AnuDB 448 ops/sec, SQLite 838 ops/sec
- Query: AnuDB 54 ops/sec, SQLite 30 ops/sec
- Update: AnuDB 408 ops/sec, SQLite 600 ops/sec
- Delete: AnuDB 555 ops/sec, SQLite 1942 ops/sec
- Parallel (10 threads): AnuDB 412 ops/sec, SQLite 1.4 ops/sec (!)

In the parallel case, AnuDB was over 279x faster than SQLite.

Why the Huge Parallel Difference?
SQLite, even in WAL mode, allows only one writer at a time via a database-level write lock, so it's not designed for high-concurrency write scenarios. RocksDB (used in AnuDB) supports:
- Fine-grained locking
- Concurrent readers and writers
- Better parallelism thanks to its LSM-tree architecture
This explains why AnuDB significantly outperforms SQLite under threaded workloads.

Try It Yourself
Clone the repo, build, and run the benchmark:
git clone https://ift.tt/8QsezbB
cd AnuDBBenchmark
./build.sh /path/to/AnuDB /path/to/sqlite
./benchmark
Results are saved to benchmark_results.csv.

When to Use AnuDB
Use AnuDB if:
- You need embedded storage with high concurrency
- You're dealing with telemetry, sensor data, or parallel workloads
- You want something lightweight and faster than SQLite under load
Stick with SQLite if:
- You need SQL compatibility
- You value a mature ecosystem and tooling

Feedback Welcome
This is an early experiment. We're actively developing AnuDB and would love feedback: Is our benchmark fair? Where could we optimize further? Would this be useful in your embedded project?
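The actual harness lives in the AnuDBBenchmark repo; as a rough illustration of the contention the "Parallel (10 threads)" number measures, here is a minimal Python sketch of a multi-threaded insert loop against SQLite in WAL mode. The schema, operation counts, and timing are assumptions for illustration, not the real benchmark, and the AnuDB side is omitted because its client API isn't shown in the post.

```python
# Minimal sketch of a 10-thread insert benchmark against SQLite in WAL mode.
# Schema, operation counts, and timing are illustrative assumptions, not the
# AnuDBBenchmark harness itself.
import sqlite3
import threading
import time

DB_PATH = "bench.db"
THREADS = 10
OPS_PER_THREAD = 100

def setup():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS docs (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.commit()
    conn.close()

def worker(tid):
    # Each thread gets its own connection, but SQLite still allows only one
    # writer at a time, so threads spend most of their time waiting on the lock.
    conn = sqlite3.connect(DB_PATH, timeout=30)
    for i in range(OPS_PER_THREAD):
        conn.execute("INSERT INTO docs (payload) VALUES (?)", (f"thread{tid}-op{i}",))
        conn.commit()
    conn.close()

def main():
    setup()
    threads = [threading.Thread(target=worker, args=(t,)) for t in range(THREADS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    total_ops = THREADS * OPS_PER_THREAD
    print(f"{total_ops} inserts across {THREADS} threads: {total_ops / elapsed:.1f} ops/sec")

if __name__ == "__main__":
    main()
```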
Scientists taught parrots to video call each other, and the birds loved it
20 by michalpleban | 3 comments on Hacker News.
Oregon State University's Open Source Lab Is Running on Fumes
20 by doener | 0 comments on Hacker News.
MTerrain: Optimized terrain system and editor for Godot
4 by klaussilveira | 0 comments on Hacker News.
Show HN: Outpost – OSS infra for outbound webhooks and event destinations
26 by alexbouchard | 4 comments on Hacker News.
Hey HN, we're launching Outpost, an open-source, self-hostable component designed to handle outbound event delivery for SaaS/API platforms.

If you're building a platform, you eventually need to send events to your users (think payment success, resource updates, workflow changes). Implementing this reliably—handling retries, monitoring, scaling, providing a decent dev experience for consumers, and managing tenants—becomes a significant, recurring engineering task that distracts from core product development. We built Outpost to offload that complexity.

Outpost delivers events via traditional webhooks and also directly to event destinations like message queues and buses. While webhooks are ubiquitous, they have limitations at scale regarding cost, reliability patterns, and security posture. We observed platforms like Stripe, Shopify, and Twilio offering direct bus/queue integrations for these reasons—it's often cheaper and more resilient. It offers a better DX for consumers who prefer programmatic consumption. Outpost provides this flexibility out of the box as a core feature.

Key features:
- Multiple Delivery Methods: Webhooks + native Event Destinations (SQS, Kinesis, GCP Pub/Sub, RabbitMQ, Hookdeck, etc.).
- Guaranteed Delivery: At-least-once guarantee with configurable automatic retries.
- Observability: Built-in event log & OpenTelemetry support.
- Management: API for destination (endpoint) management; optional User Portal for end-user self-service (debugging, destination management).
- Multi-tenancy, topics, webhook security best practices (signatures, timestamps), etc.

Given you most likely already have a system in place, Outpost is backward compatible with your existing payload format, HTTP headers, and signatures. It's written in Go and licensed under Apache 2.0.

It's still early days, and we'd love your feedback – especially on the architecture, desired event destinations, or any rough edges you find.

GitHub: https://ift.tt/IN4QjWm
Docs: https://ift.tt/iCRlfaE

Thanks for checking it out!
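The post doesn't spell out Outpost's signing scheme, so the snippet below is only a generic sketch of the timestamp-plus-HMAC pattern the "webhook security best practices (signatures, timestamps)" bullet refers to. The header values, signing-string format, and five-minute tolerance are assumptions for illustration, not Outpost's documented API.

```python
# Generic receiver-side check for a timestamped, HMAC-signed webhook.
# Header names, signing-string format, and the 5-minute tolerance are
# illustrative assumptions, not Outpost's documented scheme.
import hashlib
import hmac
import time

def verify_webhook(secret: str, timestamp_header: str, signature_header: str,
                   body: bytes, tolerance_seconds: int = 300) -> bool:
    # Reject stale deliveries to limit replay attacks.
    try:
        ts = int(timestamp_header)
    except ValueError:
        return False
    if abs(time.time() - ts) > tolerance_seconds:
        return False

    # Recompute the signature over "<timestamp>.<raw body>" and compare in constant time.
    signed_payload = f"{timestamp_header}.".encode() + body
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Checking the timestamp before the constant-time comparison is what makes the combination useful: the signature prevents tampering, and the timestamp bound limits replay of captured deliveries.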
Accents in Latent Spaces: How AI Hears Accent Strength in English
45 by ilyausorov | 6 comments on Hacker News.
Gemini 2.5 Pro Preview: even better coding performance
17 by meetpateltech | 2 comments on Hacker News.
Show HN: Plexe – ML Models from a Prompt
22 by vaibhavdubey97 | 7 comments on Hacker News.
Hey HN! We’re Vaibhav and Marcello. We’re building Plexe ( https://ift.tt/jaCUvz4 ), an open-source agent that turns natural language task descriptions into trained ML models. Here’s a video walkthrough: https://www.youtube.com/watch?v=bUwCSglhcXY .

There are all kinds of uses for ML models that never get realized because the process of making them is messy and convoluted. You can spend months trying to find the data, clean it, experiment with models and deploy to production, only to find out that your project has been binned for taking so long. There are many tools for “automating” ML, but it still takes teams of ML experts to actually productionize something of value. And we can’t keep throwing LLMs at every ML problem. Why use a generic 10B-parameter language model if a logistic regression trained on your data could do the job better?

Our light-bulb moment was that we could use LLMs to generate task-specific ML models that would be trained on one’s own data. Thanks to the emergent reasoning ability of LLMs, it is now possible to create an agentic system that might automate most of the ML lifecycle.

A couple of months ago, we started developing a Python library that would let you define ML models on structured data using a description of the expected behaviour. Our initial implementation arranged potential solutions into a graph, using LLMs to write plans, implement them as code, and run the resulting training script. Using simple search algorithms, the system traversed the solution space to identify and package the best model. However, we ran into several limitations: the algorithm proved brittle under edge cases, and we kept having to patch every minor issue in the training process.

We decided to rethink the approach, throw everything out, and rebuild the tool using an agentic approach prioritising generality and flexibility. What started as a single ML engineering agent turned into an agentic ML “team”, with all experiments tracked and logged using MLflow. Our current implementation uses the smolagents library to define an agent hierarchy. We mapped the functionality of our previous implementation to a set of specialized agents, such as an “ML scientist” that proposes solution plans, and so on. Each agent has specialized tools, instructions, and prompt templates. To facilitate cross-agent communication, we implemented a shared memory that enables objects (datasets, code snippets, etc.) to be passed across agents indirectly by referencing keys in a registry (a rough sketch of this pattern appears below). You can find a detailed write-up on how it works here: https://ift.tt/BO28Fd4...

Plexe’s early release is focused on predictive problems over structured data, and can be used to build models such as forecasting player injury risk in high-intensity sports, product recommendations for an e-commerce marketplace, or predicting technical indicators for algorithmic trading. Here are some examples to get you started: https://ift.tt/fiHQVhJ

To get it working on your data, you can dump in any CSV, Parquet, etc., and Plexe uses what it needs from your dataset to figure out which features it should use. The open-source tool only supports adding files right now, but in our platform version we'll have support for integrating with Postgres, where it pulls all available data based on a SQL query and dumps it into a Parquet file for the agent to build models.
Next up, we’ll be tackling more of the ML project lifecycle: we’re currently working on adding a “feature engineering agent” that focuses on the complex data transformations that are often required for data to be ready for model training. If you're interested, check Plexe out and let us know your thoughts!
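The shared-memory mechanism mentioned above is only described at a high level, so here is a minimal, hypothetical sketch of that design: agents register objects under string keys and hand each other the keys instead of the objects. The class and method names are invented for illustration and are not Plexe's actual implementation.

```python
# Hypothetical sketch of the "shared memory" idea described above: agents
# register objects (datasets, code snippets, trained models) under string
# keys and pass the keys around instead of the objects themselves.
# Names and structure are illustrative, not Plexe's actual code.
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class ObjectRegistry:
    _objects: Dict[str, Any] = field(default_factory=dict)

    def register(self, key: str, obj: Any) -> str:
        self._objects[key] = obj
        return key  # agents exchange this key, not the object

    def get(self, key: str) -> Any:
        if key not in self._objects:
            raise KeyError(f"nothing registered under {key!r}")
        return self._objects[key]


# Example: an "ML scientist" agent registers a training dataset; the
# "ML engineer" agent later resolves the key to build a model.
registry = ObjectRegistry()
key = registry.register("dataset:injury_risk:v1", {"rows": 10_000})  # stand-in for a real dataframe
training_data = registry.get(key)
```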
Show HN: Sheet Music in Smart Glasses
25 by kevinlinxc | 2 comments on Hacker News.
Hi everyone, my name is Kevin Lin, and this is a Show HN for my sheet music smart glasses project. My video was on the front page on Friday: https://ift.tt/eHE2pSO , but dang said we should do a Show HN as well, so here goes!

I’ve wanted to put sheet music into smart glasses for a long time, but the perfect opportunity to execute came in mid-February, when Mentra (YC W25) tweeted about a smart glasses hackathon they were hosting - winners would get to take home a pair. I went, had a blast making a bunch of music-related apps with my teammate, and we won, so I got to take them home, refine the project, and make a pretty cool video about it ( https://www.youtube.com/watch?v=j36u2i7PKKE ).

The glasses are Even Realities G1s. They look normal, but they have two microphones, a screen in each lens, and can even be made with a prescription. Every person I’ve met who tried them on was surprised at how good the display is, and the video recordings of them unfortunately don’t do them justice.

The software runs on AugmentOS, which is Mentra’s smart glasses operating system that works on various third-party smart glasses, including the G1s. All I had to do to make an app was write and run a TypeScript file using the AugmentOS SDK. This gives you the voice transcription and raw audio as input, and text or bitmaps as output to the screens; everything else is completely abstracted away. Your glasses communicate with an AugmentOS app, and then the app communicates with your TypeScript service.

The only hard part was creating a Python script to turn sheet music (MusicXML format) into small, optimized bitmaps to display on the screens. To start, the existing landscape of music-related Python libraries is pretty poorly documented, and I ran into multiple never-before-seen error messages. Downscaling to the small size of the glasses screens also meant that stems and staff lines were disappearing, so I thought to use morphological dilation to emphasize those without making the notes unintelligible. The final pipeline was MusicXML -> music21 library to render chunks of bars to PNG -> dilate with OpenCV -> downscale -> convert to bitmap with Pillow -> optimize bitmaps with ImageMagick (the dilation and downscale step is sketched below). This is far from the best code I’ve ever written, but the LLMs’ attempts at this whole task were abysmal and my years of Python experience really got to shine here. The code is on GitHub: https://ift.tt/P7Ftc9b .

Putting it together, my TypeScript service serves these bitmaps locally when requested. I put together a UI where I can navigate menus and sheet music with voice commands (e.g. show catalog, next, select, start, exit, pause), and then I connected foot pedals to my laptop. Because of bitmap sending latency (~3s right now, but future glasses will do better), using foot pedals to turn the bars while playing wasn’t viable, so I instead had one of my pedals toggle autoscrolling, and the other two pedals sped up or temporarily paused the scrolling.

After lots of adjustments, I was able to play a full song using just the glasses! It took many takes and there was definitely lots of room for improvement. For example:
- Bitmap sending is pretty slow, which is why using the foot pedals to turn bars wasn’t viable.
- The resolution is pretty small; I would love to put more bars in at once so I can flip less frequently.
- Since foot pedals aren’t portable, it would be cool to have a mode where the audio dictates when the sheet music changes. I tried implementing that with FFT but it was often wrong and more effort is needed.
Head tilt controls would be cool too, because full manual control is a hard requirement for practicing. All of these pain points are being targeted by Mentra and other companies competing in the space, and so I’m super excited to see the next generation! Also, feel free to ask me anything!
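The dilation-and-downscale step is the non-obvious part of the pipeline described above, so here is a rough Python sketch of it using OpenCV and Pillow. The kernel size, target resolution, and threshold are guesses rather than the project's real values (those live in the linked repo), and the music21 rendering and ImageMagick optimization steps are omitted.

```python
# Rough sketch of the middle of the pipeline: take a rendered PNG of a few
# bars, thicken stems/staff lines with morphological dilation, downscale to
# the glasses' resolution, and convert to a 1-bit bitmap with Pillow.
# Kernel size, target size, and threshold are illustrative guesses.
import cv2
import numpy as np
from PIL import Image

def png_to_glasses_bitmap(png_path: str, out_path: str,
                          target_size: tuple[int, int] = (576, 136)) -> None:
    # Load as grayscale; music notation renders as black on white.
    img = cv2.imread(png_path, cv2.IMREAD_GRAYSCALE)

    # cv2.dilate grows white regions, so invert first: notes and staff lines
    # become white, get thickened, then invert back.
    inverted = cv2.bitwise_not(img)
    kernel = np.ones((3, 3), np.uint8)
    thickened = cv2.dilate(inverted, kernel, iterations=1)
    img = cv2.bitwise_not(thickened)

    # Downscale to the display resolution; INTER_AREA keeps thin lines legible.
    small = cv2.resize(img, target_size, interpolation=cv2.INTER_AREA)

    # Threshold to pure black/white and save as a 1-bit bitmap via Pillow.
    _, bw = cv2.threshold(small, 128, 255, cv2.THRESH_BINARY)
    Image.fromarray(bw).convert("1").save(out_path, format="BMP")
```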