Stuff
Itter.sh – Micro-Blogging via Terminal
8 by rrr_oh_man | 2 comments on Hacker News.
Show HN: A backend agnostic Ruby framework for building reactive desktop apps
8 by zero-st4rs | 2 comments on Hacker News.
For a year or two I've been building a UI library with the aim of making desktop applications fun and easy to write. It is currently written in C/Ruby and uses a custom tree-sitter grammar to describe templates. Features include composable UI components, template directives, event and prop handling, slots, styles, and built-in automation capabilities. One goal of the project is to privilege easy-to-write custom components via a drawing API over providing a fixed set of components. At the time of writing it should install on Windows, Mac, and Linux, though testing the install on every platform is difficult. The project is still at an early stage, and I'd love to hear any feedback, suggestions, or thoughts.
Docs are here: https://ift.tt/WfDEh2y
Repo is here: https://ift.tt/LGJ21c7
Licensed under the PPL
Show HN: BlenderQ – A TUI for managing multiple Blender renders
6 by TechSquidTV | 0 comments on Hacker News.
Hi HN, I’m a solo content creator and Blender user, and I developed this tool as an easy way to manage and run multiple Blender renders locally. The TUI is written in TypeScript, which let me build a front end with some fairly complex components in a language I’m intimately familiar with; the portions that interact with Blender are actually Python scripts.
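(The post doesn't show code, but the Blender-facing half of a tool like this usually comes down to driving Blender's headless CLI. A minimal Python sketch of that idea follows; the file names are hypothetical, and BlenderQ's actual scripts will differ:)

import subprocess
from pathlib import Path

# Hypothetical queue of .blend files to render one after another.
queue = [Path("scene_a.blend"), Path("scene_b.blend")]

for blend in queue:
    # -b (--background) runs Blender headless; -a renders the full
    # animation using the output settings saved inside the .blend file.
    result = subprocess.run(
        ["blender", "-b", str(blend), "-a"],
        capture_output=True,
        text=True,
    )
    status = "done" if result.returncode == 0 else "failed"
    print(f"{blend.name}: {status}")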
Show HN: Oliphaunt – A Native Mastodon Client for macOS
5 by anosidium | 0 comments on Hacker News.
I’ve been building Oliphaunt, a native Mastodon client for macOS, as a solo project — designed to be fast, lightweight, and feel right at home on the Mac. It is built with neither Catalyst nor Electron.
Key features:
• Native macOS UI using AppKit with some SwiftUI integration (not a web wrapper)
• Core Data for local caching
• Responsive, keyboard-friendly interface
• UX tailored for desktop-class Mac computers
• Multiple accounts, cross-instance timelines, and search
You can try it via TestFlight (macOS 14+ Sonoma): https://ift.tt/YNEj2VC
Feedback is welcome here, on GitHub, or via TestFlight: https://ift.tt/ikOQArT
Launch HN: Nao Labs (YC X25) – Cursor for Data
1 by ClaireGz | 0 comments on Hacker News.
Hey HN, we’re Claire and Christophe from nao Labs ( https://getnao.io/ ). We just launched nao, an AI code editor for working with data: a local editor, directly connected to your data warehouse, and powered by an AI copilot with built-in context of your data schema and data-specific tools. See our demo here: https://www.youtube.com/watch?v=QmG6X-5ftZU

Writing code with LLMs is the new normal in software engineering. But not when it comes to manipulating data. Tools like Cursor don’t interact natively with data warehouses — they autocomplete SQL blindly, not knowing your data schema. Most of us are still juggling multiple tools: writing code in Cursor, checking results in the warehouse console, troubleshooting with an observability tool, and verifying in the BI tool that no dashboard broke. When you write code on data with LLMs, you don’t care much about the code, you care about the data output. You need a tool that helps you write code relevant to your data, lets you visualize its impact on the output, and quality-checks it for you.

Christophe and I have each spent 10 years in data — Christophe was a data engineer and built data platforms for dozens of orgs; I was head of data and helped data teams build their analytics and data products. We’ve seen how the business asks you to ship data fast, while you’re wondering whether one small line of code will mistakenly multiply the revenue on your CEO’s dashboard by 5x. That leaves you two choices: test extensively and ship slowly, or skip testing and ship fast. That’s why we created nao: a tool actually adapted to data work, one that lets data teams ship at business pace.

nao is a fork of VS Code with built-in connectors for BigQuery, Snowflake, and Postgres. We built our own AI copilot and tab system and gave them a RAG over your data warehouse schemas and your codebase. We added a set of agent tools to query data, compare data, understand data tools like dbt, and assess the downstream impact of code across your whole data lineage. The AI tab and the AI agent write code that matches your schema from the start, whether it’s SQL, Python, or YAML. nao shows you code diffs and data diffs side by side, so you can visualize what your change did to the data output. And you can leave the data quality checks to the agent: detect missing or duplicated values and outliers, anticipate breaking changes downstream, or compare dev and production data.

Data teams usually use nao for writing SQL pipelines, often with dbt. It helps them create data models, document them, and test them, while making sure they don’t break data lineage or figures in the BI. In run mode, they also use it to run analytics and identify data quality bugs in production. For less technical users, it also helps strengthen code best practices. For large teams, it ensures the code and metrics stay well factored and consistent. Software engineers use nao for database exploration: writing SQL queries with the nao tab, exploring the data schema with the agent, and writing DDL.

A question we often get: why not just use Cursor with MCPs? Cursor has to trigger many MCP calls to get full context of the data, while nao always has it available in one RAG. MCPs also stay in a very enclosed part of Cursor: they don’t bring data context to the tab, and they don’t make the UI any more adapted to data workflows. Besides, nao comes pre-packaged for data teams: no extensions to set up, no MCPs to install and authenticate, no CI/CD pipelines to build. That means even non-technical data teams get a great developer experience.

Our long-term goal is to become the best place to work with data. We want to fine-tune our own models for SQL, Python, and YAML to give the most relevant code suggestions for data, and to broaden our coverage of data stack tools so nao becomes the one agnostic editor for any data workflow. You can try it here: https://ift.tt/Mg29TZf - download…
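(nao's internals aren't spelled out in the post, but the "compare dev and production data" check it describes can be pictured as a small schema-aware diff. A rough Python sketch of that idea against Postgres, one of nao's named connectors; the connection string, table names, and key column are all hypothetical:)

import psycopg2  # Postgres is one of nao's built-in connectors

# Hypothetical connection and tables; nao would learn these from its
# RAG of your warehouse schema rather than from hard-coded names.
conn = psycopg2.connect("dbname=analytics")

def profile(table):
    # Rough per-table profile: total rows, distinct keys, null keys.
    with conn.cursor() as cur:
        cur.execute(
            f"SELECT count(*), count(DISTINCT order_id), "
            f"coalesce(sum((order_id IS NULL)::int), 0) FROM {table}"
        )
        rows, distinct_ids, null_ids = cur.fetchone()
    return {"rows": rows, "distinct_ids": distinct_ids, "null_ids": null_ids}

dev, prod = profile("dev.orders"), profile("prod.orders")
for metric in dev:
    flag = "" if dev[metric] == prod[metric] else "  <-- differs"
    print(f"{metric}: dev={dev[metric]} prod={prod[metric]}{flag}")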
Show HN: Hydra (YC W22) – Serverless Analytics on Postgres
11 by coatue | 2 comments on Hacker News.
Hi HN, Hydra cofounders (Joe and JD) here ( https://www.hydra.so/ )! We enable realtime analytics on Postgres without requiring an external analytics database. Traditionally this was infeasible: Postgres is a rowstore database that’s 1000X slower at analytical processing than a columnstore database.

(A quick refresher for anyone interested: a rowstore stores table rows sequentially, making it efficient at inserting or updating a record but inefficient at filtering and aggregating data. At most businesses, analytical reporting scans large volumes of events, traces, and time-series data. As the volume grows, the rowstore’s inefficiency compounds: it doesn’t scale for analytics. A columnstore, in contrast, stores all the values of each column in sequence.)

For decades, businesses had to manage the row- and columnstore’s relative strengths by maintaining two separate systems. This led to large gaps in functionality, in syntax, and in the background knowledge engineers need. For example, here are the gaps between Redshift (a popular columnstore) and Postgres (a rowstore): ( https://ift.tt/kDwZAsU... ).

We think there’s a better, simpler way: unify the rowstore and columnstore – keep the data in one place and stop the costs and hassle of managing an external analytics database. With Hydra, events, traces, time-series data, user sessions, clickstream, IoT telemetry, etc. are accessible as a columnstore right alongside your standard rowstore tables.

Our solution: Hydra separates compute from storage to bring an analytics columnstore with serverless processing and automatic caching to your Postgres database. The term "serverless" can be a bit confusing, because a server always exists; it means compute is ephemeral, spun up and down automatically. The database automatically provisions and isolates dedicated compute resources for each query process. Serverless is different from managed compute, where the user explicitly allocates and scales CPU and memory continuously, potentially overpaying during idle time.

How is serverless useful? Every analytics query needs its own resources per process. The major hurdles with running analytics on Postgres are 1) rowstore performance and 2) resource contention. #2 is very often overlooked, but in practice analytics queries tend to hog resources (RAM and CPU) from Postgres transactional work, so one slightly expensive analytics query can slow down the entire database. That’s why serverless is important: it guarantees expensive queries are isolated and run on dedicated database resources per process.

Why is Hydra so fast at analytics? ( https://ift.tt/vSlrsF2 )
1) Columnstore by default
2) Metadata for efficient file-skipping and retrieval
3) Parallel, vectorized execution
4) Automatic caching

What’s the killer feature? Hydra can quickly join columnstore tables with standard row tables within Postgres using direct SQL. Example: "Segment events as a table." Instead of dumping Segment event data into an S3 bucket or an external analytics database, use Hydra to store and join events (clicks, signups, purchases) with user profile data within Postgres. Know your users in realtime: "what events predict churn?" or "which user will likely convert?" becomes immediately actionable.

Thanks for reading! We would love to hear your feedback, and if you'd like to try Hydra now, we offer a $300 credit and 14 days free per account.
We're excited to see how bringing the columnstore and rowstore side-by-side can help your project.
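(To make "Segment events as a table" concrete, here is a minimal Python sketch of the kind of query this design enables. The schema and connection are hypothetical, and the USING columnar table option is how Hydra's open-source columnar extension marks columnstore tables; the hosted product's setup may differ:)

import psycopg2

conn = psycopg2.connect("dbname=app")  # hypothetical database
with conn.cursor() as cur:
    # Hypothetical setup: high-volume events in a columnstore table
    # (Hydra's open-source extension exposes a "columnar" table access
    # method), user profiles in an ordinary rowstore table.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(user_id bigint, name text, occurred_at timestamptz) "
        "USING columnar"
    )
    # One direct SQL query joins both stores inside Postgres:
    cur.execute("""
        SELECT u.plan,
               count(*) FILTER (WHERE e.name = 'purchase') AS purchases,
               count(DISTINCT e.user_id)                   AS active_users
        FROM events e                       -- columnstore: clickstream
        JOIN users  u ON u.id = e.user_id   -- rowstore: profile data
        WHERE e.occurred_at > now() - interval '7 days'
        GROUP BY u.plan
    """)
    for plan, purchases, active in cur.fetchall():
        print(plan, purchases, active)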
The Anarchitecture Group
8 by jruohonen | 1 comment on Hacker News.
Inventing the Adventure Game (1984)
11 by CaesarA | 0 comments on Hacker News.