Explaining British Naval Dominance During the Age of Sail
10 by surprisetalk | 3 comments on Hacker News.
Ed Smylie, Who Saved the Apollo 13 Crew with Duct Tape, Dies at 95
16 by sohkamyung | 1 comment on Hacker News.
Show HN: Visual flow-based programming for Erlang, inspired by Node-RED
34 by Towaway69 | 10 comments on Hacker News.
I'm Peter Roberts, immigration attorney, who does work for YC and startups. AMA
14 by proberts | 7 comments on Hacker News.
I'll be here for the next 5-6 hours. As usual, there are countless topics given the rapidly changing immigration landscape and I'll be guided by whatever you're concerned with. Please remember that I can't provide legal advice on specific cases because I won't have access to all the facts. Please stick to a factual discussion in your questions and I'll try to do the same in my answers.
Show HN: Rv, a Package Manager for R
12 by Keats | 0 comments on Hacker News.
We have been building a package manager for R inspired by Cargo in Rust. The main idea behind rv is to be explicit about the R version in use and to declare a project's dependencies in an rproject.toml file. There's no renv::snapshot equivalent: everything is declared up front, and the config file (and resulting lockfile) is the source of truth. This avoids cases where renv might miss information about the installation, and it also makes it easy to tweak individual packages, e.g. install one from source and install suggests for another. If you have used Cargo/npm/any Python package manager/etc., it will feel very familiar.
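For a flavor of the approach, here is a hypothetical rproject.toml sketch; the field names are illustrative assumptions, not rv's confirmed schema:

```toml
# rproject.toml -- hypothetical sketch; field names are assumptions,
# not rv's confirmed schema.

[project]
name = "my-analysis"
r_version = "4.4"  # the R version is pinned explicitly, up front

dependencies = [
    "dplyr",                                       # from the default repository
    { name = "ggplot2", install_suggests = true }, # also install suggested packages
    { name = "arrow", force_source = true },       # build this one from source
]
```

Resolving against a file like this would produce the lockfile, keeping the declared config the single source of truth.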
Show HN: Workflow Use – Deterministic, self-healing browser automation (RPA 2.0)
9 by gregpr07 | 2 comments on Hacker News.
Hey HN – Gregor & Magnus here again. A few months ago, we launched Browser Use ( https://ift.tt/XjzgD64 ), which let LLMs perform tasks in the browser using natural language prompts. It was great for one-off tasks like booking flights or finding products, but we soon realized enterprises have somewhat different needs: they typically have one workflow with dynamic variables (e.g., filling out a form and downloading a PDF) that they want to reliably run a million times without breaking. Pure LLM agents were slow, expensive, and unpredictable for these high-frequency tasks. So we just started working on Workflow Use:

- You show the browser what to do (by manually recording steps; show, don't tell).
- An LLM converts these recordings into deterministic scripts with variables (scripts can include AI steps as well, where it's 100% agentic).
- Scripts run reliably, 10x faster, and ~90% cheaper than Browser Use.
- If a step breaks, the workflow falls back to Browser Use and runs the step agentically. (This self-healing functionality is still very early.)

This project just kicked off, so lots of things will break, it's definitely not production-ready yet, and plenty of stuff is still missing (like a solid editor and proper self-healing). But we wanted to share early, get feedback, and figure out what workflows you'd want to automate this way. Try it out and let us know what you think!
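To make the fallback idea concrete, here is a minimal sketch in Python under stated assumptions: the step schema and helper names are hypothetical, not the actual Workflow Use API, and `page` stands in for a Playwright-style page object.

```python
# Minimal sketch of "deterministic replay first, agentic fallback".
# Step schema and helpers are hypothetical, not the Workflow Use API.

def run_llm_agent(page, instruction, variables):
    """Placeholder for an agentic executor (e.g. a Browser Use-style agent)."""
    raise NotImplementedError("hand the step's intent to an LLM agent here")

def run_step(page, step, variables):
    if step["type"] == "agent":
        # Some steps are left fully agentic on purpose.
        run_llm_agent(page, step["prompt"], variables)
        return
    try:
        if step["type"] == "fill":
            # Deterministic: reuse the selector captured at recording time,
            # substituting this run's variables into the recorded value.
            page.fill(step["selector"], step["value"].format(**variables))
        elif step["type"] == "click":
            page.click(step["selector"])
    except Exception:
        # Self-healing: if deterministic replay breaks (say, a selector
        # changed), fall back to running the step's intent agentically.
        run_llm_agent(page, step["description"], variables)

def run_workflow(page, steps, variables):
    for step in steps:
        run_step(page, step, variables)
```

The design point is that the expensive, unpredictable LLM only runs when the cheap deterministic path fails, which is what makes the high-frequency case fast and affordable.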
Foundry (YC F24) Is Hiring – Founding Engineer (ML × SWE)
1 by lakabimanil | 0 comments on Hacker News.
New 'Superdiffusion' Proof Probes the Mysterious Math of Turbulence
5 by rbanffy | 0 comments on Hacker News.
The Magic Hours: The Films and Hidden Life of Terrence Malick
3 by mitchbob | 1 comment on Hacker News.
Show HN: KVSplit – Run 2-3× longer contexts on Apple Silicon
26 by dipampaul17 | 0 comments on Hacker News.
I discovered that in LLM inference, keys and values in the KV cache have very different quantization sensitivities. Keys need higher precision than values to maintain quality. I patched llama.cpp to enable different bit-widths for keys vs. values on Apple Silicon. The results are surprising:

- K8V4 (8-bit keys, 4-bit values): 59% memory reduction with only 0.86% perplexity loss
- K4V8 (4-bit keys, 8-bit values): 59% memory reduction but 6.06% perplexity loss
- The configurations use the same number of bits, but K8V4 is 7× better for quality

This means you can run LLMs with 2-3× longer context on the same Mac. Memory usage scales with sequence length, so savings compound as context grows.

Implementation was straightforward:
1. Added --kvq-key and --kvq-val flags to llama.cpp
2. Applied existing quantization logic separately to K and V tensors
3. Validated with perplexity metrics across context lengths
4. Used Metal for acceleration (with -mlong-calls flag to avoid vectorization issues)

Benchmarked on an M4 MacBook Pro running TinyLlama with 8K context windows. Compatible with Metal/MPS and optimized for Apple Silicon. GitHub: https://ift.tt/A0kwa2I
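The memory figure is easy to sanity-check with back-of-the-envelope arithmetic. A sketch follows (the model shape below is illustrative, not TinyLlama's exact config, and the ideal math ignores the quantization-block overhead that brings the measured figure to ~59%):

```python
# Back-of-the-envelope KV-cache sizing for mixed key/value precision.
# Shape numbers are illustrative assumptions, not a specific model's config.

def kv_cache_bytes(n_tokens, n_layers, n_kv_heads, head_dim, key_bits, val_bits):
    """Ideal KV-cache size: one key and one value element per head dim,
    per KV head, per layer, per token."""
    elems = n_tokens * n_layers * n_kv_heads * head_dim
    return elems * (key_bits + val_bits) / 8

base = kv_cache_bytes(8192, 22, 4, 64, 16, 16)  # FP16 keys and values
k8v4 = kv_cache_bytes(8192, 22, 4, 64, 8, 4)    # 8-bit keys, 4-bit values

print(f"FP16 baseline: {base / 2**20:.1f} MiB")
print(f"K8V4 cache  : {k8v4 / 2**20:.1f} MiB")
print(f"ideal reduction: {1 - k8v4 / base:.1%}")  # 62.5% ideal vs ~59% measured
```

In the patched build, the --kvq-key 8 and --kvq-val 4 flags described above would select this K8V4 split.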