Reddit Programming
I will send you the newest posts from the subreddit /r/programming
Native UI toolkit Slint 1.15 released 🎉
https://www.reddit.com/r/programming/comments/1qvslun/native_ui_toolkit_slint_115_released/

This release brings dynamic GridLayout (with `for` loops), two-way bindings on struct fields, Python type hints via slint-compiler, and improved iOS/Android support (safe area + virtual keyboard areas). submitted by /u/slint-ui (https://www.reddit.com/user/slint-ui)
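Since the release notes mention Python type hints from slint-compiler, here is a minimal sketch of how the slint Python bindings are typically used, assuming a hypothetical app.slint that exports an AppWindow component with a counter property; with the generated stubs, a type checker should be able to verify the property access.

```python
# Hedged sketch; app.slint, AppWindow, and counter are hypothetical.
import slint

# Exported components from the .slint file become Python classes.
components = slint.load_file("app.slint")
window = components.AppWindow()

# Properties declared in .slint are exposed as attributes; the new
# slint-compiler-generated type hints let a checker see this access.
window.counter = 0
window.run()
```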
[link] (https://slint.dev/blog/slint-1.15-released) [comments] (https://www.reddit.com/r/programming/comments/1qvslun/native_ui_toolkit_slint_115_released/)
A Modern Python Stack for Data Projects (uv + ruff + ty + Marimo + Polars)
https://www.reddit.com/r/programming/comments/1qvxgvx/a_modern_python_stack_for_data_projects_uv_ruff/

I put together a template repo for Python data projects (linked in the article) and wrote up the “why” behind the tool choices and trade-offs. TL;DR stack in the template:
- uv for project + env management
- ruff for linting + formatting
- ty as a newer, fast type checker
- Marimo instead of Jupyter for reactive, reproducible notebooks that are just .py files
- Polars for local wrangling/analytics
Curious what others are using in 2026 for this workflow, and where this setup falls short. submitted by /u/makeKarmaGreatAgain (https://www.reddit.com/user/makeKarmaGreatAgain)
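The Marimo point ("reproducible notebooks that are just .py files") is easiest to see in code. Here is a minimal sketch of a marimo notebook whose one cell runs a lazy Polars query; the sales.csv file and its columns are made up, and the cell layout follows marimo's generated-file convention.

```python
# A marimo notebook is a plain .py file: each cell is a function,
# and marimo re-runs cells reactively based on the names they return.
import marimo

app = marimo.App()

@app.cell
def _():
    import polars as pl

    # Lazy Polars query: scan_csv defers I/O until .collect().
    df = (
        pl.scan_csv("sales.csv")  # hypothetical input file
        .filter(pl.col("amount") > 0)
        .group_by("region")
        .agg(pl.col("amount").sum().alias("total"))
        .collect()
    )
    df  # the last expression is rendered as the cell's output
    return df, pl

if __name__ == "__main__":
    app.run()
```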
[link] (https://www.mameli.dev/blog/modern-data-python-stack/) [comments] (https://www.reddit.com/r/programming/comments/1qvxgvx/a_modern_python_stack_for_data_projects_uv_ruff/)
Has anyone tried using OpenTelemetry for local debugging instead of prod monitoring?
https://www.reddit.com/r/programming/comments/1qw9d5u/has_anyone_tried_using_opentelemetry_for_local/

I've been going down this rabbit hole with AI coding agents lately. They're great for boilerplate but kinda fall apart when you ask them to debug something non-trivial. My theory is that it's not a reasoning problem, it's an input problem: the AI only sees static code, so it's just guessing about what's happening at runtime. Which branch of an if/else ran? What was the value of that variable? It has no idea. This leads to a stupid loop where it suggests a fix, it's wrong, you tell it it's wrong, and it just guesses again, burning through your tokens.

So I had this idea: what if you could just give the AI the runtime context, like a flight recorder for your code? And then I thought about OpenTelemetry. We all use it for distributed tracing in prod, but the core tech is just instrumenting code and collecting data.

I've been messing around with it for local dev. I built a tool that uses a custom OTel exporter to write all the trace data to an in-memory ring buffer. It's always on but has a tiny footprint, since it just overwrites old data. When a bug is triggered, it freezes the buffer and snapshots the last few seconds of execution history: stack traces, variables, the whole deal. Then it injects that data directly into the AI agent's context through a local server. So now, instead of my manual console.log dance, you just copy the Agent Skill into your agent and ask "hey, debug this" like you normally would.

The results are kinda wild. Instead of guessing, the AI can say "ok, according to the runtime trace, this variable was null on line 42, which caused the crash." It's way more effective. I packaged it up into a tool called Syncause and open-sourced the Agent Skill part to make it easier to use. It feels like a much better approach than just dumping more source code into the context window. I'm still working on it, it's only been like 5 days lol

submitted by /u/NightRider06134 (https://www.reddit.com/user/NightRider06134)
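The post doesn't include code, so here is a rough sketch of the ring-buffer exporter idea against the real OpenTelemetry Python SDK; RingBufferExporter and freeze() are invented names for illustration, not Syncause's actual implementation.

```python
# Always-on "flight recorder": keep only the most recent spans in
# memory, and snapshot them when a bug is triggered.
from collections import deque

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    SimpleSpanProcessor,
    SpanExporter,
    SpanExportResult,
)

class RingBufferExporter(SpanExporter):
    def __init__(self, capacity: int = 2000):
        # deque(maxlen=...) drops the oldest spans automatically,
        # so memory stays bounded no matter how long the app runs.
        self._spans = deque(maxlen=capacity)

    def export(self, spans) -> SpanExportResult:
        self._spans.extend(spans)
        return SpanExportResult.SUCCESS

    def freeze(self) -> list[str]:
        # Snapshot the recent execution history, e.g. to inject
        # into an AI agent's context as runtime evidence.
        return [span.to_json() for span in self._spans]

    def shutdown(self) -> None:
        self._spans.clear()

exporter = RingBufferExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

A production version would likely use a BatchSpanProcessor and trigger freeze() from an exception hook, but the deque-backed exporter is the core of the "tiny footprint, always on" claim.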
[link] (https://goparttime.net/my-tasks/todo) [comments] (https://www.reddit.com/r/programming/comments/1qw9d5u/has_anyone_tried_using_opentelemetry_for_local/)