Reddit Programming
210 subscribers
1.22K photos
124K links
I will send you the newest posts from the subreddit /r/programming
Automating Vercel Deploys on Private Repos (Workaround Idea)
https://www.reddit.com/r/programming/comments/1npxaag/automating_vercel_deploys_on_private_repos/

So I’ve been playing with a problem I ran into while working on a side project, and I thought I’d share the idea and the hack I came up with. Curious if anyone has tried something similar.

The problem: On Vercel’s free plan, private repos auto-deploy only when there’s a new commit by the repo owner. You can’t manually trigger a deploy for a private repo, and if a collaborator pushes commits, those changes won’t be deployed unless the repo owner also pushes something. The usual workaround is a trivial fake commit, like changing a character in the README.md, which triggers the pipeline and deploys the actual code. Annoying and manual.

Solution (source code: https://github.com/satvikprsd/AutoBot): I built a small Node.js server that:
- listens to GitHub webhooks (push events);
- when someone other than the owner pushes code, appends a log line with a timestamp and the author to auto_deploy_log.txt;
- commits and pushes that trivial change using the repo owner's account (via a GitHub token).
Vercel sees a new commit and, boom, the auto-deploy is triggered with no manual step needed.

Would love any feedback on this.

submitted by /u/Deathfile78 (https://www.reddit.com/user/Deathfile78)
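The actual AutoBot repo is Node.js; purely for illustration, here is a minimal Python sketch of the same flow, assuming a Flask endpoint that receives GitHub push webhooks, a local clone of the repo, and a push-capable remote configured with the owner's token. All paths and names below are placeholders, not the author's code.

```python
# Minimal illustration of the AutoBot flow (assumptions: Flask, a local clone,
# and a git remote whose URL embeds the owner's token).
import subprocess
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)
REPO_DIR = "/path/to/local/clone"   # placeholder
LOG_FILE = "auto_deploy_log.txt"
OWNER = "repo-owner"                # pushes by the owner already trigger a deploy

@app.route("/webhook", methods=["POST"])
def on_push():
    payload = request.get_json(force=True)
    author = payload.get("pusher", {}).get("name", "unknown")
    if author == OWNER:
        return "owner push, nothing to do", 200

    # Append a trivial, timestamped line so there is something new to commit.
    with open(f"{REPO_DIR}/{LOG_FILE}", "a") as f:
        f.write(f"{datetime.now(timezone.utc).isoformat()} push by {author}\n")

    # Commit and push as the repo owner; Vercel sees the new commit and deploys.
    subprocess.run(["git", "-C", REPO_DIR, "add", LOG_FILE], check=True)
    subprocess.run(["git", "-C", REPO_DIR, "commit", "-m", "chore: trigger deploy"], check=True)
    subprocess.run(["git", "-C", REPO_DIR, "push"], check=True)
    return "deploy triggered", 200
```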
[link] (https://github.com/satvikprsd/AutoBot) [comments] (https://www.reddit.com/r/programming/comments/1npxaag/automating_vercel_deploys_on_private_repos/)
A step-by-step guide on how to build an LLM from scratch
https://www.reddit.com/r/programming/comments/1nq0166/a_step_by_step_guide_on_how_to_build_a_llm_from/

I wanted to share this here in the hope that it helps some folks dig deeper and learn. I just published a comprehensive guide on how to build an LLM from scratch using historical London texts from 1500-1850.

What I built:
- Two identical models (117M and 354M parameters) trained from scratch
- A custom historical tokenizer with a 30k vocabulary plus 150+ special tokens for archaic English
- A complete data pipeline processing 218+ historical sources (500M+ characters)
- Production-ready training with multi-GPU support, WandB integration, and checkpointing
- Published models on Hugging Face, ready for immediate use

Why this matters: Most LLM guides focus on fine-tuning existing models. This series shows you how to build from the ground up, eliminating modern biases and creating models that truly understand historical language patterns, cultural contexts, and period-specific knowledge.

Resources:
- Blog series: https://blog.desigeek.com/post/2025/09/building-llm-from-scratch-part1/
- Complete codebase: https://github.com/bahree/helloLondon
- Published models: https://huggingface.co/bahree/london-historical-slm
- LinkedIn (if that's your thing): https://www.linkedin.com/feed/update/urn:li:share:7376863225306365952/

The models are already working and generating authentic 18th-century London text. Perfect for developers who want to understand the complete LLM development pipeline.

Shoutout: Big thanks to u/Remarkable-Trick-177 (https://www.reddit.com/user/Remarkable-Trick-177/) for the inspiration!

submitted by /u/amitbahree (https://www.reddit.com/user/amitbahree)
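Since the models are published, the quickest way to try them is the Hugging Face transformers pipeline. A minimal sketch follows; the model ID comes from the post, while the prompt and generation settings are arbitrary examples, not the author's.

```python
# Load the published model from the Hugging Face Hub and generate period-style text.
# Assumes the transformers library is installed; prompt and length are just examples.
from transformers import pipeline

generator = pipeline("text-generation", model="bahree/london-historical-slm")
sample = generator(
    "In the year of our Lord 1750, the streets of London",
    max_new_tokens=60,
    do_sample=True,
)
print(sample[0]["generated_text"])
```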
[link] (https://blog.desigeek.com/post/2025/09/building-llm-from-scratch-part1/) [comments] (https://www.reddit.com/r/programming/comments/1nq0166/a_step_by_step_guide_on_how_to_build_a_llm_from/)
Table sorting
https://www.reddit.com/r/programming/comments/1nq05mi/table_sorting/

Yes, that simple table sorting. Ten years ago, when I started my career, that was the take-home assignment. Today, after trying to sort some simple values on a website, I'm amazed this problem still hasn't been solved. Just include the god damn sorting in the HTML spec and be done with it: every table everywhere gets sort capabilities without coding. Thanks for reading my 3AM rant.

submitted by /u/FrostyCartoonist8523 (https://www.reddit.com/user/FrostyCartoonist8523)
[link] (https://localhost.com/) [comments] (https://www.reddit.com/r/programming/comments/1nq05mi/table_sorting/)
Create systems of equations and basic algebra app
https://www.reddit.com/r/programming/comments/1nq0tif/create_systems_of_equations_and_basic_algebra_app/

I want to create an app that will:
1. Use letters, Greek characters, and subscripts for variables and equations, so typing "/Omega" makes Ω appear in its place. Perhaps there would also be a panel of Greek characters I could click on.
2. Input known variables.
3. Input the equations that make up the system of equations.
4. Automatically solve the system of equations.
5. Add additional equations that use the results of solving the system.
6. Store equations I use over and over, so I can quickly select them.
7. Let the equations and values be changed at any point in the process.

I want the UI to be clean and the subscripts to actually look like subscripts when they are input and output. What language(s) should I use to create this?

submitted by /u/SkiMtVidGame-aineer (https://www.reddit.com/user/SkiMtVidGame-aineer)
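Whatever language ends up handling the UI, the solving core (items 4 and 5) is the easiest part to sketch. Below is a minimal illustration assuming Python with SymPy; the symbols, known values, and equations are made-up examples, not a recommendation of a full stack.

```python
# Sketch of the equation-solving core only (items 4-5), assuming Python with SymPy.
# Symbols, known values, and equations below are made-up examples.
import sympy as sp

# Variables, including a Greek symbol and a subscripted one.
Omega, R_1, V = sp.symbols("Omega R_1 V")

# Known values (item 2) and the system of equations (item 3).
knowns = {V: 12}
system = [sp.Eq(V, Omega * R_1), sp.Eq(R_1, 4)]

# Solve the system automatically (item 4).
solution = sp.solve([eq.subs(knowns) for eq in system], [Omega, R_1], dict=True)[0]
print(solution)  # {Omega: 3, R_1: 4}

# A follow-up equation that reuses the solved values (item 5).
P = sp.symbols("P")
follow_up = sp.Eq(P, knowns[V] * solution[Omega] / solution[R_1])
print(sp.solve(follow_up, P))  # [9]
```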
[link] (http://thisisnotareallink.com/) [comments] (https://www.reddit.com/r/programming/comments/1nq0tif/create_systems_of_equations_and_basic_algebra_app/)
A smart way to get C++ speed for voice AI in Python: a look at the TEN framework
https://www.reddit.com/r/programming/comments/1nq11of/a_smart_way_to_get_c_speed_for_voice_ai_in_python/

We all know that getting real-time performance in Python can be tricky, especially with I/O-heavy tasks like audio streaming. I've been looking for a good way to tackle this without having to rewrite everything in C++.

I recently stumbled upon the TEN framework, and its architecture is clever. It uses a high-performance C++ core for the heavy lifting but has a clean, first-class Python API. Their new v0.10 release really refines this, so you can write all your main logic in Python and let the C++ backend handle the speed-critical parts. It's the same hybrid approach that makes libraries like NumPy so powerful. They've also built out a whole suite of tools for things like voice activity and turn detection, so you're not starting from scratch.

If you're building any application where responsiveness is critical, this project is definitely worth a look. It seems like it's built by engineers who've actually faced these problems before.

submitted by /u/Global-Biscotti-8449 (https://www.reddit.com/user/Global-Biscotti-8449)
[link] (https://github.com/TEN-framework) [comments] (https://www.reddit.com/r/programming/comments/1nq11of/a_smart_way_to_get_c_speed_for_voice_ai_in_python/)