GitHub Trends
10.6K subscribers
15.6K links
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#shell

Skills is a public collection of 16 reusable tools for Apple developers, covering tasks like generating App Store release notes from git history, debugging iOS apps, fixing SwiftUI/React performance, running bug hunts, code reviews, and refactoring. Place skill folders in `$CODEX_HOME/skills` and check each skill's `SKILL.md` for usage instructions. This saves time on repeat tasks, boosts code quality, speeds up debugging, and helps you ship better apps faster.
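The layout described above (one folder per skill, each with a `SKILL.md`) is easy to picture with a small sketch. This is illustrative Python, not part of the repo; the directory convention is the only thing taken from the summary:

```python
from pathlib import Path

def discover_skills(skills_dir: Path) -> dict:
    """Map each skill folder's name to the text of its SKILL.md manifest."""
    skills = {}
    for entry in sorted(skills_dir.iterdir()):
        manifest = entry / "SKILL.md"
        if entry.is_dir() and manifest.is_file():
            skills[entry.name] = manifest.read_text(encoding="utf-8")
    return skills
```

Pointing it at `$CODEX_HOME/skills` would list every installed skill and its documentation.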

https://github.com/Dimillian/Skills
#go #distribution_spec #helm #kubernetes #oci #oci_distribution #opencontainers #zot

Zot is a lightweight, production-ready OCI-native container registry for storing images, Helm charts, SBOMs, and other artifacts without vendor lock-in. It offers built-in authentication (OIDC, LDAP), storage options (S3, Azure), scanning, caching to cut Docker Hub limits/latency, and ARM/edge support as a single binary. You benefit by easily self-hosting a secure, scalable alternative to Docker Hub, saving costs, boosting speed, and enabling secret-less workflows on any device.

https://github.com/project-zot/zot
#python #glm #image2text #ocr

GLM-OCR is a compact 0.9B-parameter model for accurate OCR on complex documents such as tables, code, formulas, seals, and receipts, scoring 94.62 on OmniDocBench V1.5. Install via `pip install glmocr`, use the cloud API (no GPU needed) or self-host with vLLM/SGLang for fast, low-cost inference, and get JSON/Markdown outputs via CLI or Python. You get quick, robust document parsing that saves time, cuts compute costs, and integrates simply into real-world apps.

https://github.com/zai-org/GLM-OCR
#typescript

OMX is a workflow layer that enhances OpenAI Codex by adding better task routing, reusable skills, and project state management. It keeps Codex as the execution engine while introducing canonical workflows like `$deep-interview` for clarification, `$ralplan` for planning approval, and `$ralph` or `$team` for execution. The benefit is that you get a structured, consistent approach to complex coding tasks—clarify intent first, approve the plan, then execute with either persistent single-owner loops or coordinated parallel work—without replacing Codex itself, just making your day-to-day work more organized and efficient.
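The clarify → approve → execute flow above can be sketched as a tiny state machine. The stage names come from the summary; the transition rules are my reading of it, not OMX's actual implementation:

```python
# OMX's canonical workflow, sketched as a state machine (illustration only).
def next_stage(stage: str, parallel: bool = False) -> str:
    if stage == "$deep-interview":
        # clarify intent first, then seek approval of the plan
        return "$ralplan"
    if stage == "$ralplan":
        # execute with a persistent single-owner loop, or coordinated parallel work
        return "$team" if parallel else "$ralph"
    raise ValueError(f"unknown stage: {stage}")
```

The point of the structure is that execution never starts before clarification and plan approval have happened.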

https://github.com/Yeachan-Heo/oh-my-codex
#typescript #electron #open_source #pixijs #screen_capture #screen_recorder

OpenScreen is a free, open-source alternative to Screen Studio for making beautiful product demos and screen walkthroughs. Record your full screen or individual windows; add zooms, mic/system audio, backgrounds, motion blur, annotations, trims, and speed changes; and export in any aspect ratio. Download from GitHub for macOS, Linux, or Windows. It's MIT-licensed, so you can use, modify, or even sell it freely with just a copyright notice, getting the core tools without limits for personal or commercial use and saving the $29/month subscription.

https://github.com/siddharthvaddem/openscreen
#rust #filesearch #lua #neovim #neovim_plugin #rust

FFF is a super-fast fuzzy file finder for AI agents and Neovim users. It excels at grepping, fuzzy matching, and globbing, with built-in memory that ranks results by frecency, git status, file size, and more for typo-tolerant searches. Install via the setup script for AI agents (such as Claude) or via Lua for Neovim, where default mappings like `ff` (find files) and `fg` (grep) are available. This saves time and tokens by finding code instantly, skipping useless files, and boosting productivity on big repos.
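"Frecency" means frequent *and* recent accesses both raise a file's rank. FFF's real scoring also blends in git status and file size; here is a minimal sketch of the frecency part alone (the exponential decay and the one-day half-life are my choices, not the tool's):

```python
import math
import time

def frecency_score(access_times, now=None, half_life=86_400.0):
    """Sum of exponentially decayed accesses (timestamps in seconds).

    Each access contributes 1.0 when it just happened and half as much
    after every `half_life` seconds, so frequent AND recent files win.
    """
    now = time.time() if now is None else now
    decay = math.log(2) / half_life
    return sum(math.exp(-(now - t) * decay) for t in access_times)
```

A file opened twice today then beats a file opened once a week ago, which matches the intuition behind ranking search results this way.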

https://github.com/dmtrKovalenko/fff.nvim
#python #apple_silicon #florence2 #idefics #llava #llm #local_ai #mlx #molmo #paligemma #pixtral #vision_framework #vision_language_model #vision_transformer

MLX-VLM lets you run, chat with, and fine-tune Vision Language Models (VLMs) plus audio/video models on your Mac using MLX—install easily with `pip install -U mlx-vlm`. Use CLI for quick text/image/audio generation (e.g., `mlx_vlm.generate --model ... --image photo.jpg`), Gradio UI for chats, Python scripts, or a FastAPI server with OpenAI-compatible endpoints supporting multi-images/videos. Features like TurboQuant cut KV cache memory by 76%, and LoRA/QLoRA fine-tuning works on consumer hardware. You benefit by experimenting with powerful multimodal AI locally—fast, memory-efficient, no cloud costs, perfect for Mac users tweaking models affordably.
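Since the FastAPI server exposes OpenAI-compatible endpoints, a multimodal request body would look roughly like the following. The field names follow the OpenAI chat schema; whether mlx-vlm accepts inline base64 data URLs (versus file paths) is an assumption here:

```python
import base64

def chat_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat request with one inline JPEG image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }
```

POSTing a payload like this to the server's chat-completions route is how existing OpenAI client code could talk to a locally hosted VLM.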

https://github.com/Blaizzy/mlx-vlm
#yara #awesome_list #blueteam #blueteam_tools #cti #detection #detection_engineering #dfir #hacktools #incident_response #ioc #iocs #ir #ransomware #redteam #rmm #security #siem #soc #threat_hunting #threat_intelligence

You can access comprehensive security detection lists and threat hunting resources that help identify malicious activity across your infrastructure. These curated collections include indicators like suspicious file hashes, domain names, IP addresses, and behavioral patterns organized by threat type—from ransomware and phishing to command-and-control servers and vulnerable drivers. By integrating these lists into your security tools like SIEM platforms and endpoint detection systems, you gain immediate visibility into known threats while learning detection methodologies through guides and YARA rules. This accelerates your ability to hunt for compromises, validate security controls, and stay current with emerging attack techniques without building detection logic from scratch.
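Integrating such lists usually boils down to loading the indicators and scanning events against them. A minimal sketch of that pattern, assuming a plain-text list with one indicator per line and `#` comments (real SIEM integrations use proper field matching, not substring scans):

```python
def load_iocs(text: str) -> set:
    """Parse a plain-text IOC list: one indicator per line, '#' marks comments."""
    iocs = set()
    for line in text.splitlines():
        line = line.strip().lower()
        if line and not line.startswith("#"):
            iocs.add(line)
    return iocs

def match_logs(log_lines, iocs):
    """Return log lines that contain any known indicator (naive substring scan)."""
    return [line for line in log_lines
            if any(ioc in line.lower() for ioc in iocs)]
```

The curated lists do the hard part (knowing *which* hashes, domains, and IPs matter); the matching itself is mechanical.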

https://github.com/mthcht/awesome-lists
#kotlin

Google AI Edge Gallery lets you run powerful open-source AI models like Gemma 4 on your phone offline, with features like smart agents, image analysis, voice transcription, and a prompt tester. It keeps everything private on your device for fast, secure use without internet. Download from Google Play or App Store to test advanced AI reasoning and creativity anytime, boosting your productivity and privacy on the go.

https://github.com/google-ai-edge/gallery
#cplusplus

LiteRT-LM is Google's free, high-speed runtime for large language models like Gemma 4 on phones, computers, Raspberry Pi, and more, with GPU acceleration, vision/audio support, and tool use for building smart apps. It powers AI in Chrome, Pixel Watch, and Chromebook, and you can try it quickly via a CLI command on Linux, macOS, Windows, or Pi without writing code. You benefit by easily deploying fast, private on-device AI for apps, prototyping, or edge projects, saving time and cloud costs.

https://github.com/google-ai-edge/LiteRT-LM
#java #minecraft #minecraft_mod #vulkan #vulkan_renderer

VulkanMod is a Fabric mod that replaces Minecraft Java's old OpenGL renderer with a modern Vulkan 1.2 engine, cutting CPU overhead, boosting GPU performance, and adding features like Wayland support and chunk optimizations for much higher FPS and smoother gameplay. Install Fabric loader, download the .jar from Modrinth or CurseForge, and drop it in your .minecraft/mods folder to enjoy lag-free worlds and better hardware use right away—perfect for high-res packs or busy servers.

https://github.com/xCollateral/VulkanMod
#typescript

QMD is an on-device search engine that indexes your markdown notes, meeting transcripts, docs, and knowledge bases for fast keyword, semantic, or hybrid searches using local AI models—no internet needed. Install via npm or bun, add collections like `qmd collection add ~/notes --name notes`, embed with `qmd embed`, then query like `qmd query "project timeline"` for top results with scores and context. It integrates with AI agents via JSON output or MCP server. You benefit by quickly finding info across your files to boost productivity and make smarter decisions in agentic workflows.
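Hybrid search generally means fusing a keyword score with an embedding-similarity score. A toy sketch of that fusion, with simplistic stand-ins for both scorers (QMD's actual ranking formula and weights are not documented in this summary):

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms present in the document (crude keyword match)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(kw: float, sem: float, alpha: float = 0.5) -> float:
    """Blend keyword and semantic scores; alpha weights the keyword side."""
    return alpha * kw + (1 - alpha) * sem
```

The blend is why hybrid mode catches both exact terms ("project timeline") and paraphrases a pure keyword index would miss.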

https://github.com/tobi/qmd
#python

PersonaPlex is a real-time speech model for natural, low-latency conversations. Control its voice with audio prompts and its role via simple text, such as a friendly teacher, customer service rep, or casual chat partner, with natural male and female voices. Install easily, launch a web demo server, and test offline. You benefit by creating personalized AI interactions for apps, role-play, or fun talks, with quick setup and low GPU needs via CPU offload.

https://github.com/NVIDIA/personaplex
#python #ai_agents #ai_tutor #clawdbot #cli_tool #deepresearch #interactive_learning #large_language_models #multi_agent_systems #rag

DeepTutor v1.0.0 is an open-source AI tutoring tool with personalized TutorBots and unified chat modes for problem solving, quizzes, research, and math animations. It builds knowledge bases from your PDFs, keeps persistent memory of your learning style, and offers AI co-writing and guided study plans, all via an easy web, Docker, or CLI setup. You get a smart, evolving study companion that adapts to you, boosts understanding with interactive tools, and saves time on tough topics without starting over.

https://github.com/HKUDS/DeepTutor
#other

Use Karpathy-inspired guidelines in a single CLAUDE.md file to fix Claude's coding flaws like wrong assumptions, overcomplicated code, unnecessary edits, and poor goal-setting. Follow four rules: think explicitly before coding, prioritize simplicity, make only required changes, and use tests for verifiable success. Install via Claude plugin or curl command. You benefit with cleaner, minimal code, fewer errors, proactive questions, and self-correcting AI that delivers precise results faster.
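The four rules condense into a file shaped roughly like this (a hand-written illustration of the format, not the repo's exact text):

```markdown
# CLAUDE.md

## Coding rules
1. **Think first.** State assumptions and a plan explicitly before writing code.
2. **Prefer simplicity.** Choose the least complex solution that works.
3. **Minimal diffs.** Change only what the task requires; touch nothing else.
4. **Verify with tests.** Define success as passing tests, and run them.
```

Claude reads this file at the start of a session, so the constraints apply to every task without being restated in each prompt.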

https://github.com/forrestchang/andrej-karpathy-skills
#c_lang #jq

jq is a lightweight command-line tool like sed, awk, or grep, but for processing JSON data. It lets you easily slice, filter, map, and transform structured data with zero runtime dependencies. Install via prebuilt binaries from GitHub releases, Docker (e.g., `docker run --rm -i ghcr.io/jqlang/jq:latest < package.json '.version'` to extract version), or build from source. This saves you time handling JSON in scripts, APIs, or files efficiently without heavy software.
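For readers without jq handy, the `.version` extraction above, and a typical filter-and-map pipeline, are equivalent to this stdlib Python (illustration only; the `stars`/`name` fields are made-up example data):

```python
import json

def jq_version(package_json: str) -> str:
    """Equivalent of `jq -r '.version'`: parse JSON, pull one field."""
    return json.loads(package_json)["version"]

def jq_select_names(items_json: str, min_stars: int):
    """Roughly `jq '[.[] | select(.stars >= N) | .name]'`: filter, then map."""
    return [item["name"] for item in json.loads(items_json)
            if item["stars"] >= min_stars]
```

jq packs the same slice/filter/map operations into one-line expressions, which is why it is so convenient in shell pipelines.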

https://github.com/jqlang/jq
#typescript

Multica is an open-source platform that turns coding agents into real teammates. Assign tasks to them like colleagues—they write code, report issues, update progress, and build reusable skills over time, with no babysitting needed. Use Multica Cloud for instant start or self-host with Docker and CLI for local control; it works with Claude Code, Codex, and more. This saves you hiring costs for your next 10 developers, boosts team productivity, and lets humans and AI collaborate seamlessly on the same board.

https://github.com/multica-ai/multica
#other #awesome #awesome_list #design_systems #hacktoberfest #pattern_library #ui_library

A design system is a collection of documentation, principles, and reusable elements that helps teams build digital products consistently. It includes UI components, pattern libraries, style guides, and guidelines for accessibility and user experience. The main benefit is that design systems enable teams to work faster and more efficiently by providing pre-built, standardized pieces they can reuse across projects, ensuring visual consistency and reducing redundancy while creating a shared language across your organization.

https://github.com/alexpate/awesome-design-systems
#c_lang #aarch64 #arm #arm64 #bios #boot_loader #boot_manager #bootloader #efi #gpt #loongarch #loongarch64 #loongson #mbr #risc_v #riscv #riscv64 #uefi #x64 #x86 #x86_64

Limine is a modern bootloader that boots Linux and other OSes on x86, ARM64, RISC-V, and LoongArch64 hardware, supporting MBR/GPT partitions and FAT/ISO filesystems on 32-bit Pentium Pro+ or 64-bit systems. Get binaries via Git (e.g., `git clone --branch=v11.x-binary`), build tools with `make`, and join Matrix/Fluxer chats for help. This lets you easily manage and boot multiple OSes with a clean menu, saving time on custom PC or server setups.

https://github.com/Limine-Bootloader/Limine
#typescript

Ralph is an autonomous AI agent that loops coding tools like Amp or Claude Code to fully implement your project's Product Requirements Document (PRD) by tackling one small user story per fresh iteration, using git history, progress.txt, and prd.json for memory. Setup is simple: install prerequisites, copy scripts or skills to your repo, generate a PRD, convert to JSON, then run `./scripts/ralph/ralph.sh` for up to 10 iterations until all tasks pass checks and complete. This saves you hours of manual coding on greenfield features, delivering working code reliably with minimal supervision.
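The loop's core idea (one story per fresh iteration, with `prd.json` and `progress.txt` as the only memory) can be sketched like this. The `implement` callable stands in for the real agent invocation (Amp, Claude Code, …), and the exact JSON schema is my assumption:

```python
import json
from pathlib import Path

def run_ralph(prd_path: Path, progress_path: Path, implement, max_iters: int = 10) -> int:
    """Work through prd.json one small user story per iteration."""
    prd = json.loads(prd_path.read_text())
    done = 0
    for _ in range(max_iters):
        # pick the first incomplete story; stop when everything is done
        story = next((s for s in prd["stories"] if not s.get("done")), None)
        if story is None:
            break
        implement(story)                          # fresh agent iteration
        story["done"] = True
        done += 1
        prd_path.write_text(json.dumps(prd))      # persist state for the next loop
        with progress_path.open("a") as f:
            f.write(f"completed: {story['id']}\n")
    return done
```

Because all state lives on disk, each iteration can start from a clean agent context, which is what keeps long runs from drifting.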

https://github.com/snarktank/ralph