GitHub Trends
10.2K subscribers
15.4K links
See what the GitHub community is most excited about today.

A bot automatically fetches new repositories from https://github.com/trending and sends them to the channel.

Author and maintainer: https://github.com/katursis
#javascript #agent #agentic #agentic_ai #ai #ai_agents #automation #cursor #design #figma #generative_ai #llm #llms #mcp #model_context_protocol

Cursor Talk to Figma MCP lets Cursor AI read and edit your Figma designs directly, using tools like `get_selection` to read the current selection, `set_text_content` for bulk text changes, `create_rectangle` for shapes, and `set_instance_overrides` for propagating component overrides. Setup is quick: install Bun, run `bun setup` and `bun socket`, and add the Figma plugin. This saves you hours by skipping context switches, automating repetitive tasks like text replacement or override propagation, speeding up design-to-code workflows, and keeping everything in sync for faster, more precise builds.

https://github.com/grab/cursor-talk-to-figma-mcp
#go #bpf #cncf #cni #containers #ebpf #k8s #kernel #kubernetes #kubernetes_networking #loadbalancing #monitoring #networking #observability #security #troubleshooting #xdp

Cilium is an eBPF-based tool for Kubernetes that delivers fast networking, deep visibility, and strong security. It creates simple Layer 3 networks across clusters, handles load balancing to replace kube-proxy, enforces identity-based policies from L3 to L7 (like HTTP or DNS rules), supports service mesh with encryption, and offers Hubble for real-time traffic monitoring. Stable versions like v1.18.6 run on AMD64/AArch64. You gain scalable performance, easier policy management without IP hassles, better troubleshooting, and higher efficiency for large cloud-native apps, cutting costs and boosting reliability.

https://github.com/cilium/cilium
#typescript

Eigent is an open-source desktop application that lets you build and deploy a custom AI workforce to automate complex tasks. It uses multiple specialized agents working in parallel—like a Developer Agent for coding, a Search Agent for web research, and a Document Agent for file management—to handle sophisticated workflows efficiently. You can run it locally on your own computer for complete privacy and control, or use the cloud version for quick setup. The main benefit is boosting productivity by automating multi-step processes like report generation, market research, and data analysis without requiring technical configuration, while keeping your data completely private.

https://github.com/eigent-ai/eigent
#shell

OpenCode now supports Claude Max/Pro subscriptions through the `opencode-anthropic-auth` plugin, allowing you to use your Claude subscription with both Claude Code and OpenCode in your terminal. This integration works with Gentleman.Dots, a complete development environment configuration that includes Neovim with AI assistants, multiple shells (Fish, Zsh, Nushell), terminal multiplexers (Tmux, Zellij), and various terminal emulators. You can install it via Homebrew or direct download across macOS, Linux, and Android platforms. The setup includes an interactive TUI installer that automatically configures your preferred tools, plus a Vim Mastery Trainer for learning editor shortcuts through progressive lessons and boss fights. This gives you a fully integrated AI-powered coding environment optimized for terminal-based development workflows.

https://github.com/Gentleman-Programming/Gentleman.Dots
#typescript #acp #ai #ai_agent #banana #chat #chatbot #claude_code #codex #cowork #excel #gemini #gemini_cli #gemini_pro #llm #multi_agent #nano_banana #office #qwen_code #skills #webui

AionUi is a free, open-source app that gives your CLI AI tools like Gemini CLI, Claude Code, and Qwen Code a simple graphical interface on macOS, Windows, or Linux. It auto-detects installed tools for easy chatting, saves conversations locally with multi-session support, organizes files smartly, previews 9+ formats like PDF or code instantly, generates and edits images, and offers web access. You benefit by ditching complex commands for quick, secure AI help in office tasks, coding, or data work—saving time and boosting productivity without data leaving your device.

https://github.com/iOfficeAI/AionUi
#python #agent #ai #aippt #editable_pptx #langgraph #paper2slides #ppt_generator

Paper2Any turns paper PDFs, images, or text into editable diagrams, technical roadmaps, experiment plots, PPT slides, and more with one click. Key tools include Paper2Figure for scientific visuals, Paper2PPT for custom decks with table extraction, PDF2PPT for layout-faithful conversions, and AI beautification. Install from GitHub (Python 3.11+, Linux preferred), or try the online demo or the provided scripts. You save hours recreating figures or slides for research, talks, or reports, getting professional-quality, customizable outputs fast.

https://github.com/OpenDCAI/Paper2Any
#shell

Try is a simple Ruby tool that organizes your coding experiments in one folder like ~/src/tries, using fuzzy search to quickly find or create dated directories (e.g., 2025-01-18-redis-test). Install it via `gem install try-cli` or by curling the single file, then add `eval "$(try init)"` to your shell—no further setup needed. It ranks recent projects highest with smart matching, so you avoid scattered "test" folders and lost /tmp work. This saves time when jumping between ideas, keeping your chaotic projects instantly accessible and productive.

https://github.com/tobi/try
#python #audio #deeplearning #minicpm #python #pytorch #speech #speech_synthesis #text_to_speech #tts #tts_model #voice_cloning

VoxCPM is a free, open-source TTS tool that turns text into realistic speech without discrete speech tokens, creating expressive audio that matches context and cloning voices from just 3-10 seconds of sample audio. Download VoxCPM1.5 (800M params) from Hugging Face, install via pip, and use simple Python or CLI commands for fast synthesis (RTF 0.15 on an RTX 4090) or for fine-tuning your own voices. You benefit by easily making natural audiobooks, podcasts, voice clones, or apps with pro-quality sound—saving time and costs on voice work.
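
Here is a minimal sketch of the "simple Python commands" route, assuming a Hugging Face-style `from_pretrained` loader, a `generate` method, and a `voxcpm` package name—these names are assumptions rather than confirmed API, so check the repo's README for the exact interface:

```python
# Hypothetical usage sketch; package, class, method, and argument names
# are assumptions, not confirmed against the VoxCPM README.
import soundfile as sf     # pip install soundfile
from voxcpm import VoxCPM  # pip install voxcpm (assumed package name)

# Load the pretrained checkpoint from Hugging Face (model ID assumed).
model = VoxCPM.from_pretrained("openbmb/VoxCPM1.5")

# Plain synthesis; pass a short reference clip plus its transcript to clone a voice.
wav = model.generate(
    text="VoxCPM turns text into expressive, context-aware speech.",
    prompt_wav_path="reference.wav",  # 3-10 s sample of the target voice
    prompt_text="Transcript of the reference clip.",
)

sf.write("output.wav", wav, 16000)  # sample rate assumed to be 16 kHz
```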

https://github.com/OpenBMB/VoxCPM
#c_lang

TaskExplorer is a powerful Windows task manager that gives you deep insight into what your applications are doing in real-time. It displays process information in easy-to-use panels showing threads, memory, network connections, and system resources without cluttering your screen. You benefit from advanced diagnostic tools like stack traces for finding performance problems, memory editing capabilities, and detailed monitoring of disk operations and network activity. The streamlined interface lets you navigate quickly using arrow keys while watching live updates, making it ideal for troubleshooting software issues, optimizing system performance, and detecting problems that standard Task Manager cannot reveal.

https://github.com/DavidXanatos/TaskExplorer
#html #ai_agent_tools #ai_agents #ai_tools #code_execution #code_executor #code_runner #competitive_programming #online_compiler #online_judge #online_judges #onlinejudge #onlinejudge_solution

Judge0 is a free, open-source tool that safely runs code written in over 90 languages online. It's fast, scalable, and sandboxed, making it a fit for AI agents, coding platforms, e-learning, and job tests. Use its simple API or Python SDK to execute code easily—self-host it or try the cloud. This helps you build apps quickly without writing your own sandboxing, saving time and ensuring secure code testing.
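
As a minimal sketch, here is how a submission might look against a self-hosted Judge0 instance over its REST API with `requests` (assumptions: the default docker-compose port 2358 and language_id 71 for Python 3—query GET /languages on your instance to confirm the IDs):

```python
import requests

JUDGE0_URL = "http://localhost:2358"  # assumed self-hosted instance on the default port

# Submit a snippet and wait synchronously for the verdict. Some instances
# disable wait=true; in that case, poll GET /submissions/{token} instead.
resp = requests.post(
    f"{JUDGE0_URL}/submissions",
    params={"base64_encoded": "false", "wait": "true"},
    json={
        "source_code": "print(sum(range(10)))",
        "language_id": 71,  # Python 3 in the default language list
        "stdin": "",
    },
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

print(result["status"]["description"])  # e.g. "Accepted"
print(result["stdout"])                 # "45\n"
```

For production use, the asynchronous flow (submit, then poll the returned token) is the safer default, since synchronous waiting can be disabled on an instance.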

https://github.com/judge0/judge0
#python

Grok-1 is a powerful open-source AI model with 314 billion parameters that you can download and run on your own computer. To use it, download the model weights, install required software packages, and run a simple Python script to test it. The model uses a Mixture of Experts architecture with 64 layers and can process up to 8,192 tokens of text at once. The main benefit is that you get access to a large, capable language model under an open Apache 2.0 license, allowing you to experiment with advanced AI technology locally. However, you'll need a powerful GPU with substantial memory to run it effectively.

https://github.com/xai-org/grok-1
#typescript #agent #agents #ai #assistant #assistant_chat_bots #generative_ui #js #react #reactjs #ui #ui_components

Tambo AI is a free React SDK that lets AI generate and control your app's UI from natural-language chat, like showing charts or updating notes without clicks. Register components with simple Zod schemas, wrap your app in TamboProvider, and use hooks for streaming chats. Instead of manual wiring, you get MCP tool support, self-hosting, and starter templates. You save hours prototyping adaptive apps that fit every user—newbies see the basics, pros get advanced views—cutting support needs and boosting speed.

https://github.com/tambo-ai/tambo
#python

The Compound Marketplace provides a Claude Code plugin that transforms your development workflow through a cycle of planning, working, reviewing, and documenting learnings. By spending 80% of your effort on thorough planning and code review and only 20% on execution, you build knowledge that makes each subsequent task easier. Commands map onto this cycle: `/workflowswork` executes plans with task tracking, and `/workflowscompound` documents patterns for reuse. This approach prevents technical debt from accumulating, keeping your codebase maintainable and future changes straightforward.

https://github.com/EveryInc/compound-engineering-plugin
#objective_c #3rd_party_mouse #invert_scrolling #mac_mouse #mouse #mouse_events #mousewheel #remap #remapping #scroll #scrolling #smooth_scrolling #symbolic_hotkeys #tools #utility

Mac Mouse Fix enhances your regular mouse on Mac with smooth scrolling, natural trackpad gestures like Mission Control or Spaces switching, and customizable buttons for keyboard shortcuts—even Apple keys for volume or brightness. Download free from the website (version 3 for macOS 11+; version 2 stays free forever). It boosts productivity by making any $10 mouse feel better than an Apple Trackpad, saving time on navigation and controls.

https://github.com/noah-nuebling/mac-mouse-fix
#cplusplus

FlashMLA is DeepSeek's optimized attention library that makes AI models run faster and use less memory. It works with advanced NVIDIA GPUs to speed up how language models process information, achieving up to 660 trillion floating-point operations per second. The library supports both dense and sparse attention modes, meaning it can focus on important tokens while skipping less relevant ones, reducing computational waste. For you, this means faster AI responses, lower costs for running large language models, and better performance on tasks like chatbots and code generation. The technology is open-source and integrates with popular AI frameworks like PyTorch and Hugging Face, making it accessible for developers building next-generation AI applications.

https://github.com/deepseek-ai/FlashMLA