Teleproxy
Teleproxy — the fastest, most secure MTProto proxy with the highest DPI resistance in existence.
📋 New Issue #55: Can teleproxy be published through Cloudflare IPs?

by Turreterror42
View issue
💬 Comment on #54: Replace assertion crashes with graceful error handling for network input

@kavore can you add actual tests according to the test plan? Thank you!

View comment
💬 Comment on #54: Replace assertion crashes with graceful error handling for network input

@kavore there are E2E tests which run against real Telegram test servers, see previous run at https://github.com/teleproxy/teleproxy/actions/runs/24077198308/job/70228507792

They run via GitHub actions: https://github.com/teleproxy/teleproxy/blob/main/.github/workflows/test.yml

New tests covering the issue you reported should be added: they should fail on the current Teleproxy source, to ensure your PR fixes things without regressing other features. In a nutshell: the way it failed for you should be codified into a test.

While adding an E2E test may be challenging because the E2E setup is poorly documented, please at least add some unit tests.

View comment
📋 New Issue #56: Number of connections.

by master7xx
View issue
📋 New Issue #57: "teleproxy check" does not use upstream socks5 proxy

by MrKsey
View issue
🚀 New Release: v4.11.0


SOCKS5 upstream support in check command (#57), Cloudflare Spectrum docs (#55).

- teleproxy check now routes DC probes through the configured SOCKS5 proxy.
  New --socks5 URL CLI flag; also reads from the TOML config.
- Handle buffer allocation failures gracefully instead of crashing (#58).
- Fix PROXY protocol metrics always reporting 0 in multi-worker mode (#53).
- New deployment guide: Cloudflare Spectrum.
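As a rough illustration of the new flag (not the actual Teleproxy code — the accepted URL forms are whatever the CLI documents), a --socks5 value like socks5://user:pass@host:1080 can be parsed with nothing but the standard library:

```python
from urllib.parse import urlsplit

def parse_socks5_url(url: str) -> dict:
    """Parse a socks5:// URL into connection parameters.

    Illustrative sketch only; field handling mirrors common SOCKS5
    URL conventions, not a confirmed Teleproxy implementation.
    """
    parts = urlsplit(url)
    if parts.scheme != "socks5":
        raise ValueError(f"expected socks5:// URL, got {parts.scheme!r}")
    if not parts.hostname:
        raise ValueError("SOCKS5 URL must include a host")
    return {
        "host": parts.hostname,
        "port": parts.port or 1080,  # 1080 is the conventional SOCKS port
        "username": parts.username,  # None when no credentials are given
        "password": parts.password,
    }

print(parse_socks5_url("socks5://user:pass@127.0.0.1:9050"))
```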

Release notes | GitHub
📋 New Issue #59: OOM / server hangs

by koznov
View issue
💬 Comment on #59: OOM / server hangs

Not a memory leak — the default MAX_CONNECTIONS=60000 is too aggressive for a 2 GB machine.

The dominant memory consumer is kernel TCP socket buffers: each open socket allocates ~46 KB of kernel memory (tcp_rmem + tcp_wmem defaults), so 60k sockets alone need ~2.7 GB — more than your total RAM. The proxy has userspace protections (LRU eviction, per-connection buffer caps), but those can't control kernel-side allocation, which is what triggers OOM.

Lowering the default to 10,000 in the next release. For now, set MAX_CONNECTIONS=10000 in your Docker environment — that's safe for 2 GB and handles typical proxy loads. If you need more, scale up proportionally; (RAM_MB - 300) * 10 is a reasonable upper bound.

Periodic restart shouldn't be necessary with a correct connection limit. If OOM persists after lowering it, check net.ipv4.tcp_rmem / net.ipv4.tcp_wmem sysctl values.

Will also add a tuning guide to the docs.
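The sizing rule above can be written out as a tiny helper (my own back-of-the-envelope sketch, not project code):

```python
def max_connections_for(ram_mb: int) -> int:
    """Reasonable upper bound for MAX_CONNECTIONS on a machine with
    ram_mb of RAM, per the rule above: reserve ~300 MB for the OS and
    the proxy process itself, then allow ~10 connections per remaining
    MB (~100 KB per connection, covering the ~46 KB of kernel socket
    buffers plus userspace overhead)."""
    return max((ram_mb - 300) * 10, 0)

print(max_connections_for(2048))  # 17480 for a 2 GB machine
```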

View comment
📋 New Issue #60: Don't load pics and videos by chats

by NickVStepin
View issue
💬 Comment on #60: Doesn't load pics and videos by chats in direct mode

Let's keep this open for direct mode fixes

View comment
📋 New Issue #61: Download proxyConfig via proxy

by metro2030
View issue
📋 New Issue #62: [Feature request] Upgrade "Custom TLS Backend"

by PentiumB
View issue
📋 New Issue #63: question about connections

by PentiumB
View issue
📋 New Issue #65: You cannot use a local proxy through a Docker container.

by egorovna26
View issue
📋 New Issue #66: Incorrect port in link

by dushatv
View issue
💬 Comment on #64: Adding an improved version of the dashboard

Thanks @PentiumB — merged into main in 4caa7f7 with your commit f084765 recorded as a merge parent in c913f7a so the attribution is preserved in git history.

Apologies for the PR showing as closed rather than merged — that's a mishap on my side with how the commit landed, not a rejection. The dashboard is shipped as-is in dashboards/teleproxy-instance.json.

Small follow-ups I'll handle in a separate commit: change uid from teleproxy to teleproxy-instance so it doesn't collide with the existing dashboard on import, and strip the hardcoded current.value / datasource uid so the file is portable. Nothing you need to do.
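The planned cleanup could look roughly like this — a sketch only, with field names assumed from a standard Grafana dashboard export rather than taken from the actual file:

```python
import json

def make_portable(dashboard: dict) -> dict:
    """Sketch of the follow-up for dashboards/teleproxy-instance.json:
    rename the uid so it doesn't collide with the existing dashboard on
    import, and strip values that are specific to the exporting Grafana
    instance. Keys assume a standard Grafana export layout."""
    dashboard["uid"] = "teleproxy-instance"
    # Drop the remembered selection of each template variable.
    for var in dashboard.get("templating", {}).get("list", []):
        var.pop("current", None)
    # Remove hardcoded datasource uids so Grafana falls back to the default.
    for panel in dashboard.get("panels", []):
        panel.pop("datasource", None)
    return dashboard
```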

View comment
📋 New Issue #67: Incorrect key

by IojkinKot
View issue
📋 New Issue #69: RFC: WebSocket transport (Type3) as complementary deployment mode — coexist with existing nginx, front via free Cloudflare Workers

by toxeh
View issue
📋 New Issue #70: Metrics bug: bytes_sent/received counters appear swapped; unique_ips always 0

by qcode-star
View issue
🚀 New Release: v4.12.0


Bug fixes for Docker deployments and per-secret metrics.

- Fix teleproxy_secret_unique_ips always reporting 0 (#70). The counter was
only incremented when a secret had max_ips or rate_limit configured;
plain secrets are now tracked too.
- Clarify teleproxy_secret_bytes_received_total / _sent_total HELP text:
  "received" counts uploads (client → proxy), "sent" counts downloads
  (proxy → client). The counters are direct-mode only; relay-mode
  aggregation is a separate gap, tracked for a follow-up.
- Change teleproxy_secret_unique_ips TYPE from gauge to counter to
match its actual cumulative behaviour.
- Fix Docker SECRET=hex:label,hex:label writing the entire string as the
TOML key instead of splitting label off (#67). The numbered-secret path
(SECRET_LABEL_N) was already correct.
- Add EXTERNAL_PORT env var for advertising a different port in the
connection link than the internal listen port (#66) — needed when Docker
maps -p 4443:443. Also added a matching external_port TOML option,
consumed by the /link HTML page and teleproxy link URL builder.
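The fixed SECRET splitting behaviour described above can be sketched like this (illustrative only — the real option names live in the Teleproxy Docker entrypoint, and the label fallback for a bare hex is my own assumption):

```python
def parse_secret_env(value: str) -> dict:
    """Parse SECRET=hex:label,hex:label into {label: hex}.

    Sketch of the behaviour described in the v4.12.0 notes: the label
    is split off instead of the whole string becoming the TOML key.
    A bare hex with no label gets a generated name (an assumption here).
    """
    secrets = {}
    for i, entry in enumerate(value.split(",")):
        entry = entry.strip()
        if not entry:
            continue
        hex_part, sep, label = entry.partition(":")
        secrets[label if sep else f"secret_{i}"] = hex_part
    return secrets

print(parse_secret_env("dd00112233:home,dd44556677:work"))
# {'home': 'dd00112233', 'work': 'dd44556677'}
```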

Release notes | GitHub
📋 New Issue #71: IP tracking table full for secret 0

by koznov
View issue