🚀 New Release: v4.8.0
DC health probes (#47).
- Periodic TCP handshake probes to all 5 Telegram DCs, exposed as Prometheus histograms (teleproxy_dc_latency_seconds), failure counters, and last-latency gauges
- Disabled by default. Enable with --dc-probe-interval 30 (CLI), dc_probe_interval = 30 (TOML), or DC_PROBE_INTERVAL=30 (Docker env)
- Probes run in master process only with non-blocking poll for sub-millisecond accuracy
- Text stats include per-DC latency, average, count, and failure fields
Release notes | GitHub
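A minimal config sketch for enabling the probes, assuming the TOML option name given in the notes above (interval in seconds; leaving the key out keeps probes disabled):

```toml
# Hypothetical config.toml fragment — option name taken from the release notes.
# Probe all DCs every 30 seconds; omit the key to keep probes disabled.
dc_probe_interval = 30
```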
💬 Comment on #11: failed: auth error
A build cleanup script briefly removed platform manifests from the registry. It's been fixed and a new image is published.
Your container runtime cached the old manifest index. Force a fresh pull:
```
docker pull ghcr.io/teleproxy/teleproxy:latest
```
If you're on a system like Portainer or Watchtower that auto-updates, restart the update cycle so it re-fetches the index.
View comment
💬 Comment on #49: Manifest unknown error while pulling the image.
@ant-222 apologies for stealing your attention!
View comment
🚀 New Release: v4.9.0
PROXY protocol v1/v2 listener support.
- Enable with --proxy-protocol (CLI), proxy_protocol = true (TOML), or PROXY_PROTOCOL=true (Docker env)
- Auto-detects v1 (text) and v2 (binary) headers, extracts the real client IP from the load balancer
- IP ACLs re-checked against the real client IP after header parsing
- v2 LOCAL command accepted for health check probes
- New stats: proxy_protocol_enabled, proxy_protocol_connections, proxy_protocol_errors
- Prometheus metrics: teleproxy_proxy_protocol_connections_total, teleproxy_proxy_protocol_errors_total
Other changes:
- Fix auto-generated secret not written to TOML config
- TON wallet donation option
- Per-page SEO metadata, OpenGraph tags, JSON-LD structured data, robots.txt
- Complete Russian translation (100%), expanded Farsi and Vietnamese (38%)
- Merged duplicate issue notification workflows
Release notes | GitHub
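A config sketch for the TOML variant named in the notes above (the listener then expects PROXY protocol headers from the load balancer in front of it):

```toml
# Hypothetical config.toml fragment — option name taken from the release notes.
# Accept PROXY protocol v1/v2 headers from an upstream load balancer.
proxy_protocol = true
```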
👍2
💬 Comment on #21: RPM Packages
Live at https://teleproxy.github.io/repo/. Install on EL9, EL10, AlmaLinux, Rocky, Fedora 41/42:
dnf install https://teleproxy.github.io/repo/teleproxy-release-latest.noarch.rpm
dnf install teleproxy
systemctl enable --now teleproxy
Signed with RSA 4096 / SHA-512 (RHEL 9 rpm-sequoia compatible). Verified end-to-end against v4.9.0 in Rocky 9, including upgrade-preserves-config and clean uninstall.
View comment
🚀 New Release: v4.10.0
Graceful connection draining on secret removal (#45).
- Removing a secret via SIGHUP reload no longer drops in-flight connections. The slot transitions to a draining state — new connections matching the removed secret are rejected, but existing ones keep working until they close naturally or drain_timeout_secs (default 300, 0 = infinite) elapses, at which point stragglers are force-closed.
- Re-adding a draining secret revives the same slot — counters, byte totals, and IP tracking carry over. Pinned -S CLI secrets remain immutable.
- New TOML option drain_timeout_secs (reloadable).
- New stats: secret_<lbl>_draining, secret_<lbl>_drain_age_seconds, secret_<lbl>_rejected_draining, secret_<lbl>_drain_forced.
- Slot capacity expanded to 16 active + up to 16 draining at any moment.
- Fix latent bug where the per-secret connection counter could go negative if a TLS connection closed between handshake and obfs2 init.
RPM repository (#21).
- New signed dnf repository at https://teleproxy.github.io/repo/ serving EL9, EL10, AlmaLinux, Rocky Linux, and Fedora 41/42 on x86_64 and aarch64.
- One-line install: dnf install https://teleproxy.github.io/repo/teleproxy-release-latest.noarch.rpm && dnf install teleproxy.
- Packages signed with RSA 4096 / SHA-512 (RHEL 9 rpm-sequoia compatible).
- Built automatically from the existing static linux binaries via nfpm, driven by repository_dispatch from the release workflow.
- First install generates a random secret in /etc/teleproxy/config.toml; upgrades and removals never touch a user-edited config.
Release notes | GitHub
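A config sketch for the new drain option, using the name and semantics stated in the notes above (the 120-second value is just an example):

```toml
# Hypothetical config.toml fragment — option name taken from the release notes.
# Force-close straggler connections 120 s after their secret is removed;
# 0 means wait forever. Reloadable via SIGHUP.
drain_timeout_secs = 120
```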
📋 New Issue #53: Metric teleproxy_proxy_protocol_connections_total remains 0 despite active traffic and PROXY_PROTOCOL=true
by voiprostov
View issue
💬 Comment on #54: Replace assertion crashes with graceful error handling for network input
@kavore can you add actual tests according to the test plan? Thank you!
View comment
💬 Comment on #54: Replace assertion crashes with graceful error handling for network input
@kavore there are E2E tests which run against real Telegram test servers, see previous run at https://github.com/teleproxy/teleproxy/actions/runs/24077198308/job/70228507792
They run via GitHub actions: https://github.com/teleproxy/teleproxy/blob/main/.github/workflows/test.yml
New tests for the issue you describe should be added: they should fail on the current Teleproxy source, to ensure your PR fixes things without regressing other features. In a nutshell — the way it failed for you should be codified into a test.
While an E2E test may be challenging because the suite is poorly documented, at least add some unit tests.
View comment
🚀 New Release: v4.11.0
SOCKS5 upstream support in check command (#57), Cloudflare Spectrum docs (#55).
- teleproxy check now routes DC probes through the configured SOCKS5 proxy. New --socks5 URL CLI flag; also reads from TOML config.
- Handle buffer allocation failures gracefully instead of crashing (#58).
- Fix PROXY protocol metrics always reporting 0 in multi-worker mode (#53).
- New deployment guide: Cloudflare Spectrum.
Release notes | GitHub
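The notes say the flag is also read from the TOML config; the key name below is an assumption (mirroring the CLI flag, not confirmed by the notes), and the proxy address is a placeholder:

```toml
# Hypothetical config.toml fragment — key name assumed from the --socks5 CLI flag.
# Route `teleproxy check` DC probes through a local SOCKS5 proxy.
socks5 = "socks5://127.0.0.1:1080"
```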
👍3
💬 Comment on #59: OOM / server hangs
Not a memory leak — the default MAX_CONNECTIONS=60000 is too aggressive for a 2 GB machine.
The dominant memory consumer is kernel TCP socket buffers: each open socket allocates ~46 KB of kernel memory (tcp_rmem + tcp_wmem defaults), so 60k sockets alone need ~2.7 GB — more than your total RAM. The proxy has userspace protections (LRU eviction, per-connection buffer caps), but those can't control kernel-side allocation, which is what triggers OOM.
Lowering the default to 10,000 in the next release. For now, set MAX_CONNECTIONS=10000 in your Docker environment — that's safe for 2 GB and handles typical proxy loads. If you need more, scale up proportionally ((RAM_MB - 300) * 10 is a reasonable upper bound).
Periodic restart shouldn't be necessary with a correct connection limit. If OOM persists after lowering it, check net.ipv4.tcp_rmem / net.ipv4.tcp_wmem sysctl values.
Will also add a tuning guide to the docs.
View comment
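The sizing rule quoted in that comment can be sanity-checked with a quick shell calculation; the 300 MB headroom and the implied ~100 KB-per-connection budget come from the formula above, not from teleproxy itself:

```shell
# Sketch of the (RAM_MB - 300) * 10 upper bound from the comment above.
# The formula budgets ~100 KB of kernel+userspace memory per connection
# after reserving ~300 MB of headroom for the OS and the proxy process.
ram_mb=2048                          # the reporter's 2 GB machine
max_conn=$(( (ram_mb - 300) * 10 ))
echo "$max_conn"
```

For 2 GB this prints 17480, which is why the suggested MAX_CONNECTIONS=10000 sits comfortably under the bound.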
💬 Comment on #60: Doesn't load pics and videos by chats in direct mode
Let's keep this open for direct mode fixes
View comment