I admit I love how the Google Maps Timeline looks.
This project looks like a self-hostable clone of it:
https://youtu.be/345UmtRfIDU
https://github.com/Rundiz/personal-maps-timeline
Brave go-sync server update
Switched to an openjdk image for dynamo.Dockerfile, since the Amazon Linux image requires the SSE2 CPU instruction set, which my Raspberry Pi doesn't have.
FROM openjdk:25-bookworm

# Create working space
WORKDIR /var/dynamodb_wd

# Default port for DynamoDB Local
EXPOSE 8000

# Install DynamoDB
RUN wget -O /tmp/dynamodb_local_latest.tar.gz https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz && \
    tar xfz /tmp/dynamodb_local_latest.tar.gz && \
    rm /tmp/dynamodb_local_latest.tar.gz

# Install ARM-compatible AWS CLI
RUN apt-get update && apt-get install -y curl unzip && \
    curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip" && \
    unzip awscliv2.zip && \
    ./aws/install && \
    rm -rf awscliv2.zip aws

# Environment variables
ENV AWS_ACCESS_KEY_ID=GOSYNC
ENV AWS_SECRET_ACCESS_KEY=GOSYNC
ARG AWS_ENDPOINT=http://localhost:8000
ARG AWS_REGION=us-west-2
ARG TABLE_NAME=client-entity-dev

# Seed schema: start DynamoDB Local in the background, create the table, then stop it
COPY schema/dynamodb/ .
RUN mkdir -p /db && \
    java -jar DynamoDBLocal.jar -sharedDb -dbPath /db & \
    DYNAMO_PID=$! && \
    sleep 15 && \
    aws dynamodb create-table --cli-input-json file://table.json \
        --endpoint-url http://localhost:8000 --region us-west-2 && \
    aws dynamodb update-time-to-live --table-name client-entity-dev \
        --time-to-live-specification "Enabled=true, AttributeName=ExpirationTime" \
        --endpoint-url http://localhost:8000 && \
    kill $DYNAMO_PID

CMD ["java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/db", "-port", "8000"]
Build it with:
docker build -f dynamo.Dockerfile -t my-dynamo-image .
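To sanity-check that the schema got seeded, a quick smoke test; this just reuses the endpoint, region, and dummy credentials already baked into the Dockerfile, and the container name is arbitrary:
docker run --rm -d -p 8000:8000 --name dynamo-smoke my-dynamo-image
AWS_ACCESS_KEY_ID=GOSYNC AWS_SECRET_ACCESS_KEY=GOSYNC \
  aws dynamodb list-tables --endpoint-url http://localhost:8000 --region us-west-2
docker stop dynamo-smoke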
Make the Dynamo database shared and persistent in docker compose:
version: '3'

networks:
  sync:
    driver: bridge

services:
  web:
    build:
      context: .
      target: artifact
      args:
        VERSION: "${VERSION}"
        COMMIT: "${COMMIT}"
        BUILD_TIME: "${BUILD_TIME}"
    ports:
      - "8295:8295"
    depends_on:
      - dynamo-local
      - redis
    networks:
      - sync
    environment:
      - PPROF_ENABLED=true
      - SENTRY_DSN
      - ENV=local
      - DEBUG=1
      - AWS_ACCESS_KEY_ID=#
      - AWS_SECRET_ACCESS_KEY=#
      - AWS_REGION=us-west-2
      - AWS_ENDPOINT=http://dynamo-local:8000
      - TABLE_NAME=client-entity-dev
      - REDIS_URL=redis:6379

  dynamo-local:
    image: my-dynamo-image
    command: java -jar DynamoDBLocal.jar -sharedDb -dbPath /home/dynamodblocal/data/
    volumes:
      - ./dynamodb_data:/home/dynamodblocal/data
    ports:
      - "8000:8000"
    networks:
      - sync

  redis:
    image: public.ecr.aws/ubuntu/redis:latest
    ports:
      - "6379:6379"
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    networks:
      - sync

volumes:
  dynamodb_data:
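Then bring the stack up as usual (older installs may need the docker-compose spelling):
docker compose up -d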
Also, the proper adb command for adding the sync URL on Android is:
echo -e "_\n--sync-url=http://192.168.1.24:8295/v2" | tee /data/local/tmp/chrome-command-line
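If you're pushing it from a PC instead of a local shell on the device, the same line wraps in adb shell (same flag-file path; the IP is my LAN example as above):
adb shell "echo -e '_\n--sync-url=http://192.168.1.24:8295/v2' > /data/local/tmp/chrome-command-line"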
You can even add it locally on-device with Shizuku and aShell, without root.
The blog guide will be updated and shared shortly.
#brave
## Tailscale
Tailscale is a secure, zero-config, modern site-to-site mesh network built on WireGuard. It lets you create a private, end-to-end encrypted network between your devices. Devices connect to each other directly using NAT traversal (STUN + hole punching) or via encrypted relays (DERP).
## Funnel
Tailscale Funnel is a feature that allows you to expose a local service (like a web app running on your Raspberry Pi) to the public internet via a Tailscale-assigned HTTPS URL (e.g., https://your-device-name.ts.net). It's ideal for sharing services without configuring port forwarding or exposing your whole network.
Funnel must be explicitly enabled.
It supports HTTPS automatically with TLS handled by Tailscale.
## tsnet.Server
tsnet.Server is a Go library provided by Tailscale that lets you embed Tailscale networking directly into your Go programs—no need to run the tailscaled daemon separately.
Key features:
- Acts as a lightweight embedded Tailscale node.
- You can assign it a custom Hostname.
- Call srv.ListenFunnel("tcp", ":443") on it to publicly expose services.
- Supports a custom state directory (Dir) and authentication via TS_AUTHKEY.
It’s perfect for exposing microservices securely and publicly with minimal setup.
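A minimal sketch of the moving parts, assuming a valid TS_AUTHKEY in the environment and a hostname of your choosing (here "hello"):
package main

import (
	"log"
	"net/http"

	"tailscale.com/tsnet"
)

func main() {
	// Embedded node: joins the tailnet as its own device named "hello".
	srv := &tsnet.Server{Hostname: "hello"} // AuthKey falls back to the TS_AUTHKEY env var
	defer srv.Close()

	// Funnel listener on :443; Tailscale terminates TLS for us.
	ln, err := srv.ListenFunnel("tcp", ":443")
	if err != nil {
		log.Fatal(err)
	}

	log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello from tsnet\n"))
	})))
}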
## The problem
By default, Tailscale allows Funnels only on three ports: 443, 80, and 8080. You can only bind one of the allowed ports per instance, and only one Funnel per port per tailnet node is allowed. And since they would all share the node's hostname anyway, multiple ports don't really give you separate services.
## The easy way
Run a single reverse proxy internally.
It listens for incoming HTTP(S) requests and forwards them to the correct internal service based on the path.
e.g.
https://myproxy.ts.net/notepad → localhost:8081
https://myproxy.ts.net/webdav → localhost:8082
https://myproxy.ts.net/vault → localhost:8083
Creating funnels on subpaths is easy using the tailscale CLI, e.g.:
sudo tailscale funnel --bg --set-path /radicale http://localhost:5232
The --bg flag runs the funnel in the background and starts it on reboot.
The --set-path flag is self-explanatory: it serves the proxied service under that subpath.
5232 is the port your service exposes on the host.
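To inspect what's currently exposed, recent Tailscale versions have (older ones show the same info under tailscale serve status):
tailscale funnel status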
## The subpath problem
Many applications are designed to run at the root of the domain.
Additionally, apps may use relative URLs that break when accessed under a subpath. For example, links within the app may point to "/file" and expect the base domain (https://mydevice.ts.net/) instead of the subpath (https://mydevice.ts.net/notepad/), causing them to fail.
That being said, some popular apps support being hosted under a subpath.
For example, Nextcloud offers the 'overwritewebroot' setting in config.php, and PhotoPrism the PHOTOPRISM_SITE_URL environment variable.
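For illustration, hedged examples of those two knobs; the paths and URL are placeholders that should match your funnel subpath:
// Nextcloud config.php
'overwritewebroot' => '/nextcloud',

# PhotoPrism environment
PHOTOPRISM_SITE_URL: "https://mydevice.ts.net/photoprism/"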
## Solution
Spin up separate virtual devices via tsnet.Server, each with a unique Hostname, exposed via Funnel.
Example:
tsnet.Server{
	Hostname: "notes",
	Dir:      "/state/notes",
}
With srv.ListenFunnel("tcp", ":443")
And
tsnet.Server{
	Hostname: "vault",
	Dir:      "/state/vault",
}
With srv.ListenFunnel("tcp", ":443")
This way, you'll get:
https://vault.yourtail.ts.net
https://notes.yourtail.ts.net
Each with its own independent Funnel!
## Prerequisites
- Go
- Python
- Git
- Systemd
- Tailscale
## Automation
We will use a Python script that automates the deployment of tsnet-based services for Tailscale Funnel:
1. Configuration: It reads services.yml to get the service names, hostnames, and ports.
2. Go Binary Creation: It generates a Go binary (app) for each service that listens on a Tailscale Funnel port and proxies traffic to the service.
3. Systemd Service: It sets up systemd services to manage the Go binaries, ensuring they start on boot and restart on failure.
4. Environment Handling: It uses the .env file to pass the TS_AUTHKEY to the Go binaries.
5. Automated Deployment: The script automates creating directories, fixing permissions, building the application, and installing systemd services.
## Systemd.py
---
import yaml
import os
import subprocess
from pathlib import Path

# Read configuration
with open("services.yml") as f:
    config = yaml.safe_load(f)

# Get original user
USER = os.getenv("SUDO_USER") or os.getenv("USER")

# Validate .env
env_path = Path(".env").absolute()
if not env_path.exists():
    raise SystemExit("❌ .env file not found at current directory")

# Parse .env
env_vars = {}
with open(env_path) as f:
    for line in f:
        if "=" in line and not line.strip().startswith("#"):
            key, val = line.split("=", 1)
            env_vars[key.strip()] = val.strip()

if "TS_AUTHKEY" not in env_vars:
    raise SystemExit("❌ TS_AUTHKEY missing in .env")

# Systemd service setup
SYSTEMD_DIR = Path("/etc/systemd/system")

# run_cmd helper with cwd support
def run_cmd(cmd, cwd=None):
    result = subprocess.run(
        cmd,
        cwd=cwd,
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        raise RuntimeError(f"Command failed: {result.stderr}")
    return result

for name, info in config["services"].items():
    service_dir = Path(name).absolute()
    hostname = info["hostname"]
    port = info["port"]
    print(f"\n🚀 Processing {name}")

    # Create directories
    (service_dir / "state").mkdir(parents=True, exist_ok=True)

    # Set ownership
    try:
        run_cmd(["chown", "-R", f"{USER}:{USER}", str(service_dir)])
    except Exception as e:
        print(f"⚠️ Permission fix error: {e}")

    # main.go creation
    main_go = service_dir / "main.go"
    main_go.write_text(f'''package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"os"

	"tailscale.com/tsnet"
)

func main() {{
	srv := &tsnet.Server{{
		Hostname: "{hostname}",
		AuthKey:  os.Getenv("TS_AUTHKEY"),
		Dir:      "./state",
	}}
	defer srv.Close()

	ln, err := srv.ListenFunnel("tcp", ":443")
	if err != nil {{
		log.Fatal(err)
	}}

	proxy := &httputil.ReverseProxy{{
		Director: func(r *http.Request) {{
			r.URL.Host = "localhost:{port}"
			r.URL.Scheme = "http"
		}},
	}}

	log.Println("Starting reverse proxy for {name}...")
	log.Fatal(http.Serve(ln, proxy))
}}
''')

    # Build binary
    try:
        print("🔨 Building binary...")
        run_cmd(["go", "mod", "init", f"tsnet/{name}"], cwd=service_dir)
        run_cmd(["go", "mod", "tidy"], cwd=service_dir)
        run_cmd(["go", "get", "tailscale.com/tsnet"], cwd=service_dir)
        run_cmd(["go", "build", "-o", "app"], cwd=service_dir)
        print("✅ Build successful")
    except Exception as e:
        print(f"❌ Build failed: {str(e)}")
        continue

    # Create systemd service
    service_file = service_dir / f"{name}-funnel.service"
    service_content = f"""
[Unit]
Description=Tailscale Funnel Proxy for {hostname}
After=network.target

[Service]
EnvironmentFile={env_path}
WorkingDirectory={service_dir}
ExecStart={service_dir}/app
Restart=always
User={USER}
Group={USER}

[Install]
WantedBy=multi-user.target
"""
    service_file.write_text(service_content.strip())

    # Install service
    try:
        print(f"🔧 Installing {name} service...")
        run_cmd(["mv", str(service_file), str(SYSTEMD_DIR)])
        run_cmd(["systemctl", "daemon-reload"])
        run_cmd(["systemctl", "enable", f"{name}-funnel.service"])
        run_cmd(["systemctl", "start", f"{name}-funnel.service"])
        print(f"✅ {name} service installed")
    except Exception as e:
        print(f"❌ Service installation failed: {str(e)}")

print("\n🎉 All services deployed!")
---
1. Read Configuration from YAML
The script reads the configuration file services.yml to extract information about each service that will be deployed. Each service must have a hostname and port defined in the configuration.
`with open("services.yml") as f:
    config = yaml.safe_load(f)`
File structure
## services.yml
services:
  photoprism:
    port: 2342
    hostname: photoprism
  caddydav:
    port: 8043
    hostname: caddydav
  vault:
    port: 8066
    hostname: vault
  nginxdav:
    port: 32080
    hostname: nginxdav
  nextcloud:
    port: 8080
    hostname: nextcloud
  wallabag:
    port: 8106
    hostname: wallabag
  radicale:
    port: 5233
    hostname: radicale
  baikal:
    port: 8456
    hostname: baikal
2. Validate .env File
Validates that the .env file exists in the current directory. The .env file should contain a TS_AUTHKEY key (Tailscale authentication key).
To generate an auth key:
- Open the Keys page of the admin console: https://login.tailscale.com/admin/settings/keys
- Select Generate auth key.
- Fill out the form.
- Select Pre-approved.
- Select Generate key.
- Copy the key to .env.
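The resulting .env is a single line; the key below is a placeholder, not a real one:
TS_AUTHKEY=tskey-auth-xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx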
3. Systemd Setup
The script defines a path to the systemd directory (/etc/systemd/system), where service files will be installed.
SYSTEMD_DIR = Path("/etc/systemd/system")
The generated unit file is in the format:
## service-funnel.service
[Unit]
Description=Tailscale Funnel Proxy for wallabag
After=network.target

[Service]
EnvironmentFile=/run/media/ippo/TOSHIBA/tsnet-funnel/stack/.env
WorkingDirectory=/run/media/ippo/TOSHIBA/tsnet-funnel/stack/wallabag
ExecStart=/run/media/ippo/TOSHIBA/tsnet-funnel/stack/wallabag/app
Restart=always
User=ippo
Group=ippo

[Install]
WantedBy=multi-user.target
4. Helper Function: run_cmd
The run_cmd function is used to run shell commands (subprocess.run). It includes a cwd argument to specify the working directory for commands. If a command fails, it raises an error with the command's stderr.
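For reference, here is the helper as defined in the script:
def run_cmd(cmd, cwd=None):
    result = subprocess.run(
        cmd,
        cwd=cwd,
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        raise RuntimeError(f"Command failed: {result.stderr}")
    return result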
5. Iterate Over Each Service
The script loops through each service defined in services.yml, processing each one by:
- Creating Directories: It creates a state directory for storing service state.
- Fixing Permissions: It attempts to set the ownership of the service directory to the current user.
- Generating main.go: It writes a Go file (main.go) that sets up a tsnet.Server to listen on Tailscale's Funnel port (:443) and reverse proxy traffic to the specified service on localhost:{port}.
- Building the Go Binary: It uses go commands to build a binary for the service.
main_go = service_dir / "main.go"
main_go.write_text(f'''package main
// Go code to set up tsnet reverse proxy
''')
The generated main.go is in the format:
## main.go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"os"

	"tailscale.com/tsnet"
)

func main() {
	srv := &tsnet.Server{
		Hostname: "vault",
		AuthKey:  os.Getenv("TS_AUTHKEY"),
		Dir:      "./state",
	}
	defer srv.Close()

	ln, err := srv.ListenFunnel("tcp", ":443")
	if err != nil {
		log.Fatal(err)
	}

	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			r.URL.Host = "localhost:8066"
			r.URL.Scheme = "http"
		},
	}

	log.Println("Starting reverse proxy for vault...")
	log.Fatal(http.Serve(ln, proxy))
}
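One caveat, not handled in the script above: a Director only rewrites r.URL, so the backend still receives the original public Host header. If a backend validates Host (virtual hosting, CSRF origin checks), a hedged tweak is to rewrite it too:
Director: func(r *http.Request) {
	r.URL.Scheme = "http"
	r.URL.Host = "localhost:8066"
	r.Host = "localhost:8066" // make the upstream see its own host
},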
6. Build Go Binary
The script runs several go commands to initialize the module, fetch dependencies (including Tailscale), and build the Go binary (app) for each service.
run_cmd(["go", "mod", "init", f"tsnet/{name}"], cwd=service_dir)
run_cmd(["go", "build", "-o", "app"], cwd=service_dir)
7. Create and Install systemd Service
For each service, a systemd service file is generated. This service file:
- Sets the environment file to .env.
- Defines the service to execute the Go binary (app).
- Configures the service to restart on failure.
- Gets installed by moving the file into the systemd directory, then enabled and started with systemctl.
service_file = service_dir / f"{name}-funnel.service"
service_content = f"""
[Unit]
Description=Tailscale Funnel Proxy for {hostname}
After=network.target

[Service]
EnvironmentFile={env_path}
WorkingDirectory={service_dir}
ExecStart={service_dir}/app
Restart=always
User={USER}
Group={USER}

[Install]
WantedBy=multi-user.target
"""
The service file is written to disk, moved to /etc/systemd/system, and then installed using systemctl.
run_cmd(["mv", str(service_file), str(SYSTEMD_DIR)])
run_cmd(["systemctl", "daemon-reload"])
run_cmd(["systemctl", "enable", f"{name}-funnel.service"])
run_cmd(["systemctl", "start", f"{name}-funnel.service"])
8. Final Output
Once all services are processed and installed, a success message is printed.
print("\n🎉 All services deployed!")
---
## Summary
You need 3 files:
- The systemd.py script
- The services.yml
- The .env
Run the script:
sudo python3 systemd.py
That's all. Easy.
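To check that a given service actually came up, the unit names follow the {name}-funnel.service pattern from the script, so e.g. for the vault entry:
sudo systemctl status vault-funnel.service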
Your funnels are live at their subdomains:
hostname.tailscale_host.ts.net
You can check the subdomain of each service in the admin panel on your Tailscale account page, or from the CLI.
To read the full JSON:
tailscale status --json
And to get the subdomain of a specific host, e.g. the hostname "vault" from your services.yml:
tailscale status --json | jq -r --arg hostname "vault" '.Peer[] | select(.HostName == $hostname) | .DNSName'
Or get all of them:
grep 'hostname:' services.yml | awk '{print $2}' | xargs -I{} sh -c 'echo -n "{}: "; tailscale status --json | jq -r --arg hostname "{}" ".Peer[] | select(.HostName == \$hostname) | .DNSName"'
Switched to
arm64v8/amazoncorretto:11
Fixed 500 errors
Added persisted db
https://ippocratis.github.io/brave/
SRT offset
pip install pysrt
python3 -c '
import os
import pysrt

offset_ms = -18000  # shift back 18 seconds (18000 ms)

for filename in os.listdir("."):
    if filename.endswith(".srt"):
        subs = pysrt.open(filename)
        subs.shift(milliseconds=offset_ms)
        subs.save("shifted_" + filename)
'
One click remote Dir panic wipe
ssh-keygen -t rsa -b 4096
ssh-copy-id ippo@192.168.1.24
mkdir -p ~/.shortcuts
Save the script below as ~/.shortcuts/delete_mydir.sh (~/.shortcuts is where Termux:Widget looks for one-tap scripts):
#!/data/data/com.termux/files/usr/bin/bash
# Run the remote wipe command in the background
nohup ssh ippo@192.168.1.24 'find "/run/media/ippo/TOSHIBA/mydir" -type f -exec shred -uz {} \; && rm -rf "/run/media/ippo/TOSHIBA/mydir"' > /dev/null 2>&1 &
# Notify once it has started
termux-toast "✅ Deletion started in the background"
chmod +x ~/.shortcuts/delete_mydir.sh
termux-toast needs the Termux:API add-on:
pkg install termux-api
Local LAN HTTPS
Caddyfile
https://192.168.1.24:8443 {
    reverse_proxy 127.0.0.1:32080
    tls internal
}
You may need to install the Caddy root cert on the client device:
/var/lib/caddy/pki/authorities/local/root.crt
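For example, on a Debian/Ubuntu client (paths and tooling vary per OS; this is one common way, not the only one):
sudo cp root.crt /usr/local/share/ca-certificates/caddy-local.crt
sudo update-ca-certificates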
Wish there was a way to terminate TLS with mTLS on Funnels too.
Neat setup though
It forces clients in the tailnet to present a client cert:
https://vpetersson.com/2024/05/29/tailscale-and-mutual-tls.html