The idea of keeping a circular buffer of desktop recordings with ffmpeg is feasible! The concept is similar to how DVRs or security cameras with a "looping" (circular-buffer) function operate.
continuously record the desktop into two files, /tmp/desktop_recording000.mp4 and /tmp/desktop_recording001.mp4, rotating at 600-second intervals, so you can always look back at the last 10-20 minutes (a remedy for dementia),
using the NVIDIA HEVC encoder, with audio grabbed from PulseAudio:
ffmpeg -f x11grab -r 25 -s $(xdpyinfo | grep dimensions | awk '{print $2}') -i :0.0 -f pulse -i default -c:v hevc_nvenc -qp:v 20 -c:a aac -b:a 96k -f segment -segment_time 600 -segment_wrap 2 -segment_format_options movflags=+faststart -reset_timestamps 1 /tmp/desktop_recording%03d.mp4
—— upd
with an HLS stream it's even better:
ffmpeg -f x11grab -r 25 -s $(xdpyinfo | grep dimensions | awk '{print $2}') -i :0.0 -f pulse -i default -c:v hevc_nvenc -qp:v 20 -c:a aac -b:a 96k -f hls -hls_time 5 -hls_list_size 240 -hls_flags delete_segments /tmp/output.m3u8
(a finite -hls_list_size plus delete_segments is what makes it an actual ring: 240 x 5 s = the last 20 minutes)
—— upd3: gpu-screen-recorder-gtk does all of this, but better
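To review the whole ring as one clip, the two wrapped segments can be concatenated oldest-first with ffmpeg's concat demuxer. A sketch (the `make_concat_list` helper name is invented here):

```
# Build an ffmpeg concat-demuxer list from the wrapped segments,
# oldest-modified first, so playback is chronological.
make_concat_list() {
    ls -tr "$@" | sed "s/^/file '/;s/\$/'/"
}

# usage (not run here):
#   make_concat_list /tmp/desktop_recording00*.mp4 > /tmp/concat.txt
#   ffmpeg -f concat -safe 0 -i /tmp/concat.txt -c copy /tmp/buffer.mp4
```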
a static black screen, 10 hours and 1 second long (around 6 MB):
ffmpeg -f lavfi -i color=c=black:s=1920x1080:r=1 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=8000 -t 10:00:01 -c:v libx264 -crf 51 -c:a aac -b:a 1k -movflags +faststart -pix_fmt yuv420p -y output4.mp4
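Quick sanity check on the numbers, in plain shell arithmetic: `-t 10:00:01` is 36001 seconds, i.e. ~36k near-identical frames at 1 fps, which is why the file stays so small:

```
# -t 10:00:01 expressed in seconds
h=10; m=0; s=1
total=$(( h * 3600 + m * 60 + s ))
echo "$total"    # 36001
```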
huge hack
Extract a filesystem from a raw disk image and convert it into an EROFS image:
guestfish --ro -i tar-out -a debian-10.img / - | mkfs.erofs --all-root --gzip --tar=- debian2.erofs
(I couldn't find anything similar on the internet)
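A quick way to sanity-check the result without mounting it: an EROFS image carries the superblock magic 0xE0F5E1E2 (stored little-endian) at byte offset 1024. A sketch (the `is_erofs` helper name is invented here):

```
# Check for the EROFS superblock magic at offset 1024,
# without needing root or kernel EROFS support.
is_erofs() {
    magic=$(od -An -tx1 -j1024 -N4 "$1" | tr -d ' \n')
    [ "$magic" = "e2e1f5e0" ]
}

# usage: is_erofs debian2.erofs && echo "looks like an EROFS image"
```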
shitty regexp to get the second interface name on your system:
(not well tested)
```
ip -c=never -o link show | grep -oP '(?<=^2:\s)\w+'
```
----
`-c=never` is a proper arg and should be there
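An alternative that skips the lookbehind entirely: let awk split on the `": "` separator (this also survives interface names with dashes, which `\w+` cuts short). A sketch against sample `ip -o link` output; the interface names here are made up:

```
# "ip -o link show" prints one "N: name: <flags> ..." line per interface;
# split on ": " and pick the line whose index field is 2.
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP'
printf '%s\n' "$sample" | awk -F': ' '$1 == 2 {print $2}'   # enp3s0
```

For real use: `ip -c=never -o link show | awk -F': ' '$1 == 2 {print $2}'`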
The improper way to get random 10.0.0.0/8 IPs (% 256, not % 255, so an octet can actually reach 255):
printf "10.%d.%d.%d\n" "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))"
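Wrapped as a tiny bash helper (the `rand10` name is invented here; `$RANDOM` is a bashism) so the output can be range-checked:

```
# Print a random address inside 10.0.0.0/8; % 256 keeps each octet in 0-255
rand10() {
    printf '10.%d.%d.%d\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
}

rand10    # e.g. 10.113.7.201
```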
try entering this text into the address bar of any web browser:
you will get an editable notepad
data:text/html, <html contenteditable>
Using Windows PowerShell, cut the first 512 bytes off every file in the C:\123\123\ folder:
$targetDir = "C:\123\123\"
$BYTES_TO_TRIM = 512
$files = dir $targetDir | where { !$_.PsIsContainer }
foreach ($file in $files) {
Write-Output "File being truncated: $($file.FullName)"
Write-Output " Original Size: $($file.Length) bytes"
Write-Output " Truncating $BYTES_TO_TRIM bytes..."
$byteEncodedContent = [System.IO.File]::ReadAllBytes($file.FullName)
$truncatedByteEncodedContent = $byteEncodedContent[$BYTES_TO_TRIM..($byteEncodedContent.Length - 1)]
Set-Content -value $truncatedByteEncodedContent -encoding byte -path "$($file.FullName)"
Write-Output " Size after truncation: $((Get-Item $file.FullName).Length) bytes"
Write-Output "Truncation done!`n"
}
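Roughly the same thing on Linux, as a sketch (the `trim_head` helper name is invented here; `tail -c +N` starts output at byte N, so `+513` drops the first 512):

```
# Strip the first N bytes off every regular file in a directory.
trim_head() {
    dir=$1; n=$2
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        tail -c +$((n + 1)) "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
}

# usage: trim_head /tmp/123 512
```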
Acting as a WHIP forwarder
This example requires port 5000 UDP to be open for inbound traffic
$ ./donut -whip-uri https://galene.pi.pe/group/whip/ -srt-uri "srt://0.0.0.0:5000?streamid=test" \
-whip-token ${bearertoken}
You can then send multiple SRT streams to be forwarded as WHIP. Examples: camera from a Mac:
ffmpeg -f avfoundation -framerate 30 -i "0" \
-pix_fmt yuv420p -c:v libx264 -b:v 1000k -g 30 -keyint_min 120 -profile:v baseline \
-preset veryfast -f mpegts "srt://${bridgeIP}:5000?streamid=me"
Or from a Raspberry Pi:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 \
-i /dev/video0 -vcodec copy -an -f mpegts "srt://${bridgeIP}:5000?streamid=picam"
Idea for dedicated stream/grabber:
https://github.com/giongto35/cloud-morph/wiki/Deep-Dive-Into-Codebase
[program:Xvfb] command=/usr/bin/Xvfb :99 : create a virtual framebuffer at display :99
[program:pulseaudio] command=pulseaudio : create virtual audio (PulseAudio); it doesn't need to bind to any port, apps can still output audio to it
[program:syncinput] command=wine syncinput.exe : listen for web messages and simulate Windows OS input events
[program:wineapp] command=wine %(ENV_appfile)s environment=DISPLAY=:99 : run the Wine app attached to virtual display :99
[program:ffmpeg] command=ffmpeg -f x11grab -i :99 -c:v libvpx -f rtp rtp://%(ENV_dockerhost)s:5004 : video encoder; reads the virtual display and exports an RTP video stream on port 5004
[program:ffmpegaudio] command=ffmpeg -f pulse -i default -f rtp rtp://%(ENV_dockerhost)s:4004 : audio encoder; reads the virtual audio and exports an RTP audio stream on port 4004
> Kindly reconsider your security:
ssh whoami.filippo.io
——
(ssh announces your public keys to the server by default; to work in insecure environments, try ssh with /dev/null as the identity file):
ssh whoami.filippo.io -i /dev/null
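To make that the default for a host you don't trust, the same thing can live in `~/.ssh/config` (the host pattern here is just an example):

```
# offer no identities to this host
Host whoami.filippo.io
    IdentityFile /dev/null
    IdentitiesOnly yes
```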