Mira
this is my room at night. when i say lemme take you to the dance floor, i mean it literally
my room, for reference
did you know when NADH delivers electrons to Complex I, Complex I pumps 4 protons? i didn't either 🚬
quiz support is integrated now
Forwarded from 𝕶𝖆𝖑 .𝕯. 𝕶𝖗 (Ϩⲉⲛⲃⲟⲛ Ⲍⲁⲕ𐌵ꞅⲁ)🙃®
Mira
NADH ❌❌ NERD AHH✅✅
Nerdy And Dyslexic Hacker aka pixie
i don't even relate to them, but these lyrics making a nigga feel it. i should actually tag them. #holeheart, cuz why not
just learned being short is a genuine red flag. thank God i am not short and fat 🎧
Mira
just learned being short is a genuine red flag. thank God i am not short and fat 🎧
whatchu guys starin at. i ain't the one who's short
remembered i had free will and was experimenting with this last night. it's an audio visualizer purely made with web audio API and raw canvas API for the visuals.
how it works
it has two main files, AudioController.ts and VisualizerEngine.ts. the audio controller relies on web audio API and has a signal path: AudioBufferSourceNode (Player) --> AnalyserNode (FFT) --> GainNode (Volume) --> AudioContext.destination (Speakers).
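a rough sketch of what that controller chain could look like — class and method names here are placeholders i made up, only the web audio API calls themselves are real:

```ts
// sketch of the signal path above — class/method names are made up,
// only the web audio API calls are real
class AudioController {
  private ctx = new AudioContext();
  readonly analyser: AnalyserNode;
  private gain: GainNode;
  private source: AudioBufferSourceNode | null = null;

  constructor() {
    this.analyser = this.ctx.createAnalyser();
    this.analyser.fftSize = 2048; // 1024 frequency bins for the engine to read
    this.gain = this.ctx.createGain();
    // Player --> FFT --> Volume --> Speakers
    this.analyser.connect(this.gain);
    this.gain.connect(this.ctx.destination);
  }

  async play(url: string): Promise<void> {
    // note: browsers usually require a user gesture before audio can start
    const res = await fetch(url);
    const buffer = await this.ctx.decodeAudioData(await res.arrayBuffer());
    this.source = this.ctx.createBufferSource();
    this.source.buffer = buffer;
    this.source.connect(this.analyser); // feed the player into the analyser
    this.source.start();
  }

  setVolume(v: number): void {
    this.gain.gain.value = v;
  }
}
```

the AnalyserNode is a pure pass-through, so tapping the signal before the gain node means the visuals react to the raw track no matter where the volume sits.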
it uses getByteFrequencyData to extract the spectrum and getByteTimeDomainData to extract the waveform (oscilloscope data). the engine manually iterates through specific indices of the frequency array to calculate the average energy for bass, mids, and treble. this normalized data is then used to drive the visual parameters.
as i said, the visualizer engine depends on canvas 2D API. it runs on a dedicated requestAnimationFrame loop to keep a consistent 60fps (or 144fps on high-refresh monitors).
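and roughly how the engine side could look — the bin ranges for bass/mids/treble and all names here are illustrative, not pulled from my actual file:

```ts
// sketch of the engine side — bin ranges and names are illustrative
class VisualizerEngine {
  private freqData: Uint8Array;
  private waveData: Uint8Array;

  constructor(
    private analyser: AnalyserNode,
    private canvasCtx: CanvasRenderingContext2D,
  ) {
    this.freqData = new Uint8Array(analyser.frequencyBinCount);
    this.waveData = new Uint8Array(analyser.fftSize);
  }

  // average a slice of frequency bins and normalize to 0..1
  private band(from: number, to: number): number {
    let sum = 0;
    for (let i = from; i < to; i++) sum += this.freqData[i];
    return sum / (to - from) / 255;
  }

  start(): void {
    const frame = () => {
      this.analyser.getByteFrequencyData(this.freqData);  // spectrum
      this.analyser.getByteTimeDomainData(this.waveData); // waveform
      const bass = this.band(0, 32);
      const mids = this.band(32, 256);
      const treble = this.band(256, this.freqData.length);
      this.draw(bass, mids, treble);
      // requestAnimationFrame fires once per display refresh,
      // so this lands on 60fps or 144fps with no manual timing
      requestAnimationFrame(frame);
    };
    requestAnimationFrame(frame);
  }

  private draw(bass: number, mids: number, treble: number): void {
    // placeholder visual: a circle that pulses with bass,
    // thickens with mids, and shifts hue with treble
    const { canvas } = this.canvasCtx;
    this.canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
    this.canvasCtx.lineWidth = 1 + mids * 4;
    this.canvasCtx.strokeStyle = `hsl(${treble * 360}, 80%, 60%)`;
    this.canvasCtx.beginPath();
    this.canvasCtx.arc(canvas.width / 2, canvas.height / 2,
                       50 + bass * 100, 0, Math.PI * 2);
    this.canvasCtx.stroke();
  }
}
```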
this is like an overview of it. i didn't know the concepts before. it's kinda nice what you can build these days with a limited set of knowledge and curiosity. i can make this a native desktop app and monitor output audio from spotify. it'd be nice to put it in the background while working or smtn
finna take a walk and look at the rich guy's compound and his generator 🎧