Meta has shared their Audiocraft neural network: it can create music from a text description or a sample audio clip 🎶
It works impressively; expect an influx of beatmakers sampling classical music.
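For reference, here is a minimal sketch of what text-to-music generation with Audiocraft's MusicGen looks like; the checkpoint name, generation parameters, and prompt are illustrative assumptions and the exact API can vary between releases:

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint (the smallest one here, as an example).
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # length of each clip in seconds

# One clip is generated per text prompt.
descriptions = ["lo-fi beat built around a classical piano sample"]
wav = model.generate(descriptions)

# Save each generated waveform as a loudness-normalized WAV file.
for idx, one_wav in enumerate(wav):
    audio_write(f"sample_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")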
#Meta ➡️ #Audio ➡️ #NeuralNetwork
The new AudioLDM 2 neural network generates arbitrary audio: sound effects, speech, and even music in different styles.
The model is built on a universal representation of sound, which enables large-scale self-supervised pre-training of the underlying latent diffusion model without audio annotations and combines the advantages of autoregressive and latent diffusion models.
One more thing: it can also generate sound from images.
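A minimal sketch of text-to-audio generation with AudioLDM 2 through the Hugging Face diffusers pipeline; the model id, prompt, and parameter values are assumptions based on the public release and may differ:

import torch
import scipy.io.wavfile
from diffusers import AudioLDM2Pipeline

# Load the released AudioLDM 2 checkpoint and move it to the GPU.
pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a dog barking in the distance while rain falls"
audio = pipe(prompt, num_inference_steps=200, audio_length_in_s=10.0).audios[0]

# The pipeline outputs a waveform at 16 kHz.
scipy.io.wavfile.write("output.wav", rate=16000, data=audio)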
Source code repo.
#service #useful #audio #aigeneration