Python | Machine Learning | Coding | R
List of our channels:
https://t.me/addlist/8_rRW2scgfRhOTc0

Discover powerful insights with Python, Machine Learning, Coding, and R: your essential toolkit for data-driven solutions and smart algorithms.

Create an Audiobook in Python
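The code for this post is not included in the text above. Here is a minimal sketch of one common approach, using the gTTS package (`pip install gTTS`; it calls Google's text-to-speech service, so it needs an internet connection). "book.txt" and "audiobook.mp3" are placeholder file names.

```python
# Minimal sketch: turn a plain-text file into an audiobook MP3 with gTTS.
from gtts import gTTS

# Read the source text; "book.txt" is a placeholder for your book file.
with open("book.txt", encoding="utf-8") as f:
    text = f.read()

# Synthesize speech and write it to an MP3 file.
gTTS(text=text, lang="en").save("audiobook.mp3")
```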

More ♥️♥️ = more posts

@CodeProgrammer ♥️
Speech to Text using Python
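Again, the code itself is not in the text above. A minimal sketch using the SpeechRecognition package (`pip install SpeechRecognition`); `recognize_google` sends the audio to Google's free web API, and "speech.wav" is a placeholder file name:

```python
# Minimal sketch: transcribe a WAV file to text with SpeechRecognition.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:
    audio = recognizer.record(source)  # read the entire audio file

# Send the audio to Google's web speech API and print the transcript.
text = recognizer.recognize_google(audio)
print(text)
```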

More ♥️♥️ = more posts

@CodeProgrammer ♥️
This is all you need to train a typical image classifier using TensorFlow! 🚀

Let's break it down step-by-step and see what's happening!
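The code referenced here is not reproduced in the text, so below is a minimal sketch of the typical pattern: load data, define a Keras model, compile, fit, evaluate. MNIST stands in for whatever dataset the original post used.

```python
# Step 1: load and normalize the data.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Step 2: define a small feed-forward classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Step 3: compile with an optimizer, a loss, and a metric.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Step 4: train, then evaluate on held-out data.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```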

More ♥️♥️ = more posts

@CodeProgrammer ♥️
Building a Convolutional Neural Network in PyTorch

https://machinelearningmastery.com/building-a-convolutional-neural-network-in-pytorch/
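The full walkthrough is in the linked article. As a self-contained sketch in the same spirit (not the article's exact code), here is a small convolutional network; the layer sizes assume 3-channel 32x32 inputs, as in CIFAR-10.

```python
# Minimal sketch of a CNN in PyTorch for 3x32x32 images.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)  # flatten all dims except batch
        return self.classifier(x)

model = SimpleCNN()
print(model(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```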

More ♥️♥️ = more posts

@CodeProgrammer ♥️

Don't forget to join our other channel:
https://t.me/DataScienceT
How do Transformers work?

All the Transformer models mentioned above (GPT, BERT, BART, T5, etc.) have been trained as language models. This means they have been trained on large amounts of raw text in a self-supervised fashion. Self-supervised learning is a type of training in which the objective is automatically computed from the inputs of the model. That means that humans are not needed to label the data!

This type of model develops a statistical understanding of the language it has been trained on, but it's not very useful for specific practical tasks. Because of this, the general pretrained model then goes through a process called transfer learning. During this process, the model is fine-tuned in a supervised way, that is, using human-annotated labels, on a given task.
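A small illustration of this distinction, using the Hugging Face `transformers` library (assumed installed via `pip install transformers`; both calls download model weights on first run): a raw pretrained BERT can only fill in masked words, which is its self-supervised objective, while a fine-tuned checkpoint performs a concrete supervised task.

```python
from transformers import pipeline

# The self-supervised pretraining objective: masked language modeling.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Transformers are trained on large amounts of [MASK] text."))

# The same family of models after supervised fine-tuning on labeled
# sentiment data (the SST-2 dataset, in this checkpoint's case).
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("Transfer learning makes these models practical."))
```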

🔗 Read More

🌸 https://t.me/DataScienceT