Artem Ryblov’s Data Science Weekly
@artemfisherman’s Data Science Weekly: Elevate your expertise with a standout data science resource each week, carefully chosen for depth and impact.

Long-form content: https://artemryblov.substack.com
Dive into Deep Learning

- Interactive deep learning book with code, maths, and discussions.
- Implemented with PyTorch, NumPy/MXNet, JAX, and TensorFlow.
- Adopted at 400 universities from 60 countries.

Content and Structure

The book can be divided into roughly three parts: preliminaries, modern deep learning techniques, and advanced topics centered on real systems and applications:

Part 1: Basics and Preliminaries. Section 1 offers an introduction to deep learning. Then, in Section 2, we quickly bring you up to speed on the prerequisites required for hands-on deep learning, such as how to store and manipulate data, and how to apply various numerical operations based on basic concepts from linear algebra, calculus, and probability. Sections 3 through 5 cover the most basic concepts and techniques in deep learning, including regression and classification; linear models; multilayer perceptrons; and overfitting and regularization.
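To give a feel for what this first part builds up to, here is a minimal sketch (not taken from the book) of linear regression trained by gradient descent on synthetic data, using only NumPy; the data-generating coefficients 2.0 and 1.0 are arbitrary choices for illustration:

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little noise (toy setup, not from the book).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.01 * rng.normal(size=100)

# Fit weight w and bias b by gradient descent on the mean squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    pred = w * X[:, 0] + b          # model prediction
    err = pred - y                  # residuals
    w -= lr * (err * X[:, 0]).mean()  # gradient step for the weight
    b -= lr * err.mean()              # gradient step for the bias
```

After training, w and b recover roughly 2.0 and 1.0; the same loop, rewritten with autograd and tensors, is the pattern the book generalizes to every later model.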

Part 2: Modern Deep Learning Techniques. Section 6 describes the key computational components of deep learning systems and lays the groundwork for our subsequent implementations of more complex models. Next, Section 7 and Section 8 introduce convolutional neural networks (CNNs), powerful tools that form the backbone of most modern computer vision systems. Similarly, Section 9 and Section 10 introduce recurrent neural networks (RNNs), models that exploit sequential (e.g., temporal) structure in data and are commonly used for natural language processing and time series prediction. In Section 11, we introduce a relatively new class of models based on so-called attention mechanisms that has displaced RNNs as the dominant architecture for most natural language processing tasks. These sections will bring you up to speed on the most powerful and general tools that are widely used by deep learning practitioners.
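The attention mechanism at the heart of Section 11 reduces to a short computation. A minimal NumPy sketch of scaled dot-product attention (shapes and names are illustrative assumptions, not the book's code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # query-key similarities
    attn_w = softmax(scores, axis=-1)   # each query's weights sum to 1
    return attn_w @ V, attn_w           # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 queries, dimension 4
K = rng.normal(size=(5, 4))  # 5 keys
V = rng.normal(size=(5, 4))  # 5 values
out, attn_w = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the value rows, which is why attention can replace the sequential recurrence of an RNN with a single parallel matrix computation.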

Part 3: Scalability, Efficiency, and Applications. In Section 12, we discuss several common optimization algorithms used to train deep learning models. Next, in Section 13, we examine several key factors that influence the computational performance of deep learning code. Then, in Section 14, we illustrate major applications of deep learning in computer vision. Finally, in Section 15 and Section 16, we demonstrate how to pretrain language representation models and apply them to natural language processing tasks. This part is available online.
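As a taste of the optimization algorithms in Section 12, here is a minimal momentum SGD sketch in NumPy (a generic illustration under my own naming, not the book's implementation), applied to the simple quadratic f(w) = ||w||²/2 whose gradient is w itself:

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.1, beta=0.9, steps=200):
    """Momentum update: v <- beta*v + grad(w); w <- w - lr*v."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad_fn(w)
        w = w - lr * v
    return w

# Minimize f(w) = ||w||^2 / 2; its gradient is just w.
w0 = np.array([5.0, -3.0])
w_final = sgd_momentum(lambda w: w, w0)
```

The velocity term accumulates past gradients, damping oscillations across steep directions while accelerating along shallow ones; w_final ends up near the minimum at the origin.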

Navigational hashtags: #armknowledgesharing #armbooks #armcourses
General hashtags: #deeplearning #dl #tensorflow #pytorch #jax #numpy #computervision #naturallanguageprocessing #attention #neuralnetworks #algorithms

@data_science_weekly