Cutting Edge Deep Learning
πŸ“• Deep learning
πŸ“— Reinforcement learning
πŸ“˜ Machine learning
πŸ“™ Papers - tools - tutorials

πŸ”— Other Social Media Handles:
https://linktr.ee/cedeeplearning
Math Reference Tables πŸ“—

1. General πŸ“˜
Number Notation
Addition Table
Multiplication Table
Fraction-Decimal Conversion
Interest
Units & Measurement Conversion
2. Algebra πŸ“˜
Basic Identities
Conic Sections
Polynomials
Exponents
Algebra Graphs
Functions
3. Geometry πŸ“˜
Areas, Volumes, Surface Areas
Circles
4. Trig πŸ“˜
Identities
Tables
Hyperbolics
Graphs
Functions
5. Discrete/Linear πŸ“˜
Vectors
Recursive Formulas
Linear Algebra
6. Other πŸ“˜
Constants
Complexity
Miscellaneous
Graphs
Functions
7. Stat πŸ“˜
Distributions
8. Calc πŸ“˜
Integrals
Derivatives
Series Expansions
9. Advanced πŸ“˜
Fourier Series
Transforms

πŸ“ http://math2.org/

β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”
|@machinelearning_tuts|
β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”
N-shot learning
You may be asking: what the heck is a shot, anyway? Fair question. A shot is nothing more than a single example available for training, so in N-shot learning we have N examples for training. For more information read πŸ‘‡πŸΏ

https://blog.floydhub.com/n-shot-learning/
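The idea above can be sketched as episode sampling: pick N classes and K "shots" (training examples) per class. A minimal sketch in plain Python; the function name `sample_episode` and the toy dataset are hypothetical, not from the linked post.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, seed=0):
    """Sample an N-way, K-shot support set from (example, label) pairs."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for example, label in dataset:
        by_label[label].append(example)
    classes = rng.sample(sorted(by_label), n_way)
    # K shots (training examples) for each of the N sampled classes
    return {c: rng.sample(by_label[c], k_shot) for c in classes}

# Toy dataset: 10 classes with 20 examples each
data = [(f"img_{c}_{i}", c) for c in range(10) for i in range(20)]
episode = sample_episode(data, n_way=5, k_shot=1)
```

With `k_shot=1` this is one-shot learning; "zero-shot" drops the support set entirely.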

β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”
@machinelearning_tuts
NUSCCF

A new efficient subspace and K-means clustering based method to improve Collaborative Filtering

https://github.com/soran-ghadri/NUSCCF

@machinelearning_tuts
Detectron

Detectron is Facebook AI Research's software system that implements state-of-the-art object detection algorithms, including Mask R-CNN. It is written in Python and powered by the Caffe2 deep learning framework.

https://github.com/facebookresearch/Detectron

@machinelearning_tuts
Keras is a machine learning framework that might be your new best friend if you have a lot of data and/or you’re after the state-of-the-art in AI: deep learning. Plus, the most minimalist approach to using TensorFlow, Theano, or CNTK is through the high-level Keras shell.

Key Things to Know:
πŸ”΄ Keras is usable as a high-level API on top of other popular lower-level libraries such as Theano and CNTK, in addition to TensorFlow.
πŸ”΄ Prototyping here is facilitated to the limit. Creating massive deep learning models in Keras is reduced to single-line functions. But this strategy makes Keras a less configurable environment than low-level frameworks.
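The "single-line functions" point can be seen in a minimal sketch, assuming TensorFlow 2.x with its bundled tf.keras API (layer sizes here are illustrative):

```python
import tensorflow as tf

# Each layer of a small MNIST-style classifier is a single line
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training is then a single `model.fit(x, y)` call; the trade-off is that the internals are less configurable than in a low-level framework.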

#machinelearning
#deeplearning
#keras

@deeplearning_tuts
The average machine learning salary, according to Indeed's research, is approximately $146,085 (an astounding 344% increase since 2015). The average machine learning engineer salary far outpaced other technology jobs on the list.

https://www.springboard.com/blog/machine-learning-engineer-salary-guide/

#deeplearning
#machinelearning
#averagesalary

@cedeeplearning
The adoption of ML by enterprises has reached new heights, as highlighted in a recent machine learning report. Adoption has been happening at breakneck speed as companies attempt to leverage the technology to get ahead of the competition. Factors driving this development include machine learning capabilities such as risk management, performance analysis, reporting, and automation. Below are statistics on ML adoption.
βœ”οΈThe increase in ML adoption is seen to drive the cloud computing market’s growth. (teks.co.in)
βœ”οΈ1/3 of IT leaders are planning to use ML for business analytics. (statista.com)
βœ”οΈ25% of IT leaders plan to use ML for security purposes (statista.com)
βœ”οΈ16% of IT leaders want to use ML in sales and marketing. (statista.com)
βœ”οΈCapsule networks are seen to replace neural networks. (teks.co.in)

https://financesonline.com
#machinelearning
#adoption

@cedeeplearning
πŸ”»61% of marketers say AI is the most critical aspect of their data strategy. (memsql.com)
πŸ”»87% of companies who use AI plan to use it in sales forecasting and email marketing. (statista.com)
πŸ”»2000 – The estimated number of Amazon Go stores in the US by 2021. (teks.co.in)
πŸ”»49% of consumers are willing to purchase more frequently when AI is present. (twitter.com)
πŸ”»$1 billion – The amount Netflix saved from the use of machine learning algorithms. (technisider.com)
πŸ”»15 minutes – Amazon’s ship time after it started using machine learning. (aiindex.org)

https://financesonline.com/

#machinelearning
#marketofML

@cedeeplearning
πŸ”»Fast and Easy Infinitely Wide Networks with Neural Tangents

One of the key theoretical insights that has allowed us to make progress in recent years has been that increasing the width of DNNs results in more regular behavior, and makes them easier to understand. A number of recent results have shown that DNNs that are allowed to become infinitely wide converge to another, simpler, class of models called Gaussian processes. In this limit, complicated phenomena (like Bayesian inference or gradient descent dynamics of a convolutional neural network) boil down to simple linear algebra equations. Insights from these infinitely wide networks frequently carry over to their finite counterparts.
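The convergence described above can be illustrated without the Neural Tangents library itself. A minimal NumPy sketch (my own illustration, not the post's code): sample the output of a randomly initialized one-hidden-layer network at a fixed input, and watch the output distribution become Gaussian as the width grows, per the central limit theorem.

```python
import numpy as np

def random_network_output(x, width, rng):
    # One hidden layer with 1/sqrt(fan_in) scaling, as in the NTK-style
    # parameterization used in infinite-width analyses
    w1 = rng.standard_normal((width, x.size)) / np.sqrt(x.size)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    return w2 @ np.tanh(w1 @ x)

rng = np.random.default_rng(0)
x = np.ones(4)
for width in (1, 10, 1000):
    samples = np.array([random_network_output(x, width, rng)
                        for _ in range(2000)])
    # Excess kurtosis tends toward 0 (the Gaussian value) as width grows
    k = np.mean((samples - samples.mean()) ** 4) / samples.var() ** 2 - 3.0
    print(width, round(float(k), 2))
```

At width 1 the output distribution is visibly non-Gaussian; by width 1000 its excess kurtosis is near zero, matching the Gaussian-process limit the post describes.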

Left: A schematic showing how deep neural networks induce simple input/output maps as they become infinitely wide. Right: As the width of a neural network increases, the distribution of outputs over different random instantiations of the network becomes Gaussian.
#deeplearning
#CNN

@cedeeplearning
πŸ”»More Efficient NLP Model Pre-training with ELECTRA
Recent advances in language pre-training have led to substantial gains in the field of natural language processing, with state-of-the-art models such as BERT, RoBERTa, XLNet, ALBERT, and T5, among many others. These methods, though they differ in design, share the same idea of leveraging a large amount of unlabeled text to build a general model of language understanding before being fine-tuned on specific NLP tasks such as sentiment analysis and question answering.
https://ai.googleblog.com/
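ELECTRA's pre-training task is replaced-token detection: a small generator corrupts some input tokens, and the discriminator labels every token as original or replaced. A toy sketch of the labeling scheme in plain Python (the function `corrupt` and the random-replacement generator are simplifications; ELECTRA uses a learned masked-LM generator):

```python
import random

def corrupt(tokens, vocab, replace_prob=0.3, seed=0):
    """Corrupt some tokens and label each position:
    0 = original, 1 = replaced (the discriminator's targets)."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            rep = rng.choice(vocab)
            corrupted.append(rep)
            # A sampled token equal to the original counts as original,
            # as in the ELECTRA paper
            labels.append(1 if rep != tok else 0)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

tokens = ["the", "chef", "cooked", "the", "meal"]
vocab = ["the", "chef", "cooked", "meal", "ate", "dog"]
corrupted, labels = corrupt(tokens, vocab)
```

Because every position gets a label (not just the ~15% masked positions of BERT-style masked LM), the model learns from all input tokens, which is the source of ELECTRA's pre-training efficiency.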

#NLP
#deeplearning
#pretraining

@cedeeplearning
#Torch-Struct: Deep Structured Prediction Library

The literature on structured prediction for #NLP describes a rich collection of distributions and algorithms over #sequences, #segmentations, #alignments, and #trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation based #frameworks. Torch-Struct includes a broad collection of #probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines and case-studies demonstrate the benefits of the library. Torch-Struct is available at:

Code: https://github.com/harvardnlp/pytorch-struct
Paper: https://arxiv.org/abs/2002.00876v1
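To make the abstract concrete: the kind of sequence dynamic program Torch-Struct wraps as a batched, differentiable distribution (e.g. its linear-chain CRF) is, at its core, a Viterbi-style recursion. A minimal NumPy sketch for illustration only; this is not the library's API:

```python
import numpy as np

def viterbi(log_potentials):
    """Best tag sequence for a linear chain.
    log_potentials: (T-1, C, C); entry [t, i, j] scores tag i at
    step t followed by tag j at step t+1."""
    steps, C, _ = log_potentials.shape
    score = np.zeros(C)
    backpointers = []
    for t in range(steps):
        total = score[:, None] + log_potentials[t]   # (C, C)
        backpointers.append(total.argmax(axis=0))
        score = total.max(axis=0)
    # Backtrack from the best final tag
    path = [int(score.argmax())]
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy 3-step, 2-tag chain
phi = np.log(np.array([[[0.1, 0.9], [0.8, 0.2]],
                       [[0.7, 0.3], [0.4, 0.6]]]))
best = viterbi(phi)
```

Torch-Struct replaces hand-written recursions like this with vectorized, auto-differentiated operators, so swapping max for logsumexp (argmax decoding vs. marginals) is a change of semiring rather than a rewrite.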

@cedeeplearning
πŸ”ΉHow to Classify Photos of Dogs and Cats (with 97% accuracy)
Develop a Deep Convolutional Neural Network Step-by-Step to Classify Photographs of Dogs and Cats
by: https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-to-classify-photos-of-dogs-and-cats/
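The tutorial's general recipe is a small VGG-style convnet with a sigmoid output for the binary dog-vs-cat decision. A sketch in that spirit, assuming TensorFlow 2.x (tf.keras); the layer sizes are illustrative, not the tutorial's exact configuration:

```python
import tensorflow as tf

# Small VGG-style blocks: conv + pooling, then a dense classifier head
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", padding="same",
                           input_shape=(200, 200, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: dog vs. cat
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

The tutorial reaches its ~97% figure with transfer learning (a pre-trained VGG16 backbone) rather than a from-scratch model like this one.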

#Deeplearning
#neuralnetwork

@cedeeplearning