Cutting Edge Deep Learning
📕 Deep learning
📗 Reinforcement learning
📘 Machine learning
📙 Papers - tools - tutorials

🔗 Other Social Media Handles:
https://linktr.ee/cedeeplearning
Detectron

Detectron is Facebook AI Research's software system that implements state-of-the-art object detection algorithms, including Mask R-CNN. It is written in Python and powered by the Caffe2 deep learning framework.

https://github.com/facebookresearch/Detectron

@machinelearning_tuts
Keras is a machine learning framework that might become your new best friend if you have a lot of data and/or you're after the state of the art in AI: deep learning. Plus, the most minimalist way to use TensorFlow, Theano, or CNTK is through the high-level Keras shell.

Key Things to Know:
🔴 Keras is usable as a high-level API on top of other popular lower-level libraries such as Theano and CNTK, in addition to TensorFlow.
🔴 Prototyping is as fast as it gets: building large deep learning models in Keras often comes down to single-line function calls (see the sketch below). The trade-off is that Keras is a less configurable environment than low-level frameworks.
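
To make the single-line style concrete, here is a minimal sketch using the TensorFlow backend via tf.keras (the layer sizes and toy data are illustrative assumptions, not taken from the post):

```python
# Minimal Keras sketch: each layer is one line, training is one call.
# Layer sizes, input shape, and toy data are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random toy data just to make the sketch runnable end to end.
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))

model.fit(x, y, epochs=3, batch_size=32, verbose=0)
```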

#machinelearning
#deeplearning
#keras

@deeplearning_tuts
According to Indeed's research, the average machine learning engineer salary is approximately $146,085, an astounding 344% increase since 2015, far outpacing the other technology jobs on the list.

https://www.springboard.com/blog/machine-learning-engineer-salary-guide/

#deeplearning
#machinelearning
#averagesalary

@cedeeplearning
Enterprise adoption of ML has reached new heights, as highlighted in a recent machine learning report. Adoption is happening at breakneck speed as companies try to leverage the technology to get ahead of the competition. Factors driving this growth include machine learning capabilities such as risk management, performance analysis and reporting, and automation. Below are statistics on ML adoption.
✔️The increase in ML adoption is seen to drive the cloud computing market’s growth. (teks.co.in)
✔️1/3 of IT leaders are planning to use ML for business analytics. (statista.com)
✔️25% of IT leaders plan to use ML for security purposes (statista.com)
✔️16% of IT leaders want to use ML in sales and marketing. (statista.com)
✔️Capsule networks are expected to replace conventional neural networks. (teks.co.in)

https://financesonline.com
#machinelearning
#adoption

@cedeeplearning
🔻61% of marketers say AI is the most critical aspect of their data strategy. (memsql.com)
🔻87% of companies that use AI plan to apply it to sales forecasting and email marketing. (statista.com)
🔻2000 – The estimated number of Amazon Go stores in the US by 2021. (teks.co.in)
🔻49% of consumers are willing to purchase more frequently when AI is present. (twitter.com)
🔻$1 billion – The amount Netflix saved through the use of machine learning algorithms. (technisider.com)
🔻15 minutes – Amazon’s ship time after it started using machine learning. (aiindex.org)

https://financesonline.com/

#machinelearning
#marketofML

@cedeeplearning
[Attached GIF: 1_640_50fps_FINAL_VERSION.gif, 12.1 MB]
🔻Fast and Easy Infinitely Wide Networks with Neural Tangents

One of the key theoretical insights that has allowed us to make progress in recent years has been that increasing the width of DNNs results in more regular behavior, and makes them easier to understand. A number of recent results have shown that DNNs that are allowed to become infinitely wide converge to another, simpler, class of models called Gaussian processes. In this limit, complicated phenomena (like Bayesian inference or gradient descent dynamics of a convolutional neural network) boil down to simple linear algebra equations. Insights from these infinitely wide networks frequently carry over to their finite counterparts.

Left: A schematic showing how deep neural networks induce simple input/output maps as they become infinitely wide. Right: As the width of a neural network increases, we see that the distribution of outputs over different random instantiations of the network becomes Gaussian.
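
For a concrete feel of working in this limit, here is a minimal sketch using the Neural Tangents library (the architecture and toy data are illustrative assumptions, and the exact API may differ between versions):

```python
# Hedged sketch with neural-tangents (pip install neural-tangents).
# Architecture, data shapes, and API details are assumptions.
from jax import random
import neural_tangents as nt
from neural_tangents import stax

# A fully connected architecture; stax returns the finite-width init/apply
# functions plus the corresponding infinite-width kernel function.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

# Toy regression data.
key = random.PRNGKey(0)
x_train = random.normal(key, (20, 10))
y_train = random.normal(key, (20, 1))
x_test = random.normal(key, (5, 10))

# In the infinite-width limit, Bayesian inference (NNGP) and gradient-descent
# training (NTK) reduce to closed-form kernel regression.
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_nngp, y_ntk = predict_fn(x_test=x_test, get=("nngp", "ntk"))
```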
#deeplearning
#CNN

@cedeeplearning
🔻More Efficient NLP Model Pre-training with ELECTRA
Recent advances in language pre-training have led to substantial gains in the field of natural language processing, with state-of-the-art models such as BERT, RoBERTa, XLNet, ALBERT, and T5, among many others. These methods, though they differ in design, share the same idea of leveraging a large amount of unlabeled text to build a general model of language understanding before being fine-tuned on specific NLP tasks such as sentiment analysis and question answering.
https://ai.googleblog.com/
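
The post covers pre-training only; as a hedged illustration of the fine-tuning step it mentions, here is a sketch using the Hugging Face transformers library (the checkpoint name, data, and labels are assumptions, not part of the original post):

```python
# Hedged sketch: fine-tuning a pre-trained ELECTRA encoder for sentiment analysis.
# The checkpoint name and toy examples are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "google/electra-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["A wonderful, moving film.", "Flat characters and a dull plot."]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)

# outputs.loss is the classification loss to minimize during fine-tuning;
# outputs.logits holds the per-class scores.
outputs.loss.backward()
```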

#NLP
#deeplearning
#pretraining

@cedeeplearning
#Torch-Struct: Deep Structured Prediction Library

The literature on structured prediction for #NLP describes a rich collection of distributions and algorithms over #sequences, #segmentations, #alignments, and #trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation based #frameworks. Torch-Struct includes a broad collection of #probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines and case-studies demonstrate the benefits of the library. Torch-Struct is available at:

Code: https://github.com/harvardnlp/pytorch-struct
Paper: https://arxiv.org/abs/2002.00876v1
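
A minimal sketch of the distribution-based API for a linear-chain CRF (shapes and attribute names follow the library's documentation as best I recall, so treat the details as assumptions):

```python
# Hedged sketch of Torch-Struct's distribution-based API.
import torch
from torch_struct import LinearChainCRF

batch, N, C = 2, 6, 4  # batch size, sequence length, number of tags
# Log-potentials for each adjacent pair of positions: (batch, N - 1, C, C).
log_potentials = torch.randn(batch, N - 1, C, C)

dist = LinearChainCRF(log_potentials)

marginals = dist.marginals  # edge marginals, computed via auto-differentiation
best_path = dist.argmax     # Viterbi decoding, same shape as the potentials
log_Z = dist.partition      # log normalizer per batch element
```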

@cedeeplearning
🔹How to Classify Photos of Dogs and Cats (with 97% accuracy)
Develop a Deep Convolutional Neural Network Step-by-Step to Classify Photographs of Dogs and Cats
by: https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-to-classify-photos-of-dogs-and-cats/
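
For orientation, a hedged sketch of a small convolutional baseline along the lines of what the tutorial develops (the layer sizes and input resolution are illustrative assumptions, not the tutorial's exact architecture):

```python
# Hedged sketch of a small CNN for binary dog-vs-cat classification.
# Layer sizes and the 200x200 input resolution are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(200, 200, 3)),  # RGB photos resized to 200x200
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output: dog vs. cat
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```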

#Deeplearning
#neuralnetwork

@cedeeplearning
What does it mean to be a data scientist? After all, many different skills fall under the umbrella of data science. The study linked below found that professionals' job roles were closely related to their proficiency in those different skills.
link: https://www.business2community.com/big-data/investigating-data-scientists-their-skills-and-team-makeup-01335085

via: @cedeeplearning
🔹Design your Neural Networks
What’s a good learning rate? How many hidden layers should your network have? Is dropout actually useful? Why are your gradients vanishing?
link: https://towardsdatascience.com/designing-your-neural-networks-a5e4617027ed
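
As a quick illustration of the knobs the article discusses (learning rate, hidden layers, dropout), here is a hedged Keras sketch; the specific values are assumptions, not the article's recommendations:

```python
# Hedged sketch of common design knobs: explicit learning rate, hidden-layer
# width/count, dropout, and ReLU activations (which help with vanishing gradients).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64,)),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),  # dropout as regularization
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(10, activation="softmax"),
])

# The learning rate is set explicitly on the optimizer.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```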

#neuralnetwork
#machinelearning

via: @cedeeplearning
Tinker With a Neural Network Right Here in Your Browser

The TensorFlow Playground was created by Daniel Smilkov and Shan Carter. It is a continuation of many people's previous work, most notably Andrej Karpathy's #convnet.js demo and Chris Olah's articles about neural networks.

Via: @cedeeplearning

https://playground.tensorflow.org/
#visualization #neural_networks
The advent of AI in #Architecture is still in its early days but already offers promising results. More than a mere opportunity, this potential represents a major step forward that is poised to reshape the architectural discipline.

Via: @cedeeplearning

Check this interesting post out:
https://towardsdatascience.com/ai-architecture-f9d78c6958e0