Cutting Edge Deep Learning
262 subscribers
193 photos
42 videos
51 files
363 links
📕 Deep learning
📗 Reinforcement learning
📘 Machine learning
📙 Papers - tools - tutorials

🔗 Other Social Media Handles:
https://linktr.ee/cedeeplearning
Nothing can break through this security barrier. Learn more at https://www.hesco.com
----------
@machinelearning_tuts
This mini washing machine uses no electricity. Learn more at https://www.yirego.com/
----------
@machinelearning_tuts
Meet the world's first glass-free foldable smartphone. Learn more at https://www.royole.com/flexpai
----------
@machinelearning_tuts
Another sneak preview of TensorFlow 2.0. This is what the new architecture will look like:
TensorFlow 2.0 will focus on simplicity and ease of use, featuring updates like:

Easy model building with Keras and eager execution.
Robust model deployment in production on any platform.
Powerful experimentation for research.
Simplifying the API by cleaning up deprecated APIs and reducing duplication.

1. tf.data will replace the queue runners
2. Easy model building with tf.keras and estimators
3. Run and debug with eager execution
4. Distributed training on CPU, GPU, or TPU
5. Export models to SavedModel and deploy them via TF Serving, TF Lite, TF.js, etc.
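
Here is a minimal sketch of that workflow (my own example, not from the linked article), assuming TensorFlow 2.x: tf.data for the input pipeline, tf.keras for model building under eager execution, and a SavedModel export for TF Serving / TF Lite / TF.js. The synthetic data and layer sizes are placeholders.

```python
# Minimal TF 2.x workflow sketch; data and layer sizes are placeholders.
import numpy as np
import tensorflow as tf

# Eager execution is on by default in TF 2.x, so ops run immediately.
x = np.random.rand(256, 10).astype("float32")
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")

# tf.data replaces the old queue runners for input pipelines.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(256).batch(32)

# Easy model building with tf.keras.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=2)

# Export as a SavedModel for deployment via TF Serving, TF Lite, or TF.js conversion.
tf.saved_model.save(model, "exported_model")
```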

I really can't wait to try all these new features out.

#deeplearning #machinelearning

Article: https://lnkd.in/drz7FyV
----------
@machinelearning_tuts
❇️ Machine learning glossary

#DataScience #MachineLearning
----------
@machinelearning_tuts
Constrained clustering, #python implementation
----------
@machinelearning_tuts
Fortifying the future of cryptography

Vinod Vaikuntanathan aims to improve encryption in a world with growing applications and evolving adversaries.


January 16, 2019

@machinelearning_tuts
----------
Link : http://news.mit.edu//2019/faculty-vinod-vaikuntanathan-0116
A Unified Framework of Deep Neural Networks by Capsules

--Abstract

With the growth of deep learning, how to describe deep neural networks in a unified way is becoming an important issue. We first formalize neural networks mathematically with their directed graph representations, and prove a generation theorem about the induced networks of connected directed acyclic graphs. Then, we set up a unified framework for deep learning with capsule networks. This capsule framework could simplify the description of existing deep neural networks, and provide a theoretical basis for graphic designing and programming techniques for deep learning models, and thus would be of great significance to the advancement of deep learning.


2018-05-09T14:23:17Z

@machinelearning_tuts
----------
Link : http://arxiv.org/abs/1805.03551v2
Deep Learning for Sentiment Analysis : A Survey

--Abstract

Deep learning has emerged as a powerful machine learning technique that learns multiple layers of representations or features of the data and produces state-of-the-art prediction results. Along with its success in many other application domains, deep learning has also been widely used for sentiment analysis in recent years. This paper first gives an overview of deep learning and then provides a comprehensive survey of its current applications in sentiment analysis.
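
As a generic illustration of the kind of model such surveys cover (my own toy sketch, not from the paper), here is a minimal embedding + LSTM sentiment classifier in tf.keras; the vocabulary size, sequence length, and data are placeholders.

```python
# Toy embedding + LSTM sentiment classifier (illustrative only, not from the survey).
import numpy as np
import tensorflow as tf

vocab_size, max_len = 10_000, 100  # placeholder hyperparameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),       # learn word representations
    tf.keras.layers.LSTM(32),                        # sequence-level features
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in data: integer token ids and binary sentiment labels.
tokens = np.random.randint(0, vocab_size, size=(128, max_len))
labels = np.random.randint(0, 2, size=(128, 1))
model.fit(tokens, labels, epochs=1)
```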


2018-01-24T07:32:29Z

@machinelearning_tuts
----------
Link : http://arxiv.org/abs/1801.07883v2
Integrating Learning and Reasoning with Deep Logic Models

--Abstract

Deep learning is very effective at jointly learning feature representations and classification models, especially when dealing with high-dimensional input patterns. Probabilistic logic reasoning, on the other hand, is capable of taking consistent and robust decisions in complex environments. The integration of deep learning and logic reasoning is still an open research problem, and it is considered to be the key to the development of real intelligent agents. This paper presents Deep Logic Models, which are deep graphical models integrating deep learning and logic reasoning for both learning and inference. Deep Logic Models create an end-to-end differentiable architecture, where deep learners are embedded into a network implementing a continuous relaxation of the logic knowledge. The learning process jointly learns the weights of the deep learners and the meta-parameters controlling the high-level reasoning. The experimental results show that the proposed methodology overcomes the limitations of the other approaches that have been proposed to bridge deep learning and reasoning.
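
The paper's exact formulation isn't reproduced here, but the general idea of a continuous relaxation of logic knowledge can be sketched generically (my own toy example, not the paper's model): a rule such as A(x) -> B(x) becomes a differentiable penalty on predicted truth degrees and is added to the supervised loss.

```python
# Toy sketch of adding a fuzzy (Lukasiewicz-style) relaxation of a logic rule
# to a learning loss. Illustrative only; not the paper's actual architecture.
import tensorflow as tf

def implication_penalty(p_a, p_b):
    # Rule A(x) -> B(x) is violated to degree max(0, p_A - p_B).
    return tf.reduce_mean(tf.nn.relu(p_a - p_b))

# Two tiny "deep learners" predicting the truth degrees of predicates A and B.
net_a = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                             tf.keras.layers.Dense(1, activation="sigmoid")])
net_b = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                             tf.keras.layers.Dense(1, activation="sigmoid")])

x = tf.random.uniform((64, 4))
labels_a = tf.cast(tf.reduce_sum(x, axis=1, keepdims=True) > 2.0, tf.float32)

opt = tf.keras.optimizers.Adam(1e-2)
bce = tf.keras.losses.BinaryCrossentropy()
for step in range(100):
    with tf.GradientTape() as tape:
        p_a, p_b = net_a(x), net_b(x)
        # Supervised loss on A plus a penalty pushing predictions toward A -> B.
        loss = bce(labels_a, p_a) + 0.5 * implication_penalty(p_a, p_b)
    variables = net_a.trainable_variables + net_b.trainable_variables
    opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
```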


2019-01-14T09:06:28Z

@machinelearning_tuts
----------
Link : http://arxiv.org/abs/1901.04195v1
Why walk when you can flop?
In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.

Blog by Janelle Shane: https://lnkd.in/dQnCVa9

Original paper: https://lnkd.in/dt63hJR

#algorithm #artificialintelligence #machinelearning #reinforcementlearning #technology

----------
@machinelearning_tuts
DeepFlash is a nice application of auto-encoders: the authors trained a neural network to turn a flash selfie into a studio-quality portrait. It's an interesting paper that addresses a real need. They also compared their results against other approaches such as pix2pix and style transfer. At first glance I had the feeling that pix2pix performed better than their proposed approach, but their evaluation metrics (SSIM and PSNR) proved me wrong.
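
For reference, the two metrics mentioned (SSIM and PSNR) are easy to compute with tf.image; in the sketch below the random tensors are just stand-ins for a flash selfie and the corresponding network output.

```python
# Compute the image-quality metrics cited in the post (SSIM and PSNR) with tf.image.
import tensorflow as tf

flash = tf.random.uniform((1, 256, 256, 3), maxval=1.0)   # stand-in for the flash selfie
output = tf.random.uniform((1, 256, 256, 3), maxval=1.0)  # stand-in for the generated portrait

psnr = tf.image.psnr(flash, output, max_val=1.0)  # higher is better
ssim = tf.image.ssim(flash, output, max_val=1.0)  # closer to 1.0 is better

print("PSNR:", float(psnr[0]), "SSIM:", float(ssim[0]))
```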
#deeplearning #machinelearning

Paper link: https://lnkd.in/eHM5rRx

----------
@machinelearning_tuts
Machine Learning Guide: 20 Free ODSC Resources to Learn Machine Learning: https://lnkd.in/ejqejpA

#BigData #DataScience #DataScientists #AI #DeepLearning


----------
@machinelearning_tuts
How do you go from self-play to the real world? Transfer learning.

NeurIPS 2017 Meta Learning Symposium: https://lnkd.in/e7MdpPc

A new research problem has therefore emerged: how can this complexity, i.e., the design, components, and hyperparameters, be configured automatically so that these systems perform as well as possible? This is the problem of meta-learning. Several approaches have emerged, including those based on Bayesian optimization, gradient descent, reinforcement learning, and evolutionary computation.
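
As a toy illustration of what "configuring hyperparameters automatically" can look like at its simplest (my own sketch, not from the symposium), here is a random search over learning rate and layer width; the Bayesian, gradient-based, RL, and evolutionary approaches listed above are more sophisticated versions of this loop.

```python
# Toy random-search baseline for automatic hyperparameter configuration
# (illustrative only; the data and search space are placeholders).
import random
import numpy as np
import tensorflow as tf

x = np.random.rand(512, 8).astype("float32")
y = (x.sum(axis=1, keepdims=True) > 4).astype("float32")  # synthetic labels

def train_and_score(lr, width):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(width, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=3, verbose=0)
    return model.evaluate(x, y, verbose=0)[1]  # accuracy

candidates = [{"lr": 10 ** random.uniform(-4, -1), "width": random.choice([8, 16, 32])}
              for _ in range(5)]
best = max(candidates, key=lambda cfg: train_and_score(cfg["lr"], cfg["width"]))
print("best configuration found:", best)
```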

#artificialintelligence #deeplearning #metalearning #reinforcementlearning
----------
@machinelearning_tuts