Cutting Edge Deep Learning
262 subscribers
193 photos
42 videos
51 files
363 links
📕 Deep learning
📗 Reinforcement learning
📘 Machine learning
📙 Papers - tools - tutorials

🔗 Other Social Media Handles:
https://linktr.ee/cedeeplearning
#Torch-Struct: Deep Structured Prediction Library

The literature on structured prediction for #NLP describes a rich collection of distributions and algorithms over #sequences, #segmentations, #alignments, and #trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation based #frameworks. Torch-Struct includes a broad collection of #probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines and case-studies demonstrate the benefits of the library. Torch-Struct is available at:

Code: https://github.com/harvardnlp/pytorch-struct
Paper: https://arxiv.org/abs/2002.00876v1

@cedeeplearning
🔻Top 10 Deep Learning Projects on #Github

The top 10 #deep_learning projects on Github include a number of #libraries, #frameworks, and educational resources. Have a look at the tools others are using, and the resources they are learning from.
1. Caffe
2. Data Science IPython Notebooks
3. ConvNetJS
4. Keras
5. MXNet
6. Qix
7. Deeplearning4j
8. Machine Learning Tutorials
9. DeepLearnToolbox
10. LISA Lab Deep Learning Tutorials

link: https://www.kdnuggets.com/2016/01/top-10-deep-learning-github.html

📌Via: @cedeeplearning
🔹Google leverages computer vision to enhance the performance of robot manipulation

by Priya Dialani

The idea that robots can learn to directly perceive the affordances of actions on objects (i.e., what the robot can or cannot do with an object) is called affordance-based manipulation. It has been explored in research on learning complex vision-based manipulation skills such as grasping, pushing, and tossing. In these #frameworks, affordances are represented as dense pixel-wise action-value maps that estimate how good it is for the #robot to execute one of several predefined motions at each location.
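To make "pixel-wise action-value maps" concrete: if the perception network outputs one H×W value map per predefined motion primitive, acting greedily amounts to an arg-max over both actions and pixels. A minimal NumPy sketch (shapes and names are illustrative, not Google's code):

```python
import numpy as np

def best_action(value_maps):
    """Pick the highest-scoring (action, pixel) from dense action-value maps.

    value_maps: array of shape (A, H, W) -- one H x W value map per
    predefined motion primitive (e.g. grasp, push, toss), as in
    affordance-based manipulation.
    Returns (action_index, (row, col)) of the arg-max cell.
    """
    flat = int(value_maps.argmax())               # index into the flattened array
    a, r, c = np.unravel_index(flat, value_maps.shape)
    return int(a), (int(r), int(c))
```

The robot would then execute motion primitive `a` at image location `(r, c)`, e.g. attempt a grasp centered on that pixel.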
β€”β€”β€”β€”β€”β€”β€”β€”β€”β€”
📌Via: @cedeeplearning

https://www.analyticsinsight.net/google-leverages-computer-vision-enhance-performance-robot-manipulation/

#computervision
#deeplearning
#neuralnetworks
#machinelearning