#Torch-Struct: Deep Structured Prediction Library
The literature on structured prediction for #NLP describes a rich collection of distributions and algorithms over #sequences, #segmentations, #alignments, and #trees; however, these algorithms are difficult to utilize in deep learning frameworks. We introduce Torch-Struct, a library for structured prediction designed to take advantage of and integrate with vectorized, auto-differentiation-based #frameworks. Torch-Struct includes a broad collection of #probabilistic structures accessed through a simple and flexible distribution-based API that connects to any deep learning model. The library utilizes batched, vectorized operations and exploits auto-differentiation to produce readable, fast, and testable code. Internally, we also include a number of general-purpose optimizations to provide cross-algorithm efficiency. Experiments show significant performance gains over fast baselines, and case studies demonstrate the benefits of the library. Torch-Struct is available at:
Code: https://github.com/harvardnlp/pytorch-struct
Paper: https://arxiv.org/abs/2002.00876v1
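As a quick illustration of the distribution-based API, here is a minimal sketch (adapted from the repository's README; exact shapes and method names may differ across versions): a linear-chain CRF is built from a tensor of log-potentials and then queried like any other torch distribution.

import torch
from torch_struct import LinearChainCRF

# Log-potentials for a batch of 10 sequences of length 20 with 5 tags;
# shape (batch, N-1, C, C) scores every adjacent pair of tags.
log_potentials = torch.randn(10, 19, 5, 5)
dist = LinearChainCRF(log_potentials)

dist.marginals           # edge marginals, computed via auto-differentiation
best = dist.argmax       # highest-scoring structure, as one-hot parts
dist.log_prob(best)      # log-probability of a given structure
dist.entropy             # entropy of the distribution

The same pattern applies to the library's other structures (e.g., DependencyCRF, SemiMarkovCRF): swap the distribution class and the shape of the log-potentials.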
@cedeeplearning
Top 10 Deep Learning Projects on #Github
The top 10 #deep_learning projects on GitHub include a number of #libraries, #frameworks, and educational resources. Have a look at the tools others are using, and the resources they are learning from.
1. Caffe
2. Data Science IPython Notebooks
3. ConvNetJS
4. Keras
5. MXNet
6. Qix
7. Deeplearning4j
8. Machine Learning Tutorials
9. DeepLearnToolbox
10. LISA Lab Deep Learning Tutorials
link: https://www.kdnuggets.com/2016/01/top-10-deep-learning-github.html
Via: @cedeeplearning
Google leverages computer vision to enhance the performance of robot manipulation
by Priya Dialani
The idea that robots can learn to directly perceive the affordances of actions on objects (i.e., what the robot can or can't do with an object) is called affordance-based manipulation. It has been explored in research on learning complex vision-based manipulation skills, including grasping, pushing, and tossing. In these #frameworks, affordances are represented as dense pixel-wise action-value maps that estimate how good it is for the #robot to execute one of several predefined motions at each location.
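To make the action-value-map idea concrete, here is a hypothetical PyTorch sketch (not the architecture from the article): a small fully convolutional network outputs one dense value map per predefined motion primitive, and the robot would execute the primitive at the pixel with the highest predicted value.

import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    # Illustrative only: maps an RGB image to dense pixel-wise
    # action-value maps, one channel per motion primitive
    # (e.g., grasp, push, toss).
    def __init__(self, num_primitives=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_primitives, 1)  # 1x1 conv head

    def forward(self, rgb):                    # rgb: (B, 3, H, W)
        return self.head(self.backbone(rgb))  # values: (B, K, H, W)

net = AffordanceNet()
values = net(torch.randn(1, 3, 64, 64))
# Pick the (primitive, pixel) pair with the highest predicted value.
B, K, H, W = values.shape
best = values.view(B, -1).argmax(dim=1)
k, y, x = best // (H * W), (best % (H * W)) // W, best % W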
----------
Via: @cedeeplearning
https://www.analyticsinsight.net/google-leverages-computer-vision-enhance-performance-robot-manipulation/
#computervision
#deeplearning
#neuralnetworks
#machinelearning