Machine learning datasets: A list of the biggest machine learning datasets from across the web.
https://lnkd.in/e7WZFTw
❇️ @AI_Python_EN
ARTIFICIAL INTELLIGENCE 101: "AI 101: The First World-Class Overview of AI for All."
1) AI 101 Cheat Sheet: https://lnkd.in/eXY_q_C
2) Curated Open-Source Codes: https://lnkd.in/dWUwH-Z
❇️ @AI_Python_EN
Model interpretation and feature importance are key skills for #datascientists to learn when running #machinelearning models. Here is a snippet from the #Genomics perspective.
a) Feature importance scores highlight the parts of the input most predictive of the output. For DNA sequence-based models, these can be visualized as a sequence logo of the input sequence, with letter heights proportional to the feature importance score; scores may also be negative (visualized as upside-down letters).
b) Perturbation-based approaches perturb each input feature (left) and record the change in model prediction (centre) in the feature importance matrix (right). For DNA sequences, the perturbations correspond to single base substitutions (a minimal sketch of this idea follows below).
c) Backpropagation-based approaches compute feature importance scores from gradients, or augmented gradients such as DeepLIFT (Deep Learning Important FeaTures), of the model prediction with respect to the input features.
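Item b) corresponds to in-silico mutagenesis. A minimal Python sketch of that idea, assuming a toy motif-scoring function in place of a trained sequence model (the scorer and all names here are illustrative, not from the paper):

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA sequence into an (L, 4) matrix."""
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0
    return m

def toy_model(x):
    """Stand-in for a trained sequence model: scores how strongly
    the one-hot input matches the motif 'TATA' anywhere in the sequence."""
    motif = one_hot("TATA")
    L, k = x.shape[0], motif.shape[0]
    return max(float((x[i:i + k] * motif).sum()) for i in range(L - k + 1))

def perturbation_importance(model, seq):
    """Substitute every base at every position and record the change
    in the model's prediction, giving an (L, 4) importance matrix."""
    ref = model(one_hot(seq))
    imp = np.zeros((len(seq), 4))
    for i in range(len(seq)):
        for j, b in enumerate(BASES):
            if seq[i] == b:
                continue  # reference base: zero change by definition
            mut = seq[:i] + b + seq[i + 1:]
            imp[i, j] = model(one_hot(mut)) - ref
    return imp

imp = perturbation_importance(toy_model, "GGTATAGG")
print(np.round(imp, 2))  # large negative entries where a mutation breaks the motif
```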
Link to this lovely paper:
https://lnkd.in/dfmvP9c
❇️ @AI_Python_EN
Understanding the Backpropagation Algorithm.
#BigData #DataScience #AI #MachineLearning #IoT #IIoT #PyTorch #Python #TensorFlow #CloudComputing #Algorithms
http://bit.ly/2ASKwqx
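As a companion to the article, a minimal sketch of the forward and backward passes for a tiny network in NumPy, assuming sigmoid activations and mean-squared-error loss (all of it illustrative, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 features, binary targets
X = rng.normal(size=(4, 3))
y = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer with 5 units
W1, b1 = rng.normal(size=(3, 5)) * 0.1, np.zeros((1, 5))
W2, b2 = rng.normal(size=(5, 1)) * 0.1, np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)

    # Backward pass: apply the chain rule layer by layer
    dp = 2 * (p - y) / len(X)          # dLoss/dp
    dz2 = dp * p * (1 - p)             # through the output sigmoid
    dW2, db2 = h.T @ dz2, dz2.sum(0, keepdims=True)
    dh = dz2 @ W2.T                    # propagate to the hidden layer
    dz1 = dh * h * (1 - h)             # through the hidden sigmoid
    dW1, db1 = X.T @ dz1, dz1.sum(0, keepdims=True)

    # Gradient descent update
    lr = 1.0
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
```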
❇️ @AI_Python_EN
This awesome story from ETH Zürich #AI researchers needs to be told! They used #artificialintelligence to improve the quality of images recorded by a relatively new biomedical imaging method, paving the way towards more accurate #diagnosis and more cost-effective devices. How awesome is that! An important note on optoacoustic tomography:
They used a #machinelearning method to improve optoacoustic imaging. This relatively young #medicalimaging technique can be used for applications such as visualizing blood vessels, studying brain activity, characterizing skin lesions and diagnosing breast cancer. The paper is here:
https://lnkd.in/dtgUq4A
Code: https://lnkd.in/dYy32Vd
#deeplearning
❇️ @AI_Python_en
than standard Adam? I ran 24 experiments to find out. The answer? Meh, not really. Full tutorial with #Python code here:
http://pyimg.co/asash
#DeepLearning #Keras #MachineLearning #ArtificialIntelligence #AI #DataScience
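A sketch of the general pattern for this kind of comparison in Keras: train the same model under standard Adam and under a candidate alternative optimizer, then compare test accuracy (the model, data, and second optimizer below are placeholders, not the tutorial's code):

```python
import tensorflow as tf
from tensorflow.keras import layers, optimizers

def build_model():
    """Same small CNN for every run so that only the optimizer differs."""
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

candidates = {
    "adam": optimizers.Adam(learning_rate=1e-3),
    # swap in the optimizer under test here
    "sgd_momentum": optimizers.SGD(learning_rate=1e-2, momentum=0.9),
}

for name, opt in candidates.items():
    model = build_model()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.4f}")
```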
❇️ @AI_Python_en
A new #python package, #imagededup (Image Deduplication), is now available on GitHub 🤩
GitHub:
https://lnkd.in/d8bTvf6
Docs:
https://lnkd.in/dDRpNiU
It allows you to find duplicate images (near and exact) with a variety of #hashing methods and #ConvolutionalNeuralNetworks. Anyone doing applied Computer Vision knows the pain duplicate images can cause, and even research datasets can have this issue; see our CIFAR-10 example notebook:
https://lnkd.in/ddB97nf
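A quick sketch of the typical perceptual-hash workflow with the package (the image directory is a placeholder; see the docs above for the full API):

```python
from imagededup.methods import PHash

phasher = PHash()

# Encode every image in a directory, then look for encodings
# within a small Hamming distance of each other
encodings = phasher.encode_images(image_dir="path/to/images")
duplicates = phasher.find_duplicates(encoding_map=encodings,
                                     max_distance_threshold=10)

# duplicates maps each filename to the list of its near-duplicates
for filename, dupes in duplicates.items():
    if dupes:
        print(filename, "->", dupes)
```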
❇️ @AI_Python_en
Python has many advantages, but speed is not one of them. Most production code in the enterprise currently runs on the JVM and .NET. Python has scikit-learn, xgboost and PyTorch, which makes it the de facto standard in AI, but it is still too slow. Before Kotlin, the JVM didn't have anything as convenient as Python. Now there's Kotlin: concise, intuitive and fast! Kotlin is already the programming language for Android; now it's time to make it the programming language for AI. What's needed is a lightweight, scalable JVM library that implements the fit/transform/predict interface of scikit-learn. I believe it's time to build it, and I believe Kotlin is an ideal language for that. If someone wants to lead this project, come forward and start building this library; I will provide publicity support. Burkov
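For reference, the scikit-learn contract such a library would mirror is small. A minimal Python sketch of the fit/transform/predict surface in question (classes and data are illustrative):

```python
import numpy as np

class SimpleScaler:
    """Minimal estimator exposing the scikit-learn-style fit/transform contract."""

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        self.mean_ = X.mean(axis=0)
        self.scale_ = X.std(axis=0) + 1e-12
        return self  # fit returns self so calls can be chained

    def transform(self, X):
        return (np.asarray(X, dtype=float) - self.mean_) / self.scale_

class SimpleNearestCentroid:
    """Minimal estimator exposing fit/predict."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

# Usage: the same three verbs any JVM port would need to reproduce
X = [[0, 0], [0, 1], [5, 5], [6, 5]]
y = [0, 0, 1, 1]
Xs = SimpleScaler().fit(X).transform(X)
print(SimpleNearestCentroid().fit(Xs, y).predict(Xs))  # [0 0 1 1]
```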
❇️ @AI_Python_en
Analysis of consumer surveys frequently consists of inspecting column totals and multiple two-way crosstabs. However, looking at data piecemeal increases the risk of spurious "significant" differences while, at the same time, missing patterns that are both real and important. A better approach is to statistically adjust for influential variables simultaneously. When done professionally, a very different picture may emerge than that suggested by column totals and two-way crosstabs. There is no need to statistically model every question in the survey, only the key ones. The models should be interpretable and guided by knowledge of the subjects the survey is addressing. Multivariate analysis is also an alternative to the standard weighting procedures used in consumer research and political polling.
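A toy illustration of the point in Python: a two-way crosstab versus a model that adjusts for another influential variable at the same time (the data and variable names are invented for the example):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000

# Synthetic survey: age drives both region and purchase intent,
# so a region-only crosstab can suggest a spurious "region effect".
age = rng.integers(18, 75, size=n)
region = np.where(rng.random(n) < (age - 18) / 80, "suburb", "city")
intent = (rng.random(n) < 0.15 + 0.006 * (age - 18)).astype(int)
df = pd.DataFrame({"age": age, "region": region, "intent": intent})

# Piecemeal view: two-way crosstab of intent by region
print(pd.crosstab(df["region"], df["intent"], normalize="index"))

# Multivariate view: adjust for age and region simultaneously
model = smf.logit("intent ~ C(region) + age", data=df).fit(disp=0)
print(model.summary().tables[1])  # the region effect shrinks once age is held constant
```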
❇️ @AI_PYTHON_EN
The list of accepted papers at the NeurIPS Graph Representation Learning Workshop 2019 is online!
https://grlearning.github.io/papers/
(Camera-ready versions will follow later this month.) Submission statistics and acceptance rates are below.
❇️ @AI_Python_en
NeurIPS 2019 accepted-paper visualization and statistics of keywords/institutions.
https://github.com/gsudllab/AcceptPaperAnalysis/blob/master/NeuIPS%202019.md
❇️ @AI_Python_EN
Code for "Learnable Triangulation of Human Pose" is released:
https://github.com/karfly/learnable-triangulation-pytorch
SOTA in 3D human pose estimation! #ICCV19
❇️ @AI_Python_EN
When Does Self-supervision Improve Few-shot Learning?
https://deepai.org/publication/when-does-self-supervision-improve-few-shot-learning
#Classifier #LossFunction
❇️ @AI_Python_EN
A great starting point for PyTorch Reinforcement Learning projects and a fantastic effort by Heinrich Küttler and colleagues for reproducible RL research! "Why PyTorch?" you might ask.
Announcing TorchBeast, an IMPALA-inspired PyTorch platform for distributed RL research, used in a growing number of projects at FacebookAI.
Paper:
https://arxiv.org/abs/1910.03552
Code:
https://github.com/facebookresearch/torchbeast
❇️ @AI_Python_EN
Generalized Inner Loop Meta Learning, aka Gimli: https://arxiv.org/abs/1910.01727
❇️ @AI_Python_EN
In parallel with this paper, FacebookAI has released higher, a library for bypassing limitations on taking higher-order gradients over an optimization process.
Library:
https://github.com/facebookresearch/higher
Docs:
https://higher.readthedocs.io
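A minimal sketch of the inner-loop pattern higher enables, following the usage shown in the project README (the model, data and step counts are illustrative):

```python
import torch
import higher

# Toy regression model and meta-learning setup (illustrative only)
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_support, y_support = torch.randn(16, 4), torch.randn(16, 1)
x_query, y_query = torch.randn(16, 4), torch.randn(16, 1)

meta_opt.zero_grad()

# innerloop_ctx yields a functional copy of the model and a differentiable
# optimizer, so the inner updates stay on the autograd graph
with higher.innerloop_ctx(model, opt, copy_initial_weights=False) as (fmodel, diffopt):
    for _ in range(5):  # inner-loop adaptation steps
        inner_loss = torch.nn.functional.mse_loss(fmodel(x_support), y_support)
        diffopt.step(inner_loss)

    # The outer (meta) loss backpropagates through the inner updates
    outer_loss = torch.nn.functional.mse_loss(fmodel(x_query), y_query)
    outer_loss.backward()

meta_opt.step()
print(f"meta loss: {outer_loss.item():.4f}")
```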
❇️ @AI_Python_EN
Yoshua Bengio, one of the pioneers of deep learning, now wants his algorithms to ask 'why' things happen:
https://www.wired.com/story/ai-pioneer-algorithms-understand-why/
❇️ @AI_PYTHON_EN
Introducing sotabench: a new service with the mission of benchmarking every open-source ML model. We run GitHub repos on free GPU servers to capture their results: compare them to papers and to other models, and see speed/accuracy trade-offs. Check it out:
https://sotabench.com
❇️ @AI_Python_EN
With 180+ papers mentioning Transformers and its predecessors, it was high time to put out a real paper that people could cite.
https://arxiv.org/abs/1910.03771
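The paper documents the Hugging Face Transformers library itself; a minimal usage sketch with the Auto classes, assuming the package is installed and pretrained weights can be downloaded (the checkpoint name is just an example):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load a pretrained checkpoint and its matching tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Encode a sentence and pull out its contextual token embeddings
input_ids = tokenizer.encode(
    "Transformers make pretrained NLP models easy to use.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(input_ids)

last_hidden_state = outputs[0]  # shape: (batch, tokens, hidden_size)
print(last_hidden_state.shape)
```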
❇️ @AI_Python_EN