AI, Python, Cognitive Neuroscience
As a #datascience professional, you are bound to come across applications and problems that are best solved with #LinearProgramming. Better get started today with these two awesome tutorials:

Introductory guide on Linear Programming for (aspiring) #datascientists - https://lnkd.in/fWcqKMn

A Beginner’s guide to Shelf Space Optimization using Linear Programming - https://lnkd.in/f8swcdR
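If you want a feel for what these tutorials cover before diving in, here is a minimal sketch (not taken from either tutorial) that solves a made-up two-product mix problem with SciPy's linear-programming solver; the profits and resource limits are invented for illustration.

```python
from scipy.optimize import linprog

# Maximize 20*x1 + 30*x2 (profit per unit of two hypothetical products)
# subject to:  x1 + 2*x2 <= 40   (labour hours)
#              3*x1 +  x2 <= 60  (raw material)
#              x1, x2 >= 0
# linprog minimizes, so the objective coefficients are negated.
c = [-20, -30]
A_ub = [[1, 2],
        [3, 1]]
b_ub = [40, 60]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal units:", res.x)      # -> [16. 12.]
print("maximum profit:", -res.fun)  # -> 680.0
```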
✴️ @AI_Python_EN
Comprehensive Collection of #DataScience and #MachineLearning Resources for #DataScientists includes “Great Articles on Natural Language Processing” + much more 👉 https://bit.ly/2nvMXIx #abdsc #BigData #AI #DeepLearning #Databases #Coding #Python #Rstats #NeuralNetworks #NLProc

✴️ @AI_Python_EN
A collection of research papers on decision trees, classification trees, and regression trees with implementations:
https://github.com/benedekrozemberczki/awesome-decision-tree-papers
#BigData #MachineLearning #AI #DataScience #Algorithms #NLProc #Coding #DataScientists
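For a hands-on counterpart to the papers (this snippet is not from the repo), here is a quick scikit-learn sketch of a small classification tree on the Iris data, just to ground the terminology:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# max_depth limits tree growth and is the usual first knob against overfitting
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print(export_text(clf))  # plain-text view of the learned splits
```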

✴️ @AI_Python_EN
Convolutional #NeuralNetworks (CNN) for Image Classification — a step-by-step illustrated tutorial: https://dy.si/hMqCH
#BigData #AI #MachineLearning #ComputerVision #DataScientists #DataScience #DeepLearning #Algorithms
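As a rough sketch of the kind of network such a tutorial builds (the tutorial's own framework and layer sizes may differ), here is a small tf.keras CNN for 28x28 grayscale images:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),               # e.g. MNIST-sized grayscale images
    layers.Conv2D(32, (3, 3), activation="relu"),  # learn local edge/texture filters
    layers.MaxPooling2D((2, 2)),                   # downsample, keep strongest responses
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),        # 10-way class probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5, validation_split=0.1)  # with (N, 28, 28, 1) inputs
```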

✴️ @AI_Python_EN
Model interpretation and feature importance are key skills for #datascientists to learn when running #machinelearning models. Here is a snippet from the #Genomics perspective.
a) Feature importance scores highlight the parts of the input most predictive of the output. For DNA sequence-based models, these can be visualized as a sequence logo of the input sequence, with letter heights proportional to the feature importance score, which may also be negative (visualized by letters drawn upside down).
b) Perturbation-based approaches perturb each input feature (left) and record the change in model prediction (centre) in the feature importance matrix (right). For DNA sequences, the perturbations correspond to single-base substitutions (see the sketch after the paper link below).
c) Backpropagation-based approaches compute feature importance scores using gradients, or augmented gradients such as DeepLIFT (Deep Learning Important FeaTures), of the model prediction with respect to the input features.
Link to this lovely paper:
https://lnkd.in/dfmvP9c
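To make panel (b) concrete, here is a toy sketch of the in-silico mutagenesis idea: substitute each base in turn, re-run the model, and record the change in prediction. The model_predict function is only a stand-in for a real trained sequence model.

```python
import numpy as np

BASES = "ACGT"

def model_predict(seq):
    # placeholder model: GC content of the sequence; swap in a real trained model
    return sum(b in "GC" for b in seq) / len(seq)

def ism_importance(seq):
    """Return a (4, len(seq)) matrix of prediction changes per single-base substitution."""
    ref = model_predict(seq)
    scores = np.zeros((len(BASES), len(seq)))
    for pos in range(len(seq)):
        for i, base in enumerate(BASES):
            if base == seq[pos]:
                continue  # reference base: no perturbation, score stays 0
            mutated = seq[:pos] + base + seq[pos + 1:]
            scores[i, pos] = model_predict(mutated) - ref
    return scores

print(ism_importance("ACGTGC"))
```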

❇️ @AI_Python_EN