AI, Python, Cognitive Neuroscience
A good starting point for mastering a new skill from scratch.

Some of the covered topics are:
1) Careers (complete learning paths to become a master)
2) Topics (comprehensive guides about a specific topic)
3) Tools
4) Research

https://github.com/clone95/Virgilio

#machinelearning #AI #getting_started
@AI_Python_EN
Starting weights can matter a lot for training a neural net. Read this deeplearning.ai tutorial on initializing your neural network:
http://bit.ly/2XmzHGu
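
As a quick illustration, here is a minimal NumPy sketch of two standard schemes (Xavier/Glorot and He initialization); the layer sizes are made-up examples, and this is a sketch of the general idea, not the tutorial's exact code:

import numpy as np

rng = np.random.default_rng(42)

def xavier_init(fan_in, fan_out):
    # Glorot: variance scaled by the average of fan-in and fan-out (suits tanh/sigmoid)
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He: variance scaled by fan-in (suits ReLU)
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W1 = he_init(784, 256)  # hypothetical first layer of an MNIST-sized MLP
print(W1.std())         # close to sqrt(2/784), i.e. about 0.0505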

✴️ @AI_Python_EN
Wow, nice: there's a new convolution operation for CNNs called "Octave Convolution (OctConv)" that can be used as a drop-in replacement for plain vanilla convolutions without any adjustments to the network architecture. The idea behind OctConv is pretty cool. In images, information is conveyed at different frequencies: high frequencies carry fine details, whereas low frequencies carry global structure.

The idea, then, is to factorize the feature maps into high-frequency and low-frequency maps and reduce the spatial resolution of the low-frequency maps by an octave. This not only lowers memory and compute cost but also improves evaluation results such as accuracy on image classification. Can't wait to see this in Keras/TensorFlow! #deeplearning #machinelearning
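
In the meantime, here is a rough tf.keras sketch of the idea; this is my own simplified take (the alpha channel split, filter counts, and pooling/upsampling choices are assumptions), not the paper's exact operation:

import tensorflow as tf
from tensorflow.keras import layers

def octave_conv(x_high, x_low, filters, alpha=0.5, kernel_size=3):
    # alpha = fraction of channels kept at low (half, i.e. one octave down) resolution
    f_low = int(filters * alpha)
    f_high = filters - f_low
    # high -> high, and high -> low (downsample by one octave first)
    h2h = layers.Conv2D(f_high, kernel_size, padding="same")(x_high)
    h2l = layers.Conv2D(f_low, kernel_size, padding="same")(
        layers.AveragePooling2D(2)(x_high))
    # low -> low, and low -> high (convolve, then upsample one octave)
    l2l = layers.Conv2D(f_low, kernel_size, padding="same")(x_low)
    l2h = layers.UpSampling2D(2)(
        layers.Conv2D(f_high, kernel_size, padding="same")(x_low))
    # each output branch sums the contributions arriving at its resolution
    return layers.add([h2h, l2h]), layers.add([l2l, h2l])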

Paper: https://lnkd.in/dckWSDq

✴️ @AI_Python_EN
Ian Goodfellow: Generative Adversarial Networks (GANs)
This conversation with Ian led me to rethink the way I see several basic ideas in deep learning, including generative models, adversarial learning, and reasoning. I definitely enjoyed it and hope you do as well.

✴️ @AI_Python_EN
17 equations that changed the world
✴️ @AI_Python_EN
#DeepLearning is fun when you have loads of GPUs!

Here's a 256 GB, 8-GPU cluster we will soon be testing as well.

#gpu #nvidia #research
#machinelearning
✴️ @AI_Python_EN
Stanford ML Group just released a knee-injury dataset they're calling MRNet.

Paper: https://lnkd.in/dwik_zz

Dataset: https://lnkd.in/dwS96AD
#ml #knee #injury #stanford #dataset #deeplearning

https://lnkd.in/dDpD38u

✴️ @AI_Python_EN
Neural Painters: A learned differentiable constraint for generating brushstroke paintings

Nice paper combining ideas from world models and style transfer

paper: https://lnkd.in/gpWm3y9

github: https://lnkd.in/gVuExnm


✴️ @AI_Python_EN
Great explanation of the permutation test.
Should alpacas be shampooed? ;-)

https://lnkd.in/eXqA7ze
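
For anyone who wants to play with it, here is a minimal two-sample permutation test in plain NumPy; the setup (a difference in group means with a two-sided p-value, on made-up data) is just an assumed illustration:

import numpy as np

def permutation_test(a, b, n_perm=10_000, seed=0):
    # estimate a two-sided p-value for the observed difference in group means
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel the groups at random
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        count += abs(diff) >= abs(observed)
    return count / n_perm

rng = np.random.default_rng(1)
print(permutation_test(rng.normal(0.3, 1, 50), rng.normal(0.0, 1, 50)))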

✴️ @AI_Python_EN
What makes a company unwilling to spend on #AI development?

A. They don't have the budget
B. They don't understand what AI is
C. They're just stingy in general
D. Talent shortage
E. No satisfactory consultant/vendor in their area
F. Other, ...

I'm curious about your experience; please choose, and if you have multiple reasons, please start with the most relevant one.

✴️ @AI_Python_EN
CS294-158 Deep Unsupervised Learning Spring 2019

About: This course will cover two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning.

Instructors: Pieter Abbeel, Peter Chen, Jonathan Ho, Aravind Srinivas

CS294-158: https://lnkd.in/eq6ZKAn

#DeepLearning #GenerativeModels #UnsupervisedLearning

✴️ @AI_Python_EN
Some random thoughts on p-values...

Inferential statistics comes into play when we wish to generalize a result from a sample to the population from which the sample was drawn.

The type of sampling procedure used must be taken into account. This is important since most statistical programs assume simple random sampling.

The quality of the sample and the definition of the population must also be considered. A textbook-quality sample from the wrong population, for example, could seriously mislead us.

Coverage problems and non-response in the case of surveys can be serious problems.

Measurement error and missing data can wreak havoc on the fanciest of analytics.

Distributional assumptions should not be ignored.

The "So What?" test, IMO, is most important. A very large and highly statistically significant correlation may have little significance to decision-makers. Conversely, a tiny correlation might be big news.
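
A quick made-up illustration of that point in Python: with a big enough sample, even a trivially small correlation comes out "highly significant".

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)  # true correlation is only about 0.01

r, p = stats.pearsonr(x, y)
print(f"r = {r:.4f}, p = {p:.2e}")  # tiny r, yet p is essentially zero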

After 100 years, if so many scientists and researchers still can't get their heads around p-values, what are the chances that Bayesian statistics will fare any better?

For much deeper thoughts on this important topic, see "Statistical Inference in the 21st Century: A World Beyond p < 0.05":
https://www.tandfonline.com/toc/utas20/current

"Total Survey Error: Past, Present, and Future" may also be of interest:
https://academic.oup.com/poq/article/74/5/849/1817502

Inferential statistics are often used inappropriately, IMO. One example would be performing a t-test to assess whether a regression coefficient is really zero in the population... when the regression was performed on the population data. Similarly, significance testing is frequently used in model diagnostics when it might be more sensible to investigate how a potential violation of an assumption might be affecting the model.

✴️ @AI_Python_EN
I love the Machine Learning and NLP articles published by Medium's Towards Data Science. They motivate each article really well, provide just the right amount of mathematical explanation, show really cool visualizations, and provide code snippets. Most importantly, they have been written fairly recently (2018-2019), so the results and references they contain are pretty much state-of-the-art today. Here are some of my favourite articles.

RNNs and LSTMs: https://lnkd.in/eWM-ncT

Variational Autoencoders: https://lnkd.in/enp4KQs

Transformers: https://lnkd.in/e2JQbkG

CNNs: https://lnkd.in/esrqMZH

✴️ @AI_Python_EN
Introduction to Text Wrangling Techniques for Natural Language Processing
https://bit.ly/2GzNgg1
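
A tiny taste of what such wrangling looks like with NLTK; the steps here (lowercasing, tokenizing, dropping stopwords and punctuation) are the usual suspects, not necessarily the article's exact pipeline:

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)      # tokenizer models (one-time download)
nltk.download("stopwords", quiet=True)  # stopword lists

def wrangle(text):
    tokens = word_tokenize(text.lower())
    stops = set(stopwords.words("english"))
    return [t for t in tokens if t.isalpha() and t not in stops]

print(wrangle("The quick brown fox jumps over the lazy dog!"))
# ['quick', 'brown', 'fox', 'jumps', 'lazy', 'dog']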

Generalized Language Models http://bit.ly/2TRDwSE #AI #DeepLearning #MachineLearning #DataScience

✴️ @AI_Python_EN
How to start learning data science with zero programming experience

1. Start learning data science with zero programming experience https://lnkd.in/fUZKqjg

2. Selecting a course on data science https://lnkd.in/fXBw833

3. From Excel to Pandas https://lnkd.in/fnU5apw (see the short pandas sketch after this list)

4. Communication & Data Storytelling https://lnkd.in/eqf5gUV

5. Data Manipulation with Python https://lnkd.in/g4DFNpJ

6. Data Visualization with Python (Matplotlib/Seaborn): https://lnkd.in/g_3fx_6

7. Advanced Pandas https://lnkd.in/fZWGp9B

8. Tricks on Pandas by Real Python https://lnkd.in/fXc9XSp

9. Becoming Efficient with Pandas https://lnkd.in/f64hU-Y

10. Advanced Pandas Tips https://lnkd.in/fGyBc4c

11. Jupyter Notebook (Beginner) https://lnkd.in/fTFinFi

12. Jupyter Notebook (Advanced)
https://lnkd.in/fFufePv
YouTube: https://lnkd.in/ftVzrtk
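
And for item 3, here is the kind of Excel-to-pandas translation those links walk through; the little sales table is made up purely for illustration:

import pandas as pd

df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "units": [10, 3, 7, 12],
    "price": [2.5, 4.0, 2.5, 4.0],
})

df["revenue"] = df["units"] * df["price"]        # like an Excel formula column
print(df[df["revenue"] > 20])                    # like an Excel filter
print(df.groupby("region")["revenue"].sum())     # like a pivot table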

✴️ @AI_Python_EN
Functional brain network architecture supporting the learning of social networks in humans

Tompson et al.: https://lnkd.in/e4r93sC

#brainnetworks #neuroscience #socialnetworks #neuralnetworks

✴️ @AI_Python_EN
Generating adversarial patches against YOLOv2. Very cool paper on adversarial attacks, in this case on a person detector. Understanding adversarial attacks on machine learning models is an important research field for building more robust models. Code is also provided, plus a really funny demo video :) Check it out! #deeplearning #machinelearning

📜 Paper: https://lnkd.in/daJEPqj
🔤 Code: https://lnkd.in/dPGFhwE
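
To make the general idea concrete, here is a minimal PyTorch sketch of FGSM, a much simpler classic attack than the paper's printed patch; the model, inputs, and epsilon are placeholders:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    # perturb x in the direction that increases the classification loss
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in a valid [0, 1] range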

✴️ @AI_Python_EN
Some random thoughts about AI BS...

Humans design, implement and use AI, thus AI cannot eliminate human error.

AI is "almost instantaneous" only after it has been designed, tested, implemented and evaluated. Many human tasks and decisions are one-off, and for these AI is slow and impractical.

Adaptive surveys have been used for decades in marketing research and other fields. There are things called skips...At a more sophisticated level, they are an outgrowth of adaptive testing, which psychometricians have been investigating for decades. That nut is not entirely cracked.

Text mining is a legitimate application of AI and has been since the 1950s.

I now see programmatic advertising, which has been used for quite some time now, rebranded as AI.

Chatbots are still very much a work in progress and, as such, haven't revolutionized anything. AI cannot read hearts and minds. Sorry.

Eye tracking has been used since the 1920s. Facial imaging is newer but not new. In these contexts, the people selling them used to refer to neural nets as neural nets, not as AI.

Automated demand forecasting has been around at least since the mid-70s when AFS launched Autobox. Inventory management and control has been increasingly automated since the 1960s. They were never called #AI.

I'd better stop now...

✴️ @AI_Python_EN