Machine Learning Models
☞ https://morioh.com/p/1dc7518426c2
#TensorFlow #machinelearning
❇️ @AI_Python_EN
nbdev: use Jupyter Notebooks for everything
https://www.fast.ai//2019/12/02/nbdev/
https://github.com/fastai/nbdev/
❇️ @AI_Python_EN
Jupyter on Steroids: Create Packages, Tests, and Rich Documents https://t.co/w3K6D0Cgp6
Hackernoon
#Jupyter on Steroids: Create Packages, Tests, and Rich Documents | Hacker Noon
"I really do think [nbdev] is a huge step forward for programming environments": Chris Lattner, inventor of Swift, LLVM, and Swift Playgrounds.
Identifying Hate Speech with BERT and CNN
https://link.medium.com/7FaReCD781
Medium
Identifying Hate Speech with BERT and CNN
A tool that can help us to recognize online abuse and harassment by analyzing text
💡 What's the difference between bagging and boosting?
Bagging and boosting are both ensemble methods, meaning they combine many weak predictors to create a strong predictor.
One key difference is that bagging builds independent models in parallel and "averages" their results at the end, whereas boosting builds models sequentially, with each step focusing on reducing the error that remains by fitting more closely to the observations the previous models missed.
❇️ @AI_Python_EN
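A minimal scikit-learn sketch of that contrast, with estimators and parameters chosen purely for illustration rather than taken from the post:
```python
# Minimal sketch: bagging vs. boosting on a toy dataset (scikit-learn).
# The estimators and parameters here are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: many trees fit independently on bootstrap samples; their votes
# are combined ("averaged") only at the end.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Boosting: shallow trees fit one after another; each new tree up-weights
# the observations the previous trees misclassified.
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=100, random_state=0)

print("bagging :", cross_val_score(bagging, X, y, cv=5).mean())
print("boosting:", cross_val_score(boosting, X, y, cv=5).mean())
```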
Pre-Debate Material
“Yoshua Bengio, Revered Architect of AI, Has Some Ideas About What to Build Next”
The Turing Award winner wants AI systems that can reason, plan, and imagine
https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/yoshua-bengio-revered-architect-of-ai-has-some-ideas-about-what-to-build-next
❇️ @AI_Python_EN
Machine Learning in a company is 10% Data Science & 90% other challenges. It's VERY hard. Everything in this guide is ON POINT, and it's stuff you won't learn in an ML book: "Rules of Machine Learning: Best Practices for ML Engineering." This is a lifesaver.
project:
http://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf
Very interesting use of #AI to tackle bias in written text by automatically substituting words with more neutral wording. However, one must also consider the challenges and ramifications such technology could have for written language: it can not only accidentally change the meaning of what was written, it can also change the tone and expression of the author, neutralize the point of view, and strip emotion from the language.
#NLP
https://arxiv.org/pdf/1911.09709.pdf
❇️ @AI_Python_EN
Named Entity Recognition Benchmark: spaCy, Flair, m-BERT and camemBERT on anonymizing French commercial legal cases
http://bit.ly/2rq1I5H
#DataScience #MachineLearning #ArtificialIntelligence #NLP
❇️ @AI_Python_EN
Medium
NER algo benchmark: spaCy, Flair, m-BERT and camemBERT on anonymizing French commercial legal cases
Does (model) size matter?
"If the future can be different from the past and you don't have deep understanding, you should not rely on AI." - a rule from Ray Dalio for when to leverage machine learning for decision-making.
Full conversation:
❇️ @AI_Python_EN
YouTube
Ray Dalio: Principles, the Economic Machine, AI & the Arc of Life | Lex Fridman Podcast #54
Evolutionary Powell's method is a discrete optimization algorithm I've found useful for hyperparameter tuning.
It makes weaker assumptions than Bayesian methods (and so is more robust), but stronger assumptions than random exploration (and so has better performance). It fills in the gap between them a bit.
Here's the full post on how Evolutionary Powell's method works:
We develop it as part of End-to-End Machine Learning Course 314:
The open source Ponderosa optimization package where it lives:
The line-by-line code walkthrough:
❇️ @AI_Python_EN
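The links above hold the real write-up and the Ponderosa implementation. The sketch below is only a rough guess at the flavor of the idea, a coordinate-wise (Powell-style) search over discrete hyperparameter grids with random restarts; it is not Ponderosa's actual algorithm, and every name and parameter in it is invented for illustration.
```python
import random

# Rough, invented sketch of a Powell-flavored discrete hyperparameter search:
# improve one hyperparameter at a time, repeat until no single change helps,
# and restart from random configurations to escape local optima.
# This is NOT the actual algorithm in the Ponderosa package.

def coordinate_search(grid, evaluate, n_restarts=3, seed=0):
    """grid: dict mapping hyperparameter name -> list of candidate values.
    evaluate: callable taking a config dict and returning a score to maximize
    (e.g. cross-validated accuracy)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_restarts):
        # Each restart begins from a random configuration.
        cfg = {name: rng.choice(values) for name, values in grid.items()}
        score = evaluate(cfg)
        improved = True
        while improved:
            improved = False
            for name, values in grid.items():          # one coordinate at a time
                for v in values:
                    trial = {**cfg, name: v}
                    trial_score = evaluate(trial)
                    if trial_score > score:
                        cfg, score = trial, trial_score
                        improved = True
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy usage: this objective peaks at learning_rate=0.1, depth=4.
grid = {"learning_rate": [0.01, 0.1, 0.3], "depth": [2, 4, 8]}
best, score = coordinate_search(
    grid, lambda c: -abs(c["learning_rate"] - 0.1) - abs(c["depth"] - 4))
print(best, score)
```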
Deep Speech, a good #Persian podcast about #AI
We will talk about #ArtificialIntelligence, #MachineLearning and #DeepLearning news.
https://castbox.fm/channel/Deep-Speech-id2420707?country=us
❇️ @AI_Python_EN
Castbox
Deep Speech | Listen Free on Castbox.
We will talk about artificial intelligence, machine learning and deep learning news.
Machine Learning Tutorial Suite - 90+ Free Tutorials
https://data-flair.training/blogs/machine-learning-tutorials-home/
DataFlair
Machine Learning Tutorial – Learn Machine Learning using Python - DataFlair
Machine learning tutorial library - Package of 170+ free machine learning tutorials with lots of practicals, projects, case studies
NASA: Neural Articulated Shape Approximation.
Timothy Jeruzalski, Boyang Deng, Mohammad Norouzi, JP Lewis, Geoffrey Hinton, and Andrea Tagliasacchi
arxiv.org/abs/1912.03207
arXiv.org
NASA: Neural Articulated Shape Approximation
Efficient representation of articulated objects such as human bodies is an important problem in computer vision and graphics. To efficiently simulate deformation, existing approaches represent 3D...
What is My Data Worth? – The Berkeley Artificial Intelligence Research Blog
https://bair.berkeley.edu/blog/2019/12/16/data-worth/
The Berkeley Artificial Intelligence Research Blog
What is My Data Worth?
The BAIR Blog
The Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Deviance Information Criterion (DIC) are perhaps the most widely used information criteria (IC) in model building and selection. A fourth, Minimum Description Length (MDL), is closely related to the BIC. In a nutshell, they provide guidance as to which alternative model provides the most "bang for the buck," i.e., the best fit after penalizing for model complexity. Penalizing for complexity is important since, given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice; in line with Occam's razor, complex models sometimes perform poorly on data not used in the model building.
There are several other criteria, including AIC3, SABIC, and CAIC, and, as far as I am aware, no clear consensus among authorities as to which is "best" overall. The criteria will not necessarily agree on which model should be chosen. Cross-validation, the Predicted Residual Error Sum of Squares (PRESS) statistic (itself a kind of cross-validation), and Mallows' Cp are also used instead of IC. Information criteria are covered in varying levels of detail in most statistics textbooks and are the subject of numerous academic papers; I know of no single go-to source on this topic.
❇️ @AI_Python_EN
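As a concrete illustration (the data and the polynomial candidates below are invented for this example, not taken from the post): AIC = 2k − 2·ln L and BIC = k·ln n − 2·ln L, where k is the number of estimated parameters, n the sample size, and L the maximized likelihood; lower values indicate a better complexity-adjusted fit.
```python
import numpy as np
import statsmodels.api as sm

# Illustrative sketch: compare polynomial regressions of increasing degree
# by AIC and BIC. The data-generating model here is quadratic, so both
# criteria should bottom out near degree 2, with BIC penalizing the extra
# terms more heavily (its preference for simpler models).
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(scale=0.5, size=n)

for degree in range(1, 6):
    X = sm.add_constant(np.column_stack([x**d for d in range(1, degree + 1)]))
    fit = sm.OLS(y, X).fit()
    print(f"degree {degree}: AIC = {fit.aic:.1f}   BIC = {fit.bic:.1f}")
```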