IamPython
This is a Python-based Telegram group for web development, artificial intelligence, web scraping, data science, data analysis, ethical hacking, and more. You will learn lots of insights and useful information.
Every week we connect for DataScience Dialogue. This week we discussed AutoML and DataRobot Demo #2. I demonstrated DataRobot, a market-leading AutoML tool.
Machine learning steps:


Analyze the problem
Gather the data
Prepare the data
Choose the right model
Train the model
Evaluate the results
Look for biases
Tune it
Deploy the model
Monitor it
Retrain it
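The core of those steps (gather, prepare, train, evaluate) can be sketched in a few lines of scikit-learn. The dataset and model here are illustrative choices, not part of the original post:

```python
# Minimal sketch of the gather -> prepare -> train -> evaluate loop.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Gather the data
X, y = load_iris(return_X_y=True)

# Prepare the data: hold out a test set for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Choose and train the model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate the results
acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

Tuning, deployment, monitoring, and retraining would wrap around this loop in a real project.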
Fixing overfitting:

Simplify the model (fewer parameters)
Simplify training data (fewer attributes)
Constrain the model (regularization)
Use cross-validation
Use Early stopping
Build an ensemble
Gather more data
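Two of the fixes above, constraining the model with regularization and checking with cross-validation, can be shown together. This is a toy sketch with made-up data, not a recipe:

```python
# Sketch: an over-flexible polynomial model vs. the same model constrained
# with L2 regularization (Ridge), compared via cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=30)

# Degree-15 polynomial with only 30 samples: prone to overfitting.
overfit = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
regularized = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1.0))

unreg_score = cross_val_score(overfit, X, y, cv=5).mean()
ridge_score = cross_val_score(regularized, X, y, cv=5).mean()
print("unregularized CV R^2:", unreg_score)
print("ridge CV R^2:        ", ridge_score)
```

The cross-validation score exposes the overfitting that the training score would hide.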
Fixing underfitting:

More complex model (more parameters)
Increase number of features
Feature engineering should help
Unconstrain the model (no regularization)
Reduce noise on the data
Train for longer
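The "increase number of features" fix can be seen with a toy example: a straight line underfits quadratic data, and adding polynomial features repairs it. Data and degrees here are illustrative assumptions:

```python
# Sketch: fixing underfitting by adding features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(1)
X = rng.uniform(-3, 3, size=(100, 1))
y = X.ravel() ** 2 + rng.normal(scale=0.1, size=100)  # quadratic signal

# A plain line cannot capture x^2 on symmetric data.
underfit = LinearRegression().fit(X, y)
# Adding a squared feature makes the same linear model expressive enough.
richer = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

print("linear R^2:   ", underfit.score(X, y))
print("quadratic R^2:", richer.score(X, y))
```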
IamPython pinned «I just want to know what do you want to learn ? hit the poll. Thank you.»
Common convolutional neural network (CNN) architectures:

LeNet-5
AlexNet
VGG-16
Inception-v1
Inception-v3
ResNet-50
Xception
Inception-v4
Inception ResNets
ResNeXt-50
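The first architecture on the list, LeNet-5, is small enough to write out in full. Here is a sketch in PyTorch following the classic layer sizes (conv 6/16, fully connected 120/84/10 for a 32x32 input); activation and pooling choices vary between descriptions:

```python
# LeNet-5 sketch in PyTorch (classic layer sizes, ~61.7k parameters).
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # 32 -> 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),  # 14 -> 10 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, 10),  # 10 output classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5()
out = model(torch.zeros(1, 1, 32, 32))
print(out.shape)
```

The later architectures on the list (ResNet, Inception, etc.) are available pretrained in torchvision rather than written by hand.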
An important reference for deploying AI models and DevOps configurations: the free-tier limits of the major cloud providers.
Amazon Web Services
◦ Amazon DynamoDB - 25GB NoSQL DB
◦ AWS Lambda - 1 million requests per month
◦ Amazon SNS - 1 million publishes per month
◦ Amazon CloudWatch - 10 custom metrics and 10 alarms
◦ Amazon S3 Glacier - 10GB long-term object storage
◦ Amazon SQS - 1 million messaging queue requests
◦ AWS CodeBuild - 100min of build time per month
◦ AWS CodeCommit - 5 active users per month
◦ AWS CodePipeline - 1 active pipeline per month
IBM Cloud
◦ Cloud Functions - 5 million executions per month
◦ Object Storage - 25GB per month
◦ Cloudant database - 1 GB of data storage
◦ Db2 database - 100MB of data storage
◦ API Connect - 50,000 API calls per month
◦ Availability Monitoring - 3 million data points per month
◦ Log Analysis - 500MB of daily log
Oracle Cloud
◦ Compute - 2 VM.Standard.E2.1.Micro instances, 1GB RAM each
◦ Block Volume - 2 volumes, 100 GB total (used for compute)
◦ Object Storage - 10 GB
◦ Load balancer - 1 instance with 10 Mbps
◦ Databases - 2 DBs, 20 GB each
◦ Monitoring - 500 million ingestion datapoints, 1 billion retrieval datapoints
◦ Bandwidth - 10TB egress per month
◦ Notifications - 1 million delivery options per month, 1000 emails sent per month
PyTorch 1.8 released with native AMD (ROCm) support!
Q: Shall we remove the duplicate records (i.e. records with exactly the same features) from the dataset before training an ML model?

A: It depends. If the duplicated records belong to a single instance/event (e.g. when one instance is captured twice), they should be removed. For example, by looking at the customer_IDs, we may notice some of the customers are duplicated in our data. In this case, we should deduplicate. Otherwise, the ML model cannot estimate the prior probability distribution correctly.

On the other hand, if the records with the same features belong to different instances/events, we should keep them. For example, if two customers have the same age, sex, balance, etc., their data should both be used to train the model.

To get a better understanding, consider a Naive Bayes model for a classification problem. By removing the samples with the same features, the model misestimates the prior probabilities, which eventually affects the output.

Intuitively, the model needs to know the frequency/distribution of those duplicated records.
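The distinction above is easy to express with pandas: drop rows duplicated on the identifier (the same instance captured twice), but keep rows that merely share feature values across different customers. Column names and values here are illustrative:

```python
# Sketch: deduplicate on the identifier, not on the features.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],   # customer 1 was captured twice
    "age":         [30, 30, 30, 45],
    "balance":     [100, 100, 100, 250],
})

# Same instance recorded twice -> collapse to one row.
deduped = df.drop_duplicates(subset="customer_id")
print(len(deduped))  # 3 rows: customers 1, 2, 3

# Customers 1 and 2 share identical features; both rows stay, so the
# model still sees the true frequency of that feature combination.
print(deduped[["age", "balance"]].duplicated().sum())  # 1 feature-duplicate kept
```

Deduplicating on all columns instead of the ID would have silently dropped customer 2 and distorted the prior.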