Amazon Web Services
◦ Amazon DynamoDB - 25 GB NoSQL database
◦ AWS Lambda - 1 million requests per month
◦ Amazon SNS - 1 million publishes per month
◦ Amazon CloudWatch - 10 custom metrics and 10 alarms
◦ Amazon S3 Glacier - 10 GB long-term object storage
◦ Amazon SQS - 1 million message queue requests
◦ AWS CodeBuild - 100 minutes of build time per month
◦ AWS CodeCommit - 5 active users per month
◦ AWS CodePipeline - 1 active pipeline per month
IBM Cloud
◦ Cloud Functions - 5 million executions per month
◦ Object Storage - 25 GB per month
◦ Cloudant database - 1 GB of data storage
◦ Db2 database - 100 MB of data storage
◦ API Connect - 50,000 API calls per month
◦ Availability Monitoring - 3 million data points per month
◦ Log Analysis - 500 MB of daily logs
Oracle Cloud
◦ Compute - 2 VM.Standard.E2.1.Micro instances, 1 GB RAM each
◦ Block Volume - 2 volumes, 100 GB total (used for compute)
◦ Object Storage - 10 GB
◦ Load Balancer - 1 instance with 10 Mbps
◦ Databases - 2 DBs, 20 GB each
◦ Monitoring - 500 million ingestion datapoints, 1 billion retrieval datapoints
◦ Bandwidth - 10 TB egress per month
◦ Notifications - 1 million delivery options per month, 1,000 emails sent per month
- 611 datasets you can download in one line of Python
- 467 languages covered, 99 with at least 10 datasets
- efficient pre-processing to free you from memory constraints
https://github.com/huggingface/datasets
Q: Should we remove duplicate records (i.e. records with exactly the same features) from the dataset before training an ML model?
A: It depends. If the duplicated records refer to a single instance/event (e.g. the same instance was captured twice), they should be removed. For example, by looking at the customer_IDs we may notice that some customers appear twice in our data. In this case we should deduplicate; otherwise the ML model cannot estimate the prior probability distribution correctly.
On the other hand, if records with the same features belong to different instances/events, we should keep them. For example, if two customers happen to have the same age, sex, balance, etc., both of their records should be used to train the model.
For a better understanding, consider a Naive Bayes classifier: if we remove samples that merely share the same features, the model misestimates the class prior probabilities, which ultimately distorts its output.
Intuitively, the model needs to know the frequency/distribution of those duplicated records.
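The distinction above can be sketched in a few lines of plain Python (the `customer_ID` field and the records are hypothetical, for illustration only): deduplicate on the identifier, not on the features.

```python
# Deduplicate by customer_ID, but keep rows whose *features* happen
# to coincide across different customers.
records = [
    {"customer_ID": 1, "age": 35, "sex": "F", "balance": 1200},
    {"customer_ID": 1, "age": 35, "sex": "F", "balance": 1200},  # same customer captured twice -> drop
    {"customer_ID": 2, "age": 35, "sex": "F", "balance": 1200},  # different customer, same features -> keep
]

seen, deduped = set(), []
for row in records:
    if row["customer_ID"] not in seen:
        seen.add(row["customer_ID"])
        deduped.append(row)

print(len(deduped))  # 2: one row per customer, feature-duplicates preserved
```

Had we deduplicated on the feature columns instead, only one of the three rows would survive, and the estimated class priors would shift accordingly.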
Automated workplace #ergonomics assessment using motion capture to remove risk factors that lead to musculoskeletal injuries (#MSD) and to improve human performance and #productivity. Ergo simulation software by Nawo Solution & Pierre FOUBERT, Wilo Group
In the last 10 years, AI-related PhDs have grown from 14.2% of all CS PhDs granted in the U.S. to around 23% as of 2019, according to the CRA survey. Over the same period, other previously popular CS PhD specializations have declined in popularity, including networking, software engineering, and programming.
GenoML: Automated Machine Learning for Genomics
pdf: arxiv.org/pdf/2103.03221…
abs: arxiv.org/abs/2103.03221
project page: genoml.com
How to Automate Exploratory Data Analysis (EDA) ? - Part 1 https://youtu.be/tMquUTJ6yXU
Worth knowing when you want to expedite data analysis 🧐 I strongly recommend using this module on your real-world problems. It will help you a lot.
This website will help you learn probability and statistics, among the most important math topics for machine learning!
seeing-theory.brown.edu
Don’t forget to add in bookmarks 🔖