Stack and Concatenate Numpy Arrays in Python
https://www.pythonforbeginners.com/basics/stack-and-concatenate-numpy-arrays-in-python
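A minimal sketch of the difference between the two operations, assuming two small 1-D arrays (not taken from the linked article):

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# concatenate joins arrays along an existing axis -> shape (6,)
print(np.concatenate((a, b)))

# stack joins arrays along a new axis -> shape (2, 3)
print(np.stack((a, b)))

# vstack / hstack are common shortcuts for row- and column-wise joining
print(np.vstack((a, b)))   # shape (2, 3)
print(np.hstack((a, b)))   # shape (6,)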
@raspberry_python
Forwarded from Python4Finance
Introduction to the Normal Distribution @Python4finance.pdf
615.1 KB
Introduction to the normal distribution, and normalization and standardization methods
These slides, part of the Probability and Statistics for Data Science course, cover the concept of the normal distribution and use Python to walk through normality tests as well as standardization and normalization methods (a short Python sketch follows at the end of this post). Reading them is recommended for anyone interested in statistics or machine learning.
#slides
#tutorial
#normal_distribution
#standardization
#normalization
✅ Python for Finance on Telegram
https://t.me/python4finance
✅ Python for Finance on Bale
https://ble.ir/python4finance
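A minimal Python sketch of the ideas covered in the slides, using synthetic data for illustration only:

import numpy as np
from scipy import stats
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# synthetic sample drawn from a normal distribution (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(loc=50, scale=10, size=500)

# Shapiro-Wilk normality test: p > 0.05 means we cannot reject normality
stat, p = stats.shapiro(x)
print(f"Shapiro-Wilk p-value: {p:.3f}")

# standardization: zero mean, unit variance
z = StandardScaler().fit_transform(x.reshape(-1, 1))

# min-max normalization: rescale to the [0, 1] range
m = MinMaxScaler().fit_transform(x.reshape(-1, 1))
print(z.mean(), z.std(), m.min(), m.max())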
Algorithm
Linear regression
Description
Finds a way to correlate each feature to the output to help predict future values.
Type
Regression
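A minimal scikit-learn sketch; the data here is made up for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

# toy data: y is roughly 3*x + 2 with a little noise
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([5.1, 8.0, 10.9, 14.2, 17.1])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # learned slope and intercept
print(model.predict([[6]]))            # predict a future value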
@raspberry_python
Algorithm
Logistic regression
Description
Extension of linear regression that’s used for classification tasks. The output variable is binary (e.g., only black or white) rather than continuous (e.g., an infinite list of potential colors).
Type
Classification
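A minimal scikit-learn sketch on toy binary data (assumed for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression

# toy data: binary output (0 or 1) rather than a continuous value
X = np.array([[1], [2], [3], [10], [11], [12]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict([[2], [11]]))        # predicted classes
print(clf.predict_proba([[6]]))        # class probabilities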
@raspberry_python
Algorithm
Decision tree
Description
Highly interpretable classification or regression model that splits feature values into branches at decision nodes (e.g., if a feature is a color, each possible color becomes a new branch) until a final decision output is made.
Type
Regression
Classification
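A minimal scikit-learn sketch using the iris dataset as an assumed example (a DecisionTreeRegressor works the same way for regression):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# iris: classify flowers by splitting feature values at decision nodes
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# the learned splits can be printed, which is what makes trees interpretable
print(export_text(tree))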
@raspberry_python
Algorithm
Naive Bayes
Description
Naive Bayes is a classification method that makes use of Bayes’ theorem. The theorem updates the prior knowledge of an event with the probability of each feature that can affect the event, with the features assumed to be independent of one another.
Type
Classification
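A minimal scikit-learn sketch using Gaussian Naive Bayes on the iris dataset (assumed example):

from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

# Gaussian Naive Bayes: applies Bayes' theorem assuming independent features
X, y = load_iris(return_X_y=True)
nb = GaussianNB().fit(X, y)
print(nb.predict(X[:3]))        # predicted classes
print(nb.predict_proba(X[:1]))  # posterior class probabilities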
@raspberry_python
Algorithm
Support vector machine
Description
Support Vector Machine, or SVM, is typically used for classification tasks.
The SVM algorithm finds a hyperplane that optimally divides the classes; with a non-linear kernel it can also separate classes that are not linearly separable.
Type
Regression (not very common)
Classification
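A minimal scikit-learn sketch using the iris dataset as an assumed example (SVR is the regression counterpart):

from sklearn.datasets import load_iris
from sklearn.svm import SVC

# SVC with an RBF (non-linear) kernel finds a separating hyperplane
# in a transformed feature space
X, y = load_iris(return_X_y=True)
svm = SVC(kernel="rbf").fit(X, y)
print(svm.predict(X[:3]))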
@raspberry_python
Algorithm
Random forest
Description
The algorithm is built upon decision trees to drastically improve accuracy. Random forest generates many simple decision trees and uses a ‘majority vote’ to decide which label to return: for classification, the final prediction is the class with the most votes; for regression, it is the average prediction of all the trees.
Type
Regression
Classification
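A minimal scikit-learn sketch on the iris dataset (assumed example):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# many simple decision trees vote; the majority class is the prediction
X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict(X[:3]))
# RandomForestRegressor averages the trees' outputs instead of voting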
@raspberry_python
Algorithm
AdaBoost
Description
Classification or regression technique that uses a multitude of models to come up with a decision but weighs them based on their accuracy in predicting the outcome
Type
Regression
Classification
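A minimal scikit-learn sketch on the iris dataset (assumed example; AdaBoostRegressor exists for regression):

from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier

# each weak model is weighted by how accurately it predicts the outcome
X, y = load_iris(return_X_y=True)
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print(ada.predict(X[:3]))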
@raspberry_python
Algorithm
Gradient-boosting trees
Description
Gradient-boosting trees are a state-of-the-art classification/regression technique. Each new tree focuses on the errors made by the previous trees and tries to correct them.
Type
Regression
Classification
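A minimal scikit-learn sketch on the iris dataset (assumed example; GradientBoostingRegressor is the regression counterpart):

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

# each new tree is fit to the errors left by the previous trees
X, y = load_iris(return_X_y=True)
gb = GradientBoostingClassifier(random_state=0).fit(X, y)
print(gb.predict(X[:3]))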
@raspberry_python
Unsupervised learning
In unsupervised learning, an algorithm explores input data without being given an explicit output variable (e.g., explores customer demographic data to identify patterns)
You can use it when you do not know how to classify the data, and you want the algorithm to find patterns and classify the data for you
@raspberry_python
Algorithm Name
K-means clustering
Description
Puts data into k groups, each containing data with similar characteristics (as determined by the model, not in advance by humans).
Type
Clustering
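A minimal scikit-learn sketch on made-up 2-D points:

import numpy as np
from sklearn.cluster import KMeans

# toy points; the k=2 groups are discovered by the model, not given by humans
X = np.array([[1, 1], [1.5, 2], [1, 0], [8, 8], [9, 9], [8, 9]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster assignment of each point
print(km.cluster_centers_)  # the learned group centres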
@raspberry_python
Algorithm Name
Gaussian mixture model
Description
A generalization of k-means clustering that provides more flexibility in the size and shape of groups (clusters)
Type
Clustering
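A minimal scikit-learn sketch on the same kind of made-up points:

import numpy as np
from sklearn.mixture import GaussianMixture

# like k-means, but each cluster is a Gaussian with its own shape and size
X = np.array([[1, 1], [1.5, 2], [1, 0], [8, 8], [9, 9], [8, 9]])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.predict(X))        # hard cluster labels
print(gmm.predict_proba(X))  # soft (probabilistic) assignments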
@raspberry_python
Algorithm Name
Hierarchical clustering
Description
Splits clusters along a hierarchical tree to form a classification system.
Can be used, for example, to cluster loyalty-card customers.
Type
Clustering
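A minimal scikit-learn sketch using agglomerative (bottom-up hierarchical) clustering on made-up points:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# merges points bottom-up along a hierarchical tree into clusters
X = np.array([[1, 1], [1.5, 2], [1, 0], [8, 8], [9, 9], [8, 9]])
hc = AgglomerativeClustering(n_clusters=2).fit(X)
print(hc.labels_)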
@raspberry_python
Algorithm Name
Recommender system
Description
Helps to identify the data relevant for making a recommendation.
Type
Clustering
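A minimal sketch of one common ingredient of recommender systems, user-to-user similarity; the rating matrix here is entirely hypothetical:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical user-item rating matrix (rows: users, columns: items)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [1, 0, 5, 4],
])

# users with similar rating patterns are considered relevant to each other
sim = cosine_similarity(ratings)
print(sim[0])  # similarity of user 0 to every user; user 1 is the closest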
@raspberry_python
Algorithm Name
PCA/T-SNE
Description
Mostly used to decrease the dimensionality of the data. The algorithms reduce the number of features to a small set of components (e.g., 3 or 4) that capture the highest variance.
Type
Dimension Reduction
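A minimal scikit-learn sketch using PCA on the iris dataset (assumed example):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# project the 4 iris features onto the 2 directions of highest variance
X, _ = load_iris(return_X_y=True)
X2 = PCA(n_components=2).fit_transform(X)
print(X2.shape)  # (150, 2)
# TSNE from sklearn.manifold is used the same way, mainly for visualisation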
@raspberry_python