SETOTAW Consultancy Service (Research (research papers, theses) and Engineering Projects consultancy)
#Research-paper (thesis) consulting (Bio & Eco-stat: SPSS, STATA, R, EVIEWS, Python, Arena, MATLAB... GIS)
#For students
#For organizations
#For government
#Business plan - feasibility study
፨Project work
#Construction work and consulting
Software supply
Film and theatre script preparation
@Licensed - holds a legal #professional permit!
Call #0970461746
Propensity Score Matching (PSM) Basics: Procedure from Data Preparation to Analysis
👓👓👓👓👓👓👓👓👓👓👓👓
Propensity score matching (PSM) is a statistical technique used to estimate the causal effect of a treatment or intervention by matching treated and untreated individuals based on their propensity to receive the treatment.
Here's a basic #procedure to guide you through the process of PSM:

1. Data Preparation:
- Gather data that includes information on the treatment assignment (e.g., who received training), relevant covariates, and the outcome of interest.
- Clean the data by removing duplicate or erroneous data points, handling missing values appropriately (e.g., imputation), and dealing with outliers.
🚀🚀🚀🚀🚀🚀🚀🚀
2. Propensity Score Estimation:
- Estimate the propensity score for each individual using a logistic regression model. The propensity score represents the probability of receiving the treatment conditional on the observed covariates.
(Using STATA for this is a good choice!!)
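As a rough illustration of step 2, here is a minimal sketch of propensity score estimation using a hand-rolled logistic regression in plain NumPy on synthetic data (in practice you would use STATA, R, or a Python statistics package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two covariates; treatment probability depends on them.
n = 500
X = rng.normal(size=(n, 2))
true_logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
treated = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Logistic regression by gradient ascent: P(T=1 | X) = sigmoid(b0 + X b).
Xd = np.column_stack([np.ones(n), X])      # add intercept column
beta = np.zeros(Xd.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xd @ beta))
    grad = Xd.T @ (treated - p) / n        # gradient of the log-likelihood
    beta += 0.5 * grad

# Estimated propensity score for each individual, strictly in (0, 1).
pscore = 1 / (1 + np.exp(-Xd @ beta))
print(pscore[:5])
```

Each value in `pscore` is that individual's estimated probability of receiving the treatment given the observed covariates.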
3. Matching:
- Match treated and untreated individuals based on their propensity scores using matching algorithms such as nearest neighbor matching, caliper matching, or kernel matching.
- Ensure that the matching algorithm preserves the balance of observed covariates between the treated and untreated groups.
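Step 3 can be sketched as nearest-neighbor matching with a caliper; the scores below and the 0.05 caliper are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy propensity scores for 6 treated and 20 control units.
ps_treated = rng.uniform(0.3, 0.7, size=6)
ps_control = rng.uniform(0.1, 0.9, size=20)

caliper = 0.05   # maximum allowed propensity-score distance
matches = {}     # treated index -> matched control index (or None)
for i, ps in enumerate(ps_treated):
    dist = np.abs(ps_control - ps)
    j = int(np.argmin(dist))               # nearest neighbor (with replacement)
    matches[i] = j if dist[j] <= caliper else None

print(matches)
```

Treated units with no control inside the caliper stay unmatched (`None`) and are dropped from the analysis, which trades sample size for better comparability.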

4. Covariate Balance Assessment:
- Assess the balance of observed covariates between the matched treated and untreated groups using standardized differences or t-tests.
- If the covariate balance is not satisfactory, consider using additional matching techniques or refining the propensity score model.
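The standardized difference mentioned in step 4 can be computed per covariate as follows; the age values here are synthetic:

```python
import numpy as np

def smd(x_t, x_c):
    """Standardized mean difference of one covariate between two groups."""
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    return (x_t.mean() - x_c.mean()) / pooled_sd

rng = np.random.default_rng(2)
age_treated = rng.normal(40, 8, size=100)
age_control = rng.normal(42, 8, size=100)

d = smd(age_treated, age_control)
print(round(d, 3))
# A common rule of thumb: |SMD| < 0.1 indicates acceptable balance.
```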

5. Outcome Analysis:
- Compare the outcomes between the matched treated and untreated groups using appropriate statistical methods, such as t-tests, regression analysis, or difference-in-differences estimation.
- Control for potential confounding variables in the outcome analysis to support a causal interpretation of the estimated treatment effect.
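For step 5, the simplest outcome comparison on matched pairs is the average within-pair difference, i.e., the average treatment effect on the treated (ATT); the outcome numbers below are invented for illustration:

```python
import numpy as np

# Outcomes for matched pairs: each treated unit paired with one control.
y_treated = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
y_matched_control = np.array([10.0, 13.0, 11.5, 12.0, 12.5])

# ATT estimate: average within-pair outcome difference.
att = np.mean(y_treated - y_matched_control)
print(att)  # -> 1.2
```

In a real analysis you would also report a standard error (e.g., from a paired t-test or bootstrap), not just the point estimate.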

6. Sensitivity Analysis:
- Conduct sensitivity analyses to assess the robustness of the PSM results to different matching algorithms, caliper widths, or propensity score models.
- Evaluate the potential bias due to unobserved confounding variables using methods like the Rosenbaum bounds or the Imbens-Rubin sensitivity analysis.

7. Interpretation and Reporting:
- Interpret the estimated treatment effect and its statistical significance.
- Communicate the findings clearly and concisely, highlighting the implications of the PSM analysis for policy or decision-making.

Remember: PSM is a powerful technique for estimating causal effects, but it relies on several assumptions, such as ignorability of the treatment assignment (for example, between employees who received training and those who did not) conditional on the observed covariates. It's essential to carefully consider the appropriateness of PSM for the specific research question and context.
The #multinomial endogenous switching regression (MESR)
🧶🧶🧶🧶🧶🧶🧶🧶🧶🧶🧶
The MESR model is a statistical model used to analyze how an outcome depends on a multinomial (multi-category) choice or treatment when that choice is potentially endogenous. Endogeneity occurs when the explanatory variables are correlated with the error term, which can lead to biased and inconsistent estimates.

The MESR model addresses this issue by explicitly modeling the endogeneity of the explanatory variables. This is done by including a set of instrumental variables in the model, which are variables that are correlated with the explanatory variables but not with the error term.

The MESR model is estimated using a two-step procedure. In the first step, a multinomial selection equation for the choice variable is estimated. In the second step, the outcome equation for each regime is estimated, including selection-correction terms constructed from the first-step estimates.
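Under one common parametrization (a multinomial-logit first step and a control-function second step), the two steps can be written as follows; this is an illustrative sketch, not the only formulation of MESR:

```latex
% Step 1: multinomial logit for regime choice j = 1, ..., J
P_{ij} = \Pr(I_i = j \mid Z_i)
       = \frac{\exp(Z_i \gamma_j)}{\sum_{k=1}^{J} \exp(Z_i \gamma_k)}

% Step 2: outcome equation for regime j, estimated on the subsample
% choosing j, with a selection-correction term \hat{\lambda}_{ij}
% constructed from the first-step probabilities
y_{ij} = X_i \beta_j + \sigma_j \hat{\lambda}_{ij} + \varepsilon_{ij}
```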

The MESR model can be used to analyze a wide variety of problems, including:

* The effect of education on earnings
* The effect of job training on wages
* The effect of health insurance on health care utilization

The MESR model is a powerful tool for analyzing the relationship between a categorical choice variable and an outcome when there is a potential for endogeneity. However, it is important to note that the model is only valid if the instrumental variables are truly exogenous.
Sunil_IFPRI_23Mar21_IV_ESR_PDFFormat.pdf
The #MESR model
#Machine Learning for Big Data Analysis

፨፨፨፨፨፨፨፨፨፨፨፨፨፨፨፨፨፨፨፨
Machine learning is indeed a powerful tool for analyzing large datasets and making predictions. When dealing with large amounts of data, traditional manual analysis can be time-consuming and impractical. Machine learning algorithms, on the other hand, can process and analyze vast amounts of data more efficiently.

Here's a general workflow for using machine learning for large data analysis and prediction:

1. Data Collection: Gather the relevant data from various sources. This can include structured data (e.g., databases, spreadsheets) or unstructured data (e.g., text documents, images).

2. Data Preprocessing: Clean the data and prepare it for analysis. This step may involve tasks such as removing duplicates, handling missing values, normalizing numerical data, and encoding categorical variables.

3. Feature Engineering: Extract meaningful features from the data that can be used to train machine learning models. This might involve techniques such as dimensionality reduction, transforming variables, or creating new features based on domain knowledge.

4. Model Selection: Choose an appropriate machine learning model based on the nature of the problem you're trying to solve, the type of data you have, and the available computational resources. Popular models for large-scale data analysis include random forests, gradient boosting machines, deep learning neural networks, and support vector machines.

5. Model Training: Split your dataset into a training set and a validation set. Use the training set to train the machine learning model by adjusting its parameters to minimize the prediction error. The validation set is used to evaluate the model's performance and fine-tune hyperparameters.

6. Model Evaluation: Assess the performance of the trained model using appropriate evaluation metrics. Common metrics include accuracy, precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC-ROC).

7. Model Deployment and Prediction: Once you're satisfied with the model's performance, deploy it to make predictions on new, unseen data. This can involve integrating the model into a larger software system or creating an API for real-time predictions.

8. Monitoring and Updating: Continuously monitor the performance of the deployed model and collect feedback from users. Over time, retrain and update the model to incorporate new data and improve its predictions.
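The split / train / evaluate core of steps 5 and 6 can be sketched in plain NumPy with a hand-rolled logistic-regression classifier on synthetic data (in practice a library such as scikit-learn or a deep learning framework would be used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2 (collect and preprocess): synthetic standardized features,
# binary label driven by the first two features plus noise.
n = 1000
X = rng.normal(size=(n, 3))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Step 5: split into training and validation sets (80/20).
split = int(0.8 * n)
X_tr, X_va = X[:split], X[split:]
y_tr, y_va = y[:split], y[split:]

# Train a logistic-regression classifier by gradient ascent.
Xb = np.column_stack([np.ones(split), X_tr])
w = np.zeros(4)
for _ in range(3000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w += 0.5 * Xb.T @ (y_tr - p) / split

# Step 6: evaluate accuracy on the held-out validation set.
Xv = np.column_stack([np.ones(n - split), X_va])
pred = (1 / (1 + np.exp(-Xv @ w)) > 0.5).astype(float)
accuracy = (pred == y_va).mean()
print(round(accuracy, 2))
```

Evaluating on data the model never saw during training is what makes the accuracy number an honest estimate of performance on new data.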

It's important to note that large-scale data analysis requires careful consideration of computational resources, such as memory and processing power. Distributed computing frameworks like #Apache Hadoop and Apache Spark are often used to handle big data processing and scale machine learning algorithms to large datasets.

Additionally, #data privacy and security considerations should be taken into account when working with large datasets. Ensuring compliance with relevant data protection regulations and implementing appropriate security measures is crucial.

Overall, machine learning can be a valuable tool for analyzing and #predicting outcomes from large datasets, but it requires expertise in data preprocessing, model selection, and evaluation to achieve accurate and meaningful results.
፨፨፨፨፨፨፨፨፨፨፨
Comment from an advisor (Dr.): Where is your #abstract? The abstract must be a fully italicized single paragraph, single-spaced, and cover the following, sequenced in order:
1. General objective of the study,
2. Methodology employed by the study (such as research design, population, sampling, sample size, data gathering tools, and methods of data analyses),
3. Key findings,
4. Conclusions, and
5. A highlight of recommendations
with 5 to 7 keywords on a separate line at the bottom of the abstract.
https://t.me/mamaker/1984
RESEARCH OUTCOMES ON ROAD SAFETY (results of studies concerning road safety)
👉Road safety is among the foremost agendas receiving high attention around the world.

▶️In connection with this sector, hundreds of companies carry out wide-ranging work to make roads reliably safe.

🏷Through this they have managed to make travellers' journeys safer.

⭐️Through their research work they have tried to prevent the loss of human #life and the destruction of property, and they have found success in it.

💫In Australia, Tarmac Linemarking (2024), together with partner institutions, conducted a study on factors contributing to road safety and, as part of the solution, produced #road-marking paint and put it into use.

🚧The new paint stores the light it collects from the sun during the day and glows back at night as a #greenish reflective marking. It is said to be especially convenient for making night-time journeys safe.

🚧Since drivers often feel fatigued at night, the new road paint promotes alertness; its benefit is greatest at road curves and on difficult, winding roads.

Taken from @etconp2024