Research Methods in AL
💡How to identify the best-fit research gap
https://youtu.be/hlL6LCMuHSM?si=egFLVUJIE_i1mJo0


💡Good and Bad Research Questions
Dear All,

I just thought I would let you know that this afternoon at 5pm (Nicosia/Istanbul time) I will be doing a webinar on Using Literature to Learn Language for New York University/Abu Dhabi. If you would like to join, the link is below:

https://nyu.zoom.us/j/96869390890
Meeting ID: 968 6939 0890

Best wishes, Carol

--
______
(Prof. Dr) Carol Griffiths
💡Coding Techniques with Examples 1
💡Coding Techniques with Examples 2
Chi-Square Tests😊🛑

- Purpose: Tests whether there is a significant association between two categorical variables.

- Key Statistics:

- Pearson Chi-Square: The most common chi-square test for independence. Assumes expected frequencies are ≥5 in most cells.

- Likelihood Ratio: An alternative to Pearson’s chi-square, useful for small samples or sparse tables.

- Linear-by-Linear Association: Tests for a linear trend between ordinal variables.

- Note: If >20% of cells have expected counts <5, consider Fisher’s Exact Test (better for small samples).
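The test of independence and the small-expected-counts fallback above can be sketched in Python with SciPy. This is a minimal illustration with made-up data; the table and variable names are assumptions, not output from any real study:

```python
# Minimal sketch: chi-square test of independence on a 2x2 table,
# falling back to Fisher's exact test when expected counts are small.
# The contingency table below is hypothetical, for illustration only.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical table: group membership (rows) x pass/fail (columns)
table = np.array([[30, 10],
                  [20, 20]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"Pearson chi-square = {chi2:.3f}, p = {p:.4f}, df = {dof}")

# Rule of thumb from the note above: if >20% of cells have expected
# counts < 5, prefer Fisher's exact test (2x2 tables only in SciPy).
if (expected < 5).mean() > 0.20:
    odds_ratio, p_fisher = fisher_exact(table)
    print(f"Fisher's exact test: p = {p_fisher:.4f}")
```

Note that `chi2_contingency` applies Yates' continuity correction by default for 2×2 tables; pass `correction=False` to match SPSS's uncorrected Pearson row.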

This is a quick look; I will go into further technical details in my future posts😊😊
💡Five Research Gaps
I continue with Chi-Square and related technical details found in different parts of SPSS's menus.
======================
2. Correlations (Ordinal/Numeric Variables)

- Purpose: Measures the strength and direction of association between ordinal or continuous variables.

- Key Statistics:
- Pearson’s r: For linear relationships between continuous variables.

- Spearman’s rho: For monotonic relationships between ordinal variables.
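As a quick sketch, both coefficients can be computed in Python with SciPy. The data below are invented for illustration; the variable names are assumptions:

```python
# Minimal sketch: Pearson's r vs. Spearman's rho on illustrative data.
# Pearson captures linear association; Spearman is rank-based (monotonic).
from scipy.stats import pearsonr, spearmanr

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]          # continuous predictor
exam_score    = [52, 55, 61, 60, 70, 74, 80, 85]  # continuous outcome

r, p_r = pearsonr(hours_studied, exam_score)       # linear association
rho, p_rho = spearmanr(hours_studied, exam_score)  # monotonic association

print(f"Pearson r = {r:.3f} (p = {p_r:.4f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.4f})")
```

Because one pair of scores (60, 61) breaks the monotonic ordering, rho dips just below 1 here while r stays high; with ordinal questionnaire data, Spearman is usually the safer choice.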
3. Nominal Data Measures (For Categorical Variables)

- Purpose: Assesses association between nominal (unordered) categories.

- Key Statistics:
- Contingency Coefficient: Adjusts chi-square for table size (range: 0 to <1).
- Phi & Cramer’s V:
- Phi: For 2×2 tables (range: 0–1).
- Cramer’s V: For larger tables (range: 0–1; values >0.3 often indicate meaningful association).
- Lambda: Predicts one variable based on another (asymmetric/symmetric versions).
- Uncertainty Coefficient: Measures reduction in uncertainty when predicting one variable using another.
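Cramér's V in particular is easy to compute by hand from the chi-square statistic, via V = √(χ²/(n·(min(rows, cols) − 1))). Here is a minimal Python sketch with an invented 3×2 table:

```python
# Minimal sketch: Cramer's V computed from the chi-square statistic.
# V = sqrt(chi2 / (n * (min(rows, cols) - 1))); data are illustrative.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3x2 table: first language (rows) x strategy choice (cols)
table = np.array([[20, 10],
                  [15, 25],
                  [ 5, 25]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
min_dim = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * min_dim))

print(f"chi-square = {chi2:.3f}, Cramer's V = {cramers_v:.3f}")
# Values > 0.3 are often read as a meaningful association.
```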
Qual vs. Quan
📚 PREET Center presents:

🔴 A Practical Course on Quantitative Research Designs with SPSS (Module 1)

🔹 Instructor: Dr. Hessameddin Ghanbar
🔹 Duration: Six weeks (six sessions)
🔹 Schedule: Tuesdays: 16:30-18:00
🔹 The workshops will be held on the Google Meet platform.

📣  Registration has started. (Limited capacity)

▪️ For more information and registration:
Telegram: @preetadmin
WhatsApp & SMS: +989359289774

▪️For each participant in the workshop, a digital certificate signed by Dr. Minoo Alemi, Dr. Zia Tajeddin, and Dr. Hessameddin Ghanbar will be issued.

▪️Join our Telegram Channel: @preetcenter
Module 1
🔵 Part I: Data Structure Issues and Design Technicalities
Objective: Equip students with foundational knowledge to prepare and structure data for valid and reliable mean-difference analysis in SPSS.

1. Introduction to Mean-Difference Techniques
▪️Overview of t-tests and ANOVA
▪️When and why to use mean-comparison methods

2. Research Design Essentials
▪️Independent vs. repeated measures designs
▪️Between-group vs. within-subject variables
▪️Randomization, control, and matching

3. Variables and Measurement
▪️Levels of measurement (nominal, ordinal, interval, ratio)
▪️Dependent and independent variable specification

4. Data Structure in SPSS
▪️Setting up the variable view and data view
▪️Labeling variables and values correctly
▪️Coding group membership (dummy coding, grouping variables)

5. Data Cleaning and Assumption Checking
▪️Handling missing data
▪️Testing for normality and outliers
▪️Homogeneity of variance and sphericity

Part II: Application of Mean-Difference Techniques in SPSS
Objective: Develop hands-on skills to conduct, interpret, and report t-tests and ANOVAs using SPSS.

6. Independent Samples t-test
▪️Use cases, assumptions
▪️Running the test in SPSS
▪️Interpreting SPSS output

7. Paired Samples t-test
▪️Use with repeated measures
▪️Applications and SPSS procedure
▪️Reporting effect sizes

8. One-Way ANOVA
▪️Assumptions and post-hoc tests
▪️Conducting and interpreting ANOVA in SPSS

9. Repeated Measures ANOVA
▪️Structure and assumptions
▪️Sphericity and corrections (Greenhouse-Geisser etc.)

10. Factorial ANOVA (Two-way ANOVA)
▪️Interaction effects
▪️Main vs. interaction effects interpretation
▪️Visualizing interaction plots

11. Reporting Results
▪️APA-style reporting of SPSS outputs
▪️Tables, figures, and interpretation

12. Common Mistakes
🛑How can I boost the KMO value in an EFA of questionnaires? I will give you eight strategies here:

Variables = Items

1. Remove Variables with Low Individual KMO Values

Use the Anti-Image Correlation Matrix to identify variables with individual KMO values below 0.5.

2. Examine the Correlation Matrix

KMO depends on partial correlations being low and zero-order correlations being high. If many correlations are low (e.g., below 0.3), the variables may not be suitable for EFA, so remove weakly correlated or uncorrelated variables.

3. Combine or Collapse Similar Variables

If two or more items measure the same thing and are highly correlated (r > 0.9), consider combining them or removing redundancies. This can reduce multicollinearity, which negatively affects KMO.

4. Increase Sample Size

A larger sample size improves the stability of correlation estimates, which may help raise KMO. A general rule: at least 5–10 participants per variable, but more is better. (I see that many researchers do not pay attention to this simple rule😊)

5. Conduct Item Analysis or Reliability Tests

Use Cronbach's alpha or the Corrected Item-Total Correlation to detect poorly performing items. Removing items with low item-total correlations may improve KMO.

6. Use Principal Axis Factoring Instead of Principal Component Analysis (but I always go for PAF😊)

While both are used in EFA, Principal Axis Factoring (PAF) works better with lower communalities and might provide better estimates for factorability.

7. Consider Transforming Variables

If items are highly skewed or non-normally distributed, transformation (e.g., log or square root) might reduce noise and improve correlations, potentially improving KMO.

8. Theoretically Reconsider Item Set

Review whether all items conceptually belong together and reflect potentially common underlying factors.

Remove or revise items that are theoretically inconsistent.
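As a rough companion to strategy 1, here is a minimal NumPy sketch of how per-item and overall KMO are computed from the zero-order correlations and the partial (anti-image) correlations. The simulated data are purely illustrative; in practice SPSS reports these values directly:

```python
# Minimal sketch (pure NumPy): per-item and overall KMO from a correlation
# matrix, via the partial correlations in the anti-image matrix.
# The data are simulated for illustration, not from a real questionnaire.
import numpy as np

rng = np.random.default_rng(42)
# Simulate 200 respondents x 6 items sharing one underlying factor,
# so the items intercorrelate and KMO should come out reasonably high.
factor = rng.normal(size=(200, 1))
items = factor + 0.7 * rng.normal(size=(200, 6))

R = np.corrcoef(items, rowvar=False)     # zero-order correlation matrix
R_inv = np.linalg.inv(R)
d = np.sqrt(np.diag(R_inv))
P = -R_inv / np.outer(d, d)              # partial correlations
np.fill_diagonal(P, 0)                   # exclude diagonals from the sums
np.fill_diagonal(R, 0)

r2, p2 = R**2, P**2
kmo_per_item = r2.sum(axis=0) / (r2.sum(axis=0) + p2.sum(axis=0))
kmo_overall = r2.sum() / (r2.sum() + p2.sum())

print("Per-item KMO:", np.round(kmo_per_item, 3))
print("Overall KMO:", round(kmo_overall, 3))
# Items with per-item KMO < 0.5 are candidates for removal (strategy 1).
```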

🛑I say that from the word go, you need to determine whether your factors are formative or reflective, as many researchers easily overlook this super important distinction 😊
Here I explain the different types of contrasts in repeated measures ANOVA. These are available in SPSS and R.


1. Polynomial Contrasts

Purpose: Test for linear, quadratic, cubic, etc., trends in the means across the repeated measures.

Use Case: When the levels of the within-subject factor are ordered (e.g., time points or doses).

Example: If you measure performance at 3 time points (T1, T2, T3), a linear contrast tests if the mean increases or decreases steadily, while a quadratic contrast tests for a curve (e.g., increase then decrease).
×××××××××××××××××××××××××
2. Deviation Contrasts

Purpose: Compare each level of the within-subject factor to the overall mean of all levels.

Use Case: When you want to know whether a particular condition differs significantly from the average of all conditions.

Example: Compare each teaching method (A, B, C) to the average effect across all three methods.
×××××××××××××××××××××××××
3. Simple Contrasts

Purpose: Compare each level of the within-subject factor to a reference level (usually the first or last).

Use Case: When you have a control or baseline condition and want to compare each other condition to it.

Example: Compare scores at Week 2 and Week 3 to Week 1 (baseline).
××××××××××××××××××××××××××
4. Repeated Contrasts

Purpose: Compare each level of the within-subject factor to the previous level.

Use Case: To test sequential changes (e.g., from one time point to the next).

Example: Compare performance at Week 2 to Week 1, then Week 3 to Week 2.
××××××××××××××××××××××××××
5. Helmert Contrasts

Purpose: Compare each level of the factor to the mean of subsequent levels.

Use Case: To examine whether early levels differ from the average of what comes next.

Example: Compare Week 1 to the average of Week 2 and Week 3; then compare Week 2 to Week 3.
×××××××××××××××××××××××××
6. Difference Contrasts

Purpose: Compare each level to the mean of preceding levels.

Use Case: The reverse of Helmert; used when later levels are to be compared to previous ones.

Example: Compare Week 3 to the average of Week 1 and Week 2.
××××××××××××××××××××××××××
Your choice of contrast should be based on your research questions and your research goal, as I summarise below:

For trends across time → Polynomial or Repeated

To compare to a control → Simple

To compare sequentially → Repeated

To compare each to average → Deviation

To test theoretical contrasts → Use Custom Contrasts (manually specify contrast weights)
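For readers who want to see the arithmetic, the contrast definitions above can be sketched by applying weight matrices to a vector of condition means. The means below are invented, and sign conventions may differ from SPSS output; this only illustrates what the weights do:

```python
# Minimal sketch (NumPy): simple, repeated, and Helmert contrast weights
# applied to three repeated-measures means (Week 1, 2, 3; made-up values).
import numpy as np

means = np.array([10.0, 14.0, 15.0])   # hypothetical Week 1, 2, 3 means

contrasts = {
    # Simple: each level vs. the first (baseline) level
    "simple":   np.array([[-1.0, 1.0, 0.0],
                          [-1.0, 0.0, 1.0]]),
    # Repeated: each level vs. the previous level
    "repeated": np.array([[-1.0,  1.0, 0.0],
                          [ 0.0, -1.0, 1.0]]),
    # Helmert: each level vs. the mean of the subsequent levels
    "helmert":  np.array([[1.0, -0.5, -0.5],
                          [0.0,  1.0, -1.0]]),
}

for name, C in contrasts.items():
    print(name, C @ means)   # each row yields one contrast estimate
```

For example, the first Helmert row compares Week 1 (10) to the average of Weeks 2 and 3 (14.5), giving an estimate of −4.5.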