Machine Learning And AI
Hi all, and welcome! Join our channel for jobs, the latest programming blogs, and machine learning blogs.
For any doubts regarding ML/Data Science, please reach out to me @ved1104 and subscribe to my channel:
https://youtube.com/@geekycodesin?si=JzJo3WS5E_VFmD1k
Amazon Data Science Interview Question:
In a linear regression model, what are the key assumptions that need to be satisfied for the model to be valid? How would you evaluate whether these assumptions hold in your dataset?

This is also the most common question I see across companies!

So the assumptions are:

𝗟𝗶𝗻𝗲𝗮𝗿𝗶𝘁𝘆
The relationship between the independent variables (predictors) and the dependent variable is linear. This means that the effect of each predictor on the outcome is constant and additive.
How to evaluate? - Scatter plots of each predictor vs. the dependent variable, and a residuals vs. fitted values plot (see the sketch below); curvature or any systematic pattern indicates non-linearity.
How to fix? - Apply feature transformations (e.g., log, square root, polynomial) or use non-linear models.
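
A minimal sketch of the residuals-vs-fitted check in Python (synthetic stand-in data; swap in your own predictors and target):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data; replace with your own X and y.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = LinearRegression().fit(X, y)
fitted = model.predict(X)
residuals = y - fitted

# A patternless cloud around zero supports linearity; curvature or a
# systematic trend suggests a transformation or a non-linear model.
plt.scatter(fitted, residuals, alpha=0.5)
plt.axhline(0, color="red", linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()
```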

𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘁𝘆 𝗼𝗳 𝗘𝗿𝗿𝗼𝗿𝘀
The residuals are normally distributed. This matters chiefly for conducting statistical tests and constructing confidence intervals.
How to evaluate - A Q-Q plot of the residuals (points should hug the 45-degree line), a histogram of residuals, or a formal test such as Shapiro-Wilk (see the sketch below).
How to fix - Transform the dependent variable (log, Box-Cox) and/or check for outliers.
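
A minimal sketch of both checks, reusing the `residuals` array computed in the linearity sketch above:

```python
import scipy.stats as stats
import matplotlib.pyplot as plt

# Q-Q plot: points hugging the 45-degree line indicate approximate normality.
stats.probplot(residuals, dist="norm", plot=plt)
plt.show()

# Shapiro-Wilk test: a p-value below 0.05 suggests non-normal residuals.
stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk statistic={stat:.3f}, p-value={p_value:.3f}")
```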

𝗛𝗼𝗺𝗼𝘀𝗰𝗲𝗱𝗮𝘀𝘁𝗶𝗰𝗶𝘁𝘆 (𝗖𝗼𝗻𝘀𝘁𝗮𝗻𝘁 𝗩𝗮𝗿𝗶𝗮𝗻𝗰𝗲 𝗼𝗳 𝗘𝗿𝗿𝗼𝗿𝘀)
The variance of the residuals (errors) is constant across all levels of the independent variables. In other words, the spread of residuals should not increase or decrease as the predicted values increase.
How to evaluate - Plot the residuals against fitted values; a "fan" shape (increasing or decreasing spread) signals heteroscedasticity. The Breusch-Pagan test gives a formal check (see the sketch below).
How to fix - Transform the dependent variable (log, Box-Cox), use weighted least squares regression, or report robust standard errors.
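
A minimal sketch using statsmodels' Breusch-Pagan test, again reusing `X` and `residuals` from the linearity sketch:

```python
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# The test regresses squared residuals on the design matrix (with intercept).
exog = sm.add_constant(X)
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(residuals, exog)

# A p-value below 0.05 indicates heteroscedasticity, pointing toward a
# transformation, weighted least squares, or robust standard errors.
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.3f}")
```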

𝗡𝗼 𝗠𝘂𝗹𝘁𝗶𝗰𝗼𝗹𝗹𝗶𝗻𝗲𝗮𝗿𝗶𝘁𝘆
The independent variables (predictors) are not highly correlated with each other. High correlation between predictors can lead to multicollinearity, which makes it difficult to determine the individual effect of each predictor on the dependent variable.
How to evaluate - Calculate the Variance Inflation Factor (VIF) for each predictor (see the sketch below); a VIF above roughly 5-10 is commonly treated as problematic.
How to fix - Remove or combine correlated predictors (e.g., via Principal Component Analysis), or use regularized regression models like Ridge or Lasso.
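
A minimal VIF sketch with statsmodels, on synthetic predictors deliberately built to be collinear:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic predictors where x2 is nearly a copy of x1, so collinearity is severe.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)  # almost collinear with x1
x3 = rng.normal(size=200)                   # independent predictor
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif)  # expect very large VIFs for x1 and x2, and ~1 for x3
```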
𝗗𝗮𝘁𝗮 𝗦𝗰𝗶𝗲𝗻𝗰𝗲 𝗜𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻:
How do outliers impact kNN?

Outliers can significantly impact the performance of kNN, leading to inaccurate predictions due to the model's reliance on proximity for decision-making. Here’s a breakdown of how outliers influence kNN:

𝗛𝗶𝗴𝗵 𝗩𝗮𝗿𝗶𝗮𝗻𝗰𝗲
The presence of outliers can increase the model's variance, as predictions near outliers may fluctuate unpredictably depending on which neighbors are included. This makes the model less reliable for regression tasks with scattered or sparse data.

𝗗𝗶𝘀𝘁𝗮𝗻𝗰𝗲 𝗠𝗲𝘁𝗿𝗶𝗰 𝗦𝗲𝗻𝘀𝗶𝘁𝗶𝘃𝗶𝘁𝘆
kNN relies on distance metrics, which can be significantly affected by outliers. In high-dimensional spaces, outliers can increase the range of distances, making it harder for the algorithm to distinguish between nearby points and those farther away. This issue can lead to an overall reduction in accuracy as the model’s ability to effectively measure "closeness" degrades.

𝗥𝗲𝗱𝘂𝗰𝗲𝗱 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 𝗶𝗻 𝗖𝗹𝗮𝘀𝘀𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻/𝗥𝗲𝗴𝗿𝗲𝘀𝘀𝗶𝗼𝗻 𝗧𝗮𝘀𝗸𝘀
Outliers near class boundaries can pull the decision boundary toward them, misclassifying nearby points that belong to a different class. This is particularly problematic when k is small, since individual points (like outliers) carry more weight, as the sketch below shows. The same effect appears in regression, where an outlier's extreme target value skews the neighbor average.
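
A toy sketch on hand-made synthetic points, showing a single mislabeled outlier flipping the prediction when k is small:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two tight clusters, plus one class-1 outlier planted inside class-0 territory.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],    # class 0 cluster
              [5.0, 5.0], [5.2, 4.9], [4.9, 5.1],    # class 1 cluster
              [0.26, 0.16]])                         # outlier labeled class 1
y = np.array([0, 0, 0, 1, 1, 1, 1])

query = np.array([[0.25, 0.15]])  # clearly sits in class-0 territory

for k in (1, 5):
    pred = KNeighborsClassifier(n_neighbors=k).fit(X, y).predict(query)
    print(f"k={k}: predicted class {pred[0]}")
# k=1 returns class 1 (the outlier is the single nearest neighbor);
# k=5 is dominated by the genuine class-0 cluster and returns class 0.
```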

𝗙𝗲𝗮𝘁𝘂𝗿𝗲 𝗜𝗻𝗳𝗹𝘂𝗲𝗻𝗰𝗲 𝗗𝗶𝘀𝗽𝗿𝗼𝗽𝗼𝗿𝘁𝗶𝗼𝗻
If certain features contain outliers, they can dominate the distance calculations and overshadow the other features. For example, an outlier in a high-magnitude feature may cause distances to be determined largely by that feature, degrading the quality of neighbor selection; feature scaling mitigates this (see the sketch below).
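
A minimal sketch (synthetic data) of how a large-scale feature dominates neighbor selection, and how standardization mitigates it:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

# Synthetic data: the second feature's scale is 1000x the first's.
rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(0, 1, 100), rng.normal(0, 1000, 100)])

# Unscaled: neighbor choice is driven almost entirely by the large feature.
_, idx_raw = NearestNeighbors(n_neighbors=5).fit(X).kneighbors(X[:1])

# Standardized: both features contribute comparably to the distances.
X_std = StandardScaler().fit_transform(X)
_, idx_std = NearestNeighbors(n_neighbors=5).fit(X_std).kneighbors(X_std[:1])

print("neighbors before scaling:", idx_raw[0])
print("neighbors after scaling: ", idx_std[0])  # typically a different set
```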
Company Name : Amazon
Role : Cloud Support Associate
Batch : 2024/2023 passouts

Link : https://www.amazon.jobs/en/jobs/2676989/cloud-support-associate

Company Name : Swiggy
Role : Associate Software Engineer
Batch : 2024/2023/2022 passouts

Link : https://docs.google.com/forms/d/1E029cjZV8Em6zPC0YJYAMDDP_NjPtDkwufqHfvkVG2E/viewform?edit_requested=true&pli=1