๐—–๐—ฆ ๐—”๐—น๐—ด๐—ผ ๐Ÿ’ป ๐ŸŒ ใ€Ž๐—–๐—ผ๐—บ๐—ฝ๐—ฒ๐˜๐—ถ๐˜๐—ถ๐˜ƒ๐—ฒ ๐—ฃ๐—ฟ๐—ผ๐—ด๐—ฟ๐—ฎ๐—บ๐—บ๐—ถ๐—ป๐—ดใ€
9.6K subscribers
๐ŸšฉMain Group - @SuperExams
๐Ÿ“Job Updates - @FresherEarth

๐Ÿ”ฐAuthentic Coding Solutions(with Outputs)
โš ๏ธDaily Job Updates
โš ๏ธHackathon Updates & Solutions

Buy ads: https://telega.io/c/cs_algo
Batch: 2023/2024/2025 passouts

Looking for interns with a good understanding of Docker, deployments, Kubernetes, the open-source ecosystem, data structures, and coding (Java and Go) to work on CVEs.
If you are passionate about cloud-native technology and want to work in an environment that offers excellent opportunities to learn from senior mentors, please share your resume with me at megha@nirmata.com.
๐Ÿ‘1
Date: 27/12/2023
Company name: Walmart
Role: ML Engineer
Topic: machine learning


1. Explain the difference between L1 and L2 regularization.

Answer: L2 regularization tends to spread the penalty across all the weights, shrinking them smoothly toward zero, while L1 encourages sparsity, driving many weights to exactly zero so that features are effectively kept or dropped. L1 corresponds to placing a Laplacian prior on the weights, while L2 corresponds to a Gaussian prior.
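A toy sketch of this difference, using pure-Python proximal gradient descent on linear regression (the data, hyperparameters, and function name are made up for illustration): L1's soft-thresholding step zeroes out the irrelevant weight exactly, while L2 only shrinks it.

```python
def fit(X, y, penalty, lam=0.5, lr=0.01, steps=2000):
    """Linear regression with an L1 or L2 penalty (illustrative sketch)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        # Gradient of the mean squared error term.
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += err * xi[j] / n
        for j in range(d):
            w[j] -= lr * grad[j]
            if penalty == "l2":
                w[j] *= 1 - lr * lam  # multiplicative shrinkage toward zero
            else:                     # l1: soft-thresholding (proximal step)
                t = lr * lam
                w[j] = max(w[j] - t, 0.0) if w[j] > 0 else min(w[j] + t, 0.0)
    return w

# y depends only on the first feature; the second feature is noise.
X = [[1, 0.1], [2, 0.2], [3, -0.1], [4, 0.3], [5, -0.2]]
y = [2.0, 4.0, 6.0, 8.0, 10.0]

w_l1 = fit(X, y, "l1")  # second weight ends up exactly 0.0
w_l2 = fit(X, y, "l2")  # second weight is small but nonzero
```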

2. What cross-validation technique would you use on a time series dataset?


Answer: Instead of using standard k-fold cross-validation, you have to account for the fact that a time series is not randomly distributed data: it is inherently ordered chronologically. Random folds would let the model train on future observations to predict past ones, so a pattern that only emerges in later periods could leak into evaluation even though it doesn't hold in earlier years.
You'll want to use something like forward chaining, where each fold trains on all past data and tests on the next period:
โ€ข Fold 1 : training [1], test [2]
โ€ข Fold 2 : training [1 2], test [3]
โ€ข Fold 3 : training [1 2 3], test [4]
โ€ข Fold 4 : training [1 2 3 4], test [5]
โ€ข Fold 5 : training [1 2 3 4 5], test [6]
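The fold scheme above can be sketched as a small generator (a hand-rolled helper, not a library API; scikit-learn's TimeSeriesSplit provides the same idea for real use):

```python
def forward_chaining_splits(n_blocks):
    """Yield (train_blocks, test_block): train on blocks 1..i, test on block i+1."""
    for i in range(1, n_blocks):
        yield list(range(1, i + 1)), [i + 1]

for train, test in forward_chaining_splits(6):
    print("training", train, "test", test)
```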

3. Whatโ€™s the โ€œkernel trickโ€ and how is it useful?

Answer: The kernel trick uses kernel functions that let an algorithm operate in a high-dimensional feature space without explicitly computing the coordinates of points in that space: instead, the kernel computes the inner products between the images of pairs of data points in the feature space. This gives the very useful ability to work with high-dimensional representations while remaining computationally cheaper than the explicit calculation of those coordinates.
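A minimal sketch of the idea with a degree-2 polynomial kernel k(x, y) = (x · y)² on 2-D inputs (function names are illustrative): the kernel gives the same inner product as the explicit feature map φ(x) = (x₁², √2·x₁x₂, x₂²) without ever constructing φ.

```python
import math

def poly2_kernel(x, y):
    # Inner product in the implicit feature space, computed without phi.
    return sum(a * b for a, b in zip(x, y)) ** 2

def phi(x):
    # Explicit degree-2 feature map for 2-D input.
    x1, x2 = x
    return [x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2]

x, y = [1.0, 2.0], [3.0, 4.0]
k_implicit = poly2_kernel(x, y)
k_explicit = sum(a * b for a, b in zip(phi(x), phi(y)))
# Both equal (1*3 + 2*4)^2 = 121, but the kernel never builds phi.
```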

4. Explain how a ROC curve works.

Answer: The ROC curve is a graphical representation of the trade-off between the true positive rate and the false positive rate at various classification thresholds. It's often used as a proxy for the balance between the sensitivity of the model (true positives) and the fall-out, or the probability that it will trigger a false alarm (false positives).
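A small sketch that traces ROC points from classifier scores and binary labels by sweeping the decision threshold from high to low (a hand-rolled helper on made-up data, not a library API):

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs as the threshold sweeps from high to low."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # strictest threshold: nothing predicted positive
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# A perfect ranker would hug the top-left corner; this one misranks one pair.
pts = roc_points([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0])
```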

โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”โ€”-


Stay Safe & Happy Learning๐Ÿ’™