Common Data Cleaning Techniques for Data Analysts

Remove Duplicates:

Purpose: Eliminate repeated rows to maintain unique data.

Example: SELECT DISTINCT column_name FROM table;


Handle Missing Values:

Purpose: Fill, remove, or impute missing data.

Example:

Remove: df.dropna() (in Python/Pandas)

Fill: df.fillna(0)


Standardize Data:

Purpose: Convert data to a consistent format (e.g., dates, numbers).

Example: Convert text to lowercase: df['column'] = df['column'].str.lower()


Remove Outliers:

Purpose: Identify and remove extreme values.

Example: df = df[df['column'] < threshold]


Correct Data Types:

Purpose: Ensure columns have the correct data type (e.g., dates as datetime, numeric values as integers).

Example: df['date'] = pd.to_datetime(df['date'])


Normalize Data:

Purpose: Scale numerical data to a standard range (0 to 1).

Example: from sklearn.preprocessing import MinMaxScaler; df['scaled'] = MinMaxScaler().fit_transform(df[['column']])


Data Transformation:

Purpose: Transform or aggregate data for better analysis (e.g., log transformations, aggregating columns).

Example: Apply log transformation: df['log_column'] = np.log(df['column'] + 1)


Handle Categorical Data:

Purpose: Convert categorical data into numerical data using encoding techniques.

Example: df = pd.get_dummies(df, columns=['category_column']) (one-hot encoding produces one column per category, so the result replaces the whole DataFrame)


Impute Missing Values:

Purpose: Fill missing values with a meaningful value (e.g., mean, median, or a specific value).

Example: df['column'] = df['column'].fillna(df['column'].mean())
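
Putting several of these together, a minimal runnable Pandas sketch (the DataFrame and column names here are invented for illustration):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'date': ['2024-01-01', '2024-01-01', None],
    'category': ['a', 'b', 'a'],
    'amount': [10.0, None, 1000.0],
})

df = df.drop_duplicates()                                 # remove repeated rows
df['date'] = pd.to_datetime(df['date'])                   # correct data types
df['amount'] = df['amount'].fillna(df['amount'].mean())   # impute with the mean
df = pd.get_dummies(df, columns=['category'])             # encode categorical data
df['log_amount'] = np.log1p(df['amount'])                 # log(1 + x) transform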

I have curated the best 80+ top-notch Data Analytics Resources 👇👇
https://whatsapp.com/channel/0029VaGgzAk72WTmQFERKh02

Like this post for more content like this 👍♥️

Share with credits: https://t.me/sqlspecialist

Hope it helps :)
Top 20 #SQL INTERVIEW QUESTIONS

1️⃣ Explain the order of execution of a SQL query
2️⃣ Provide a use case for each of the functions RANK, DENSE_RANK & ROW_NUMBER (💡 most candidates struggle with this)
3️⃣ Write a query to find the cumulative sum/Running Total
4️⃣ Find the best-selling product by sales / the highest salary of employees
5️⃣ Write a query to find the 2nd/nth highest Salary of employees
6️⃣ Difference between union vs union all
7️⃣ Identify if there are any duplicates in a table
8️⃣ Scenario based Joins question, understanding of Inner, Left and Outer Joins via simple yet tricky question
9️⃣ LAG: write a query to find all records where the transaction value is greater than the previous transaction value
1️⃣0️⃣ RANK vs DENSE_RANK, query to find the 2nd highest salary of an employee
(ideal solution should handle ties; see the sketch after this list)
1️⃣1️⃣ Write a query to find the running difference (ideal solution uses a window function)
1️⃣2️⃣ Write a query to display year-on-year/month-on-month growth
1️⃣3️⃣ Write a query to find the rolling average of daily sign-ups
1️⃣4️⃣ Write a query to find the running difference using a self join (helps in understanding the logical approach; ideally this question is solved via a window function)
1️⃣5️⃣ Write a query to find the cumulative sum using a self join
(you can also solve this with a window function)
1️⃣6️⃣ Differentiate between a clustered index and a non-clustered index.
1️⃣7️⃣ What is a Candidate key?
1️⃣8️⃣ What is the difference between a Primary key and a Unique key?
1️⃣9️⃣ What's the difference between RANK & DENSE_RANK in SQL?
2️⃣0️⃣ What's the difference between LAG & LEAD in SQL?
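
As a worked example for questions 5 and 10 (nth highest salary, handling ties), here is a runnable sketch using Python's built-in sqlite3 module (window functions need SQLite 3.25+; the table and data are invented):

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE employees (name TEXT, salary INTEGER)')
con.executemany('INSERT INTO employees VALUES (?, ?)',
                [('a', 100), ('b', 200), ('c', 200), ('d', 300)])

# DENSE_RANK keeps ties on the same rank, unlike ROW_NUMBER.
query = """
    SELECT name, salary
    FROM (SELECT name, salary,
                 DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
          FROM employees)
    WHERE rnk = 2
"""
print(con.execute(query).fetchall())  # both rows tied at the 2nd highest salary (200)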

Access SQL Learning Series for Free: https://t.me/sqlspecialist/523

Hope it helps :)
SQL Cheatsheet 📝

This SQL cheatsheet is designed to be your quick reference guide for SQL programming. Whether you’re a beginner learning how to query databases or an experienced developer looking for a handy resource, this cheatsheet covers essential SQL topics.

1. Database Basics
- CREATE DATABASE db_name;
- USE db_name;

2. Tables
- Create Table: CREATE TABLE table_name (col1 datatype, col2 datatype);
- Drop Table: DROP TABLE table_name;
- Alter Table: ALTER TABLE table_name ADD column_name datatype;

3. Insert Data
- INSERT INTO table_name (col1, col2) VALUES (val1, val2);

4. Select Queries
- Basic Select: SELECT * FROM table_name;
- Select Specific Columns: SELECT col1, col2 FROM table_name;
- Select with Condition: SELECT * FROM table_name WHERE condition;

5. Update Data
- UPDATE table_name SET col1 = value1 WHERE condition;

6. Delete Data
- DELETE FROM table_name WHERE condition;

7. Joins
- Inner Join: SELECT * FROM table1 INNER JOIN table2 ON table1.col = table2.col;
- Left Join: SELECT * FROM table1 LEFT JOIN table2 ON table1.col = table2.col;
- Right Join: SELECT * FROM table1 RIGHT JOIN table2 ON table1.col = table2.col;

8. Aggregations
- Count: SELECT COUNT(*) FROM table_name;
- Sum: SELECT SUM(col) FROM table_name;
- Group By: SELECT col, COUNT(*) FROM table_name GROUP BY col;

9. Sorting & Limiting
- Order By: SELECT * FROM table_name ORDER BY col ASC|DESC;
- Limit Results: SELECT * FROM table_name LIMIT n;

10. Indexes
- Create Index: CREATE INDEX idx_name ON table_name (col);
- Drop Index: DROP INDEX idx_name; (MySQL requires the table: DROP INDEX idx_name ON table_name;)

11. Subqueries
- SELECT * FROM table_name WHERE col IN (SELECT col FROM other_table);

12. Views
- Create View: CREATE VIEW view_name AS SELECT * FROM table_name;
- Drop View: DROP VIEW view_name;
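
A quick way to try several of these statements end to end is Python's built-in sqlite3 module; a minimal sketch (table and column names are invented):

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE users (id INTEGER, name TEXT, city TEXT)')      # 2. Tables
con.executemany('INSERT INTO users (id, name, city) VALUES (?, ?, ?)',    # 3. Insert Data
                [(1, 'Ana', 'Pune'), (2, 'Raj', 'Pune'), (3, 'Li', 'Goa')])
con.execute("UPDATE users SET city = 'Delhi' WHERE id = 3")               # 5. Update Data

# 8 & 9: aggregation with GROUP BY, then sorting and limiting
rows = con.execute('SELECT city, COUNT(*) FROM users '
                   'GROUP BY city ORDER BY city LIMIT 10').fetchall()
print(rows)  # [('Delhi', 1), ('Pune', 2)]
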
ML Engineer vs AI Engineer

ML Engineer / MLOps

- Focuses on the deployment of machine learning models.
- Bridges the gap between data scientists and production environments.
- Designs and implements machine learning models in production.
- Automates and orchestrates ML workflows and pipelines.
- Ensures reproducibility, scalability, and reliability of ML models.
- Programming: Python, R, Java
- Libraries: TensorFlow, PyTorch, Scikit-learn
- MLOps: MLflow, Kubeflow, Docker, Kubernetes, Git, Jenkins, CI/CD tools

AI Engineer / Developer

- Applying AI techniques to solve specific problems.
- Deep knowledge of AI algorithms and their applications.
- Developing and implementing AI models and systems.
- Building and integrating AI solutions into existing applications.
- Collaborating with cross-functional teams to understand requirements and deliver AI-powered solutions.
- Programming: Python, Java, C++
- Libraries: TensorFlow, PyTorch, Keras, OpenCV
- Frameworks: ONNX, Hugging Face
If you want to Excel as a Data Analyst and land a high-paying job, master these essential skills:

1️⃣ Data Extraction & Processing:
SQL – SELECT, JOIN, GROUP BY, CTE, WINDOW FUNCTIONS
Python/R for Data Analysis – Pandas, NumPy, Matplotlib, Seaborn
Excel – Pivot Tables, VLOOKUP, XLOOKUP, Power Query

2️⃣ Data Cleaning & Transformation:
Handling Missing Data – COALESCE() / IFNULL() in SQL, dropna() in Pandas
Data Normalization – Removing duplicates, standardizing formats
ETL Process – Extract, Transform, Load

3️⃣ Exploratory Data Analysis (EDA):
Descriptive Statistics – Mean, Median, Mode, Variance, Standard Deviation
Data Visualization – Bar Charts, Line Charts, Heatmaps, Histograms

4️⃣ Business Intelligence & Reporting:
Power BI & Tableau – Dashboards, DAX, Filters, Drill-through
Google Data Studio – Interactive reports

5️⃣ Data-Driven Decision Making:
A/B Testing – Hypothesis testing, p-values (see the sketch after this list)
Forecasting & Trend Analysis – Time Series Analysis
KPI & Metrics Analysis – ROI, Churn Rate, Customer Segmentation

6️⃣ Data Storytelling & Communication:
Presentation Skills – Explain insights to non-technical stakeholders
Dashboard Best Practices – Clean UI, relevant KPIs, interactive visuals

7️⃣ Bonus: Automation & AI Integration
SQL Query Optimization – Improve query performance
Python Scripting – Automate repetitive tasks
ChatGPT & AI Tools – Enhance productivity
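
As a taste of the A/B testing point above, a minimal sketch with a two-sample t-test (assumes scipy is installed; the conversion numbers are invented):

from scipy import stats

group_a = [0.12, 0.15, 0.11, 0.14, 0.13]  # daily conversion rates, variant A
group_b = [0.16, 0.18, 0.15, 0.17, 0.19]  # daily conversion rates, variant B

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(p_value)  # a small p-value (< 0.05) suggests the lift is unlikely to be chance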

Like this post if you need a complete tutorial on all these topics! 👍❤️

Share with credits: https://t.me/sqlspecialist

Hope it helps :)

#dataanalysts
Data Engineers – Don’t Just Learn Tools. Learn This:

So you’re learning:
– Spark
– Airflow
– dbt
– Kafka

But here’s a hard truth 👇
🧠 Tools change. Principles don’t.

Top 1% Data Engineers focus on:

🔸 Data modeling – Understand star vs snowflake, SCDs, normalization.
🔸 Data contracts – Build reliable pipelines, not spaghetti code.
🔸 System design – Think like a backend engineer. Learn how data flows.
🔸 Observability – Logging, metrics, lineage. Be the one who finds data bugs.

💥 Want to level up? Do this:
Build a mini data warehouse from scratch (on DuckDB + Airflow; see the sketch below)
Join open-source data eng projects
Read “The Data Engineering Cookbook” (free)
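
A tiny starting point for the DuckDB idea above (assumes the duckdb package is installed; the table names are invented). Airflow would then schedule steps like this as DAG tasks:

import duckdb

con = duckdb.connect('mini_warehouse.duckdb')  # a local file acts as the warehouse
con.execute('CREATE TABLE IF NOT EXISTS dim_product (product_id INTEGER, name TEXT)')
con.execute('CREATE TABLE IF NOT EXISTS fact_sales (product_id INTEGER, qty INTEGER, sold_on DATE)')

# Star-schema style query: the fact table joined to its dimension
print(con.execute("""
    SELECT p.name, SUM(f.qty) AS total_qty
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name
""").fetchall())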

📈 Don’t just run pipelines. Architect them.
If I were preparing for Data Engineering interviews in the upcoming months, here is how I would go about it:



1. Learn important SQL concepts
Go through all the key topics in SQL: joins, CTEs, window functions, GROUP BY, HAVING, etc.

2. Solve 50+ recently asked SQL queries
Practice queries from real interviews. Focus on tricky joins, aggregations, and filtering.

3. Solve 50+ Python coding questions
Focus on:

Lists, dictionaries and string problems; file handling; algorithms (sorting, searching, etc.)


4. Learn PySpark basics
Understand: RDDs, DataFrames, Datasets & Spark SQL


5. Practice 20 top PySpark coding tasks
Work on real coding examples using PySpark – data filtering, joins, aggregations, etc. (see the sketch at the end of this list)

6. Revise Data Warehousing concepts
Focus on:

Star and snowflake schema
Normalization and denormalization


7. Understand the data model used in your project
Know the structure of your tables and how they connect.


8. Practice explaining your project
Be ready to talk about: Architecture, Tools used, Pipeline flow & Business value


9. Review cloud services used in your project
For AWS, Azure, GCP:
Understand what services you used, why you used them, and how they work.

10. Understand your role in the project
Be clear on what you did technically: what problems you solved and how.

11. Prepare to explain the full data pipeline
From data ingestion to storage to processing - use examples.

12. Go through common Data Engineer interview questions
Practice answering questions about ETL, SQL, Python, Spark, cloud etc.

13. Read recent interview experiences
Check LinkedIn, GeeksforGeeks, and Medium for company-specific interview experiences.

14. Prepare for high-level system design questions.
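
For point 5, a minimal PySpark warm-up covering filtering, a join and an aggregation (assumes pyspark is installed; the data and column names are invented):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName('interview-prep').getOrCreate()

orders = spark.createDataFrame(
    [(1, 'a', 100), (2, 'b', 250), (3, 'a', 50)],
    ['order_id', 'customer_id', 'amount'])
customers = spark.createDataFrame(
    [('a', 'Ana'), ('b', 'Raj')], ['customer_id', 'name'])

(orders.filter(F.col('amount') > 60)      # filtering
       .join(customers, 'customer_id')    # join on a shared key
       .groupBy('name')                   # aggregation
       .agg(F.sum('amount').alias('total_amount'))
       .show())
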
[Image: Use of Machine Learning in Data Analytics]
ETL vs ELT – Explained Using Apple Juice analogy! 🍎🧃

We often hear about ETL and ELT in the data world — but how do they actually apply in tools like Excel and Power BI?

Let’s break it down with a simple and relatable analogy 👇

ETL (Extract → Transform → Load)

🧃 First you make the juice, then you deliver it

➡️ Apples → Juice → Truck

🔹 In Power BI / Excel:

You clean and transform the data in Power Query
Then load the final data into your report or sheet
💡 That’s ETL – transformation happens before loading



ELT (Extract → Load → Transform)

🍏 First you deliver the apples, and make juice later

➡️ Apples → Truck → Juice

🔹 In Power BI / Excel:

You load raw data into your model or sheet
Then transform it using DAX, formulas, or pivot tables
💡 That’s ELT – transformation happens after loading
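
The same contrast in code: a toy Pandas sketch of both orders of operations (the file and column names are invented):

import pandas as pd

raw = pd.DataFrame({'Apples': [10, None, 7]})  # pretend this was extracted from a source

# ETL: transform first, then load the finished result
juice = raw.dropna().rename(columns=str.lower)   # Transform
juice.to_csv('report_etl.csv', index=False)      # Load

# ELT: load the raw data first, transform later in the destination
raw.to_csv('landing_raw.csv', index=False)       # Load
juice_later = pd.read_csv('landing_raw.csv').dropna().rename(columns=str.lower)  # Transform
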
Adaptive Query Execution (AQE) in Apache Spark is a feature introduced to improve query performance dynamically at runtime, based on actual data statistics collected during execution.

This makes Spark smarter and more efficient, especially when dealing with real-world messy data where planning ahead (at compile time) might be misleading.

🔍 Importance of AQE in Spark
Runtime Optimization:

AQE adapts the execution plan on the fly using real-time stats, fixing issues that static planning can't predict.

Better Join Strategy:
If Spark detects at runtime that one table is smaller than expected, it can switch to a broadcast join instead of a slower shuffle join.

Improved Resource Usage:
By optimizing stage sizes and join plans, AQE avoids unnecessary shuffling and memory usage, leading to faster execution and lower cost.


🪓 Handling Data Skew with AQE
Data skew occurs when some partitions (e.g., specific keys) have much more data than others, slowing down those tasks.

AQE handles this using:

Skew Join Optimization:
AQE detects skewed partitions and breaks them into smaller sub-partitions, allowing Spark to process them in parallel instead of waiting on one giant slow task.

Automatic Repartitioning:
It can dynamically adjust partition sizes for better load balancing, reducing the "straggler" effect from skew.


💡 Example:
If a join key like customer_id = 12345 appears millions of times more than others, Spark can split just that key’s data into chunks, while keeping others untouched. This makes the whole join process more balanced and efficient.

In summary, AQE improves performance, handles skew gracefully, and makes Spark queries more resilient and adaptive—especially useful in big, uneven datasets.
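
To try this yourself: AQE is controlled by real Spark SQL config keys (and is on by default since Spark 3.2). A minimal PySpark snippet:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('aqe-demo').getOrCreate()

spark.conf.set('spark.sql.adaptive.enabled', 'true')                      # turn AQE on
spark.conf.set('spark.sql.adaptive.skewJoin.enabled', 'true')             # split skewed partitions
spark.conf.set('spark.sql.adaptive.coalescePartitions.enabled', 'true')   # merge tiny shuffle partitions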