Data Engineers
Free Data Engineering Ebooks & Courses
We are now on WhatsApp as well

Follow for more data engineering resources: 👇 https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C
๐Ÿ‘4โค1๐Ÿ”ฅ1
๐Ÿ”ฅ1
Interview Questions for Entry-Level Data Engineers 🔥


1. What are the core responsibilities of a data engineer?

2. Explain the ETL process

3. How do you handle large datasets in a data pipeline?

4. What is the difference between a relational & a non-relational database?

5. Describe how data partitioning improves performance in distributed systems

6. What is a data warehouse & how is it different from a database?

7. How would you design a data pipeline for real-time data processing?

8. Explain the concept of normalization & denormalization in database design

9. What tools do you commonly use for data ingestion, transformation & storage?

10. How do you optimize SQL queries for better performance in data processing?

11. What is the role of Apache Hadoop in big data?

12. How do you implement data security & privacy in data engineering?

13. Explain the concept of data lakes & their importance in modern data architectures

14. What is the difference between batch processing & stream processing?

15. How do you manage & monitor data quality in your pipelines?

16. What are your preferred cloud platforms for data engineering & why?

17. How do you handle schema changes in a production data pipeline?

18. Describe how you would build a scalable & fault-tolerant data pipeline

19. What is Apache Kafka & how is it used in data engineering?

20. What techniques do you use for data compression & storage optimization?
โค4
Here are three PySpark interview scenarios:


Scenario 1: Data Aggregation


Interviewer: "How would you aggregate data by category and calculate the sum of sales, handling missing values and grouping by multiple columns?"


Candidate:


# Load the DataFrame
df = spark.read.csv("path/to/data.csv", header=True, inferSchema=True)

# Handle missing values
df_filled = df.fillna(0)

# Aggregate data
from pyspark.sql.functions import sum, col
df_aggregated = df_filled.groupBy("category", "region").agg(sum(col("sales")).alias("total_sales"))

# Sort the results
df_aggregated_sorted = df_aggregated.orderBy("total_sales", ascending=False)

# Save the aggregated DataFrame
df_aggregated_sorted.write.csv("path/to/aggregated/data.csv", header=True)


Scenario 2: Data Transformation


Interviewer: "How would you transform a DataFrame by converting a column to timestamp, handling invalid dates and extracting specific date components?"


Candidate:


# Load the DataFrame
df = spark.read.csv("path/to/data.csv", header=True, inferSchema=True)

# Convert column to timestamp
from pyspark.sql.functions import to_timestamp, col
df_transformed = df.withColumn("date_column", to_timestamp(col("date_column"), "yyyy-MM-dd"))

# Handle invalid dates
df_transformed_filtered = df_transformed.filter(col("date_column").isNotNull())

# Extract date components
from pyspark.sql.functions import year, month, dayofmonth
df_transformed_extracted = df_transformed_filtered.withColumn("year", year(col("date_column"))).withColumn("month", month(col("date_column"))).withColumn("day", dayofmonth(col("date_column")))

# Save the transformed DataFrame
df_transformed_extracted.write.csv("path/to/transformed/data.csv", header=True)

Scenario 3: Data Partitioning


Interviewer: "How would you partition a large DataFrame by date and save it to parquet format, handling data skewness and optimizing storage?"


Candidate:


# Load the DataFrame
df = spark.read.csv("path/to/data.csv", header=True, inferSchema=True)

# Range-partition by date; range partitioning spreads skewed keys across
# partitions more evenly than the default hash partitioning
df_partitioned = df.repartitionByRange("date_column")

# Save to Parquet, partitioned by date and compressed with Snappy to optimize storage
df_partitioned.write \
    .mode("overwrite") \
    .option("compression", "snappy") \
    .partitionBy("date_column") \
    .parquet("path/to/partitioned/data.parquet")
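
A possible follow-up: once the data is partitioned by date, Spark can prune partitions whenever a query filters on the partition column. A minimal sketch, assuming the same placeholder path and column name as above and a hypothetical cutoff date:

from pyspark.sql.functions import col

# Read the partitioned output back; date_column is now a partition column,
# so filtering on it lets Spark skip entire directories (partition pruning)
df_read = spark.read.parquet("path/to/partitioned/data.parquet")
df_recent = df_read.filter(col("date_column") >= "2024-01-01")  # hypothetical cutoff
df_recent.explain()  # the physical plan should show PartitionFilters on date_column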

Here, you can find Data Engineering Resources 👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C

All the best 👍👍
๐Ÿ‘6โค5
fundamentals-of-data-engineering.pdf
7.6 MB
🚀 A good book to start learning Data Engineering.

You can download it for free here.

⚙ With this practical #book, you'll learn how to plan and build systems that serve the needs of your organization and your customers by evaluating the best technologies available through the framework of the #data #engineering lifecycle.
๐Ÿ‘5โค2
Life of a Data Engineer.....

Business user: Can we add a filter on this dashboard? It will help us track a critical metric.
Me: Sure, this should be a quick one.

Next day:

I quickly opened the dashboard to look for the column in the existing dashboard's data sources -- column not found.

Spent a couple of hours identifying the data source and figuring out how to bring the column into the existing data pipeline that feeds the dashboard (table granularity, join conditions, etc.).

Then come the pipeline changes, data model changes, dashboard changes, and validation/testing.

Finally, deploying to production and sending a simple email to the user that the filter has been added.

A small change on the front end, but a lot of work on the backend to bring that column to life.

Never underestimate data engineers and data pipelines 💪
๐Ÿ‘5๐Ÿ”ฅ1
Don't aim for this:

SQL - 100%
Python - 0%
PySpark - 0%
Cloud - 0%

Aim for this:

SQL - 25%
Python - 25%
PySpark - 25%
Cloud - 25%

You don't need to know everything straight away.

Here, you can find Data Engineering Resources 👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C

All the best 👍👍
โค9๐Ÿ‘4
🔥 ETL vs ELT: What's the Difference?

When it comes to data processing, two key approaches stand out: ETL and ELT. Both involve transforming data, but the processes differ significantly!

🔹 ETL (Extract, Transform, Load)
- Extract data from various sources (databases, APIs, etc.)
- Transform data before loading it into storage (cleaning, aggregating, formatting)
- Load the transformed data into the data warehouse (DWH)

✍️ Key point: Data is transformed before being loaded into storage.

🔹 ELT (Extract, Load, Transform)
- Extract data from sources
- Load raw data into the data warehouse
- Transform the data after it's loaded, using the data warehouse's computational resources

✍️ Key point: Data is loaded into storage first, and transformation happens afterward.

🎯 When to use which?
- ETL is ideal for structured data and traditional systems where pre-processing is crucial.
- ELT is better suited for handling large volumes of data in modern cloud-based architectures.

Which one works best for your project? 🤔
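
To make the difference concrete, here's a minimal PySpark sketch, not a definitive implementation: the source path, the table names, and the columns order_id / amount are placeholders, and Spark's default catalog stands in for a warehouse.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("EtlVsElt").getOrCreate()

# Placeholder source, in the same spirit as the examples above
raw = spark.read.csv("path/to/source.csv", header=True, inferSchema=True)

# ETL: transform in the pipeline first, then load only the curated result
curated = (
    raw.dropna(subset=["order_id"])                          # clean
       .withColumn("amount", col("amount").cast("double"))   # standardize types
)
curated.write.mode("overwrite").parquet("path/to/dwh/orders_curated")

# ELT: load the raw data as-is, then transform inside the warehouse engine with SQL
raw.write.mode("overwrite").saveAsTable("staging_orders_raw")
spark.sql("""
    CREATE TABLE IF NOT EXISTS dwh_orders_curated USING parquet AS
    SELECT order_id, CAST(amount AS DOUBLE) AS amount
    FROM staging_orders_raw
    WHERE order_id IS NOT NULL
""")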
๐Ÿ‘4๐Ÿ”ฅ4๐Ÿฅฐ1
Join our WhatsApp channel for more data engineering resources
👇👇
https://whatsapp.com/channel/0029Vaovs0ZKbYMKXvKRYi3C
๐Ÿ‘6
Importance of ETL.pdf
3.2 MB
๐Ÿ‘11
๐Ÿ‘5๐Ÿ”ฅ2โค1
Working with PySpark Aggregations

What are Aggregations?

Aggregations in PySpark allow you to transform large datasets by computing statistics across specified groups. PySpark offers built-in functions for common aggregations, such as sum, avg, min, max, count, and more.

Common Aggregation Methods in PySpark

1. groupBy(): Groups data by one or more columns and allows applying aggregation functions on each group.

2. agg(): Lets you apply multiple aggregation functions simultaneously.

3. count(): Counts the number of non-null entries.

4. sum(): Adds up the values in a column.

5. avg(): Computes the average of a column.

Example: Using groupBy() and Aggregations

Let's say you have a DataFrame with sales data and want to calculate the total and average sales per salesperson.

from pyspark.sql import SparkSession
from pyspark.sql.functions import sum, avg

# Create Spark session
spark = SparkSession.builder.appName("AggregationExample").getOrCreate()

# Sample data
data = [("Alice", 100), ("Alice", 150), ("Bob", 200), ("Bob", 300)]
df = spark.createDataFrame(data, ["Salesperson", "Sales_Amount"])

# Aggregating data
agg_df = df.groupBy("Salesperson").agg(
    sum("Sales_Amount").alias("Total_Sales"),
    avg("Sales_Amount").alias("Avg_Sales")
)

agg_df.show()

In this example, we used groupBy("Salesperson") to group the data by each salesperson, and agg() to calculate the total and average sales for each.

Real-World Example: Aggregating Product Sales Data

Imagine you're analyzing sales data for a retail store. You might want to know the total sales per product category, the highest and lowest sales amounts, or the average sales per transaction. Aggregations allow you to gain these insights quickly:

# Group by product category and calculate total and average sales
sales_df.groupBy("Product_Category").agg(
    sum("Sales_Amount").alias("Total_Sales"),
    avg("Sales_Amount").alias("Avg_Sales")
).show()

Advanced Aggregation Functions

countDistinct(): Counts unique values in a column.

from pyspark.sql.functions import countDistinct
df.groupBy("Salesperson").agg(countDistinct("Product_ID").alias("Unique_Products_Sold")).show()

approx_count_distinct(): Uses an approximate algorithm to count distinct values, useful for very large datasets.

from pyspark.sql.functions import approx_count_distinct
df.agg(approx_count_distinct("Product_ID")).show()

Windowed Aggregations

Sometimes, aggregations are performed over a "window" rather than over the entire dataset or specific groups. We've covered window functions, but it's useful to know they can be combined with aggregations for tasks like rolling averages.
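
A minimal sketch of a windowed aggregation, assuming sales data like the earlier example plus an illustrative Sale_Date column: a rolling average over the current and previous sale for each salesperson.

from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import avg, col

spark = SparkSession.builder.appName("RollingAverage").getOrCreate()

# The earlier sales example, extended with an illustrative date column
data = [
    ("Alice", "2024-01-01", 100), ("Alice", "2024-01-02", 150),
    ("Bob", "2024-01-01", 200), ("Bob", "2024-01-02", 300),
]
df = spark.createDataFrame(data, ["Salesperson", "Sale_Date", "Sales_Amount"])

# Rolling average over the current row and the previous one,
# computed per salesperson in date order
w = Window.partitionBy("Salesperson").orderBy("Sale_Date").rowsBetween(-1, 0)
df.withColumn("Rolling_Avg_Sales", avg(col("Sales_Amount")).over(w)).show()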
๐Ÿ‘6
Top 100 SQL Interview Questions.pdf
408.3 KB
SQL ASSIGNMENT

#Check your fundamental knowledge
๐Ÿ‘4
๐— ๐—ฎ๐˜€๐˜๐—ฒ๐—ฟ ๐—ฆ๐—ค๐—Ÿ ๐—ณ๐—ผ๐—ฟ ๐—œ๐—ป๐˜๐—ฒ๐—ฟ๐˜ƒ๐—ถ๐—ฒ๐˜„๐˜€, ๐—™๐—ฎ๐˜€๐˜!

Here are 10 must-know SQL concepts:

โ— Stored Procedure vs. Function
Procedures allow DML; functions handle calculations only.

โ— Clustered vs. Non-Clustered Index
Clustered sorts data physically; non-clustered creates pointers.

โ— DELETE vs. TRUNCATE
DELETE is row-specific; TRUNCATE clears all rows fast.

โ— WHERE vs. HAVING
WHERE filters rows; HAVING filters after GROUP BY.

โ— Primary Key vs. Unique Key
Primary is unique & non-null; Unique allows one null.

โ— JOIN Types
INNER, LEFT, RIGHT, FULL JOINโ€”combine tables in different ways.

โ— Normalization Forms
Minimizes redundancy and improves data integrity.

โ— ACID Properties
Ensures reliable transactions with Atomicity, Consistency, Isolation, Durability.

โ— Indexes
Speeds up data retrieval; careful use is key.

โ— Subqueries
Nest queries within queries for flexible data retrieval.

Master these, and youโ€™re SQL-interview ready!
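
As a quick illustration of the WHERE vs. HAVING point, a small hedged sketch using Spark SQL (the orders data and column names are invented for the example): WHERE drops individual rows before grouping, HAVING drops whole groups after GROUP BY.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WhereVsHaving").getOrCreate()

# Invented orders data: (customer_id, status, amount)
spark.createDataFrame(
    [(1, "paid", 120.0), (1, "paid", 80.0), (2, "paid", 40.0), (2, "refunded", 40.0)],
    ["customer_id", "status", "amount"],
).createOrReplaceTempView("orders")

spark.sql("""
    SELECT customer_id, SUM(amount) AS total_paid
    FROM orders
    WHERE status = 'paid'      -- row-level filter, applied before grouping
    GROUP BY customer_id
    HAVING SUM(amount) > 100   -- group-level filter, applied after grouping
""").show()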
๐Ÿ‘10โค2
📌 10 intermediate-level SQL interview questions

1. How would you find the nth highest salary in a table?
2. What is the difference between JOIN, LEFT JOIN, RIGHT JOIN, and FULL JOIN?
3. How would you calculate cumulative sum in SQL?
4. How do you identify duplicate records in a table?
5. Explain the concept of a window function and give examples.
6. How would you retrieve records between two dates in SQL?
7. What is the difference between UNION and UNION ALL?
8. How can you pivot data in SQL?
9. Explain the use of CASE statements in SQL.
10. How do you use common table expressions (CTEs)?

#sql
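
For practice, here is a hedged Spark SQL sketch answering two of these, cumulative sum (question 3) and nth highest salary (question 1), with window functions; the tables and values are made up for illustration.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SqlWindowExamples").getOrCreate()

# Hypothetical tables, created only so the queries below can run
spark.createDataFrame(
    [(1, "2024-01-01", 100.0), (1, "2024-01-02", 50.0), (2, "2024-01-01", 75.0)],
    ["account_id", "txn_date", "amount"],
).createOrReplaceTempView("transactions")

spark.createDataFrame(
    [("Alice", 90000), ("Bob", 80000), ("Cara", 80000), ("Dan", 70000)],
    ["name", "salary"],
).createOrReplaceTempView("employees")

# Question 3: cumulative (running) sum per account, ordered by date
spark.sql("""
    SELECT account_id, txn_date, amount,
           SUM(amount) OVER (PARTITION BY account_id ORDER BY txn_date) AS running_total
    FROM transactions
""").show()

# Question 1: nth highest salary (here n = 2) using DENSE_RANK
spark.sql("""
    SELECT DISTINCT salary
    FROM (
        SELECT salary, DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
        FROM employees
    ) ranked
    WHERE rnk = 2
""").show()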
๐Ÿ‘2