Data Engineering / Инженерия данных / Data Engineer / DWH
Data Engineering: ETL / DWH / Data Pipelines based on open-source software.

DWH / SQL
Python / ETL / ELT / dbt / Spark
Apache Airflow

No ads are placed here.
Questions: @iv_shamaev | datatalks.ru
Data Engineering

A collection of one-off topics or videos that do not fall neatly into any other existing playlist.

1. A Brief History of Data Engineering | What is Data Engineering?
2. How to Become a Data Engineer (with no experience)
3. ETL vs ELT | Modern Data Architectures
4. YAML Tutorial | Learn YAML in 10 Minutes
5. What is Data Streaming?
6. 3 Must-Know Trends for Data Engineers | DataOps
7. What skills do you need as a Data Engineer?
8. What is Reverse ETL?
9. What tools should you know as a Data Engineer?
10. Intro to BASH // Command Line for Beginners
11. Getting Started w/ Airbyte! | Open Source Data Integration
12. Data Warehouse vs Data Lake | Explained (non-technical)
13. Data Modeling in the Modern Data Stack
14. Getting Started w/ Metabase | Open Source Data Visualization Tool
15. What do you actually do as a data engineer?

👉 @devops_dataops

https://www.youtube.com/playlist?list=PLy4OcwImJzBKg3rmROyI_CBBAYlQISkOO
Forwarded from karpov.courses
We have good news: we've made a free Docker course.

Docker is used in data science, software development, data engineering, and even testing! We are confident the program will be useful to anyone who writes code and works with applications.

You will learn to:

● package your own applications into containers;
● spin up ready-made services locally: Airflow, Postgres, ClickHouse, Nginx (see the minimal Compose sketch below);
● bring up and configure full-fledged web applications.

The program will give you the fundamentals to take a step toward even more interesting tools, such as Kubernetes.
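As a hedged illustration of the second bullet (not taken from the course), here is a minimal docker-compose.yml that brings up a local Postgres; the service name, image tag, and credentials are placeholder assumptions:

```yaml
# Hypothetical minimal Compose file: one Postgres service for local work.
services:
  postgres:
    image: postgres:15          # assumed tag; any recent major version works
    environment:                # throwaway credentials for local use only
      POSTGRES_USER: demo
      POSTGRES_PASSWORD: demo
      POSTGRES_DB: demo
    ports:
      - "5432:5432"             # expose the default Postgres port to the host
    volumes:
      - pgdata:/var/lib/postgresql/data   # persist data across restarts

volumes:
  pgdata:
```

Start it with `docker compose up -d`; the same pattern extends to Airflow or ClickHouse by adding more services to the file.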

The course author is Anton Sidorin, a backend developer at karpov.courses.
You can start learning at any time that suits you.

[Get to know Docker]
(DataCamp) Introduction to Airflow in Python

This is a memo sharing what I learnt about Apache Airflow, capturing the learning objectives as well as my personal notes. The course is taught by Mike Metzger of DataCamp and includes four chapters (a minimal DAG sketch follows the links below):
▫️ Intro to Airflow
▫️ Implementing Airflow DAGs
▫️ Maintaining and monitoring Airflow workflows
▫️ Building production pipelines in Airflow

https://github.com/JNYH/DataCamp_Introduction_to_Airflow

Personal Notes:
https://medium.com/swlh/introduction-to-airflow-in-python-67b554f06f0b
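To ground the "Implementing Airflow DAGs" chapter, here is a minimal sketch of a classic Airflow 2.x DAG; the dag_id and the bash command are hypothetical placeholders, not course material:

```python
# Minimal sketch of a classic Airflow 2.x DAG: one BashOperator task on a
# daily schedule. The dag_id and command are made-up placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_etl",              # hypothetical name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",        # cron strings also work, e.g. "0 6 * * *"
    catchup=False,                     # skip backfilling past runs on first start
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command='echo "extracting..."',
    )
```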
The move data conference starts in 30 minutes (at 21:00 Moscow time)

https://movedata.airbyte.com/
Airflow Tutorial for Beginners - Full Course in 2 Hours 2022

Throughout the course, you will learn (a TaskFlow/XCom sketch follows the video link below):
00:00 - Airflow Introduction
03:06 - Run Airflow in Python Env
10:44 - Run Airflow in Docker
17:55 - Airflow Basics and Core Concepts
21:55 - Airflow Task Lifecycle
26:19 - Airflow Basic Architecture
28:14 - Airflow DAG with Bash Operator
40:09 - Airflow DAG with Python Operator
45:04 - Data Sharing via Airflow XComs
52:53 - Airflow Task Flow API
57:56 - Airflow Catch-Up and Backfill
01:02:09 - Airflow Scheduler with Cron Expression
01:07:25 - Airflow Connection to Postgres
01:08:58 - Airflow Postgres Operator
01:19:30 - Airflow Docker Install Python Package 2 ways
01:29:34 - Airflow AWS S3 Sensor Operator
01:42:37 - Airflow Hooks S3 PostgreSQL
02:00:43 - Course Bonus

https://www.youtube.com/watch?v=K9AnJ9_ZAXE
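As a taste of the "Data Sharing via Airflow XComs" and "Airflow Task Flow API" sections above, here is a minimal hedged sketch (all names are made up) of two tasks passing data implicitly through XCom:

```python
# Sketch of the TaskFlow API: the return value of extract() is pushed to
# XCom automatically and handed to load() as a plain Python argument.
from datetime import datetime

from airflow.decorators import dag, task

@dag(
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
)
def taskflow_demo():                   # hypothetical DAG name
    @task
    def extract() -> dict:
        return {"rows": 42}            # serialized into XCom behind the scenes

    @task
    def load(payload: dict) -> None:
        print(f"loading {payload['rows']} rows")

    load(extract())                    # passing the output also sets the dependency

taskflow_demo()
```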
Apache Spark / PySpark Tutorial: Basics In 15 Mins

This video gives an introduction to the Spark ecosystem and the world of Big Data, using the Python programming language and its PySpark API. We also discuss the ideas of parallel and distributed computing, and computing on a cluster of machines.

https://youtu.be/QLQsW8VbTN4
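In the same spirit, here is a minimal PySpark sketch (the data and column names are made up) showing the lazy, distributed DataFrame model the video introduces:

```python
# Minimal PySpark session: a DataFrame is split into partitions and the
# aggregation runs in parallel across them. Data here is illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-basics").getOrCreate()

df = spark.createDataFrame(
    [("a", 1), ("b", 2), ("a", 3)],
    ["key", "value"],
)

# Transformations are lazy; the job only runs when .show() forces an action.
df.groupBy("key").agg(F.sum("value").alias("total")).show()

spark.stop()
```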
Source: https://www.linkedin.com/posts/timo-dechau_in-our-little-data-world-are-we-naming-things-activity-6925303646817529856-Nu-U/
---
In our little data world, are we naming things too much from a marketing perspective? And is there serious over-selling going on?

Maybe yes.

Let’s do some examples:

dbt is not a data modeling tool. I see this notion quite often. It is first and foremost a SQL orchestration and testing tool. Of course, I can use it to build and manage a data model, but that requires me to do the thinking, not dbt.

Snowflake and BigQuery are not data warehouses. Great people like Rogier Werschkull and Chad Sanderson remind us of that. They are analytical databases in the cloud. Of course, you can build a data warehouse with them, but that requires you to come up with a concept and an architecture.

Fivetran and Airbyte are not ELT tools - they extract and load for you, and you are in charge of the transformation. They are basically supermarkets with self-checkout: a great idea, but you have to do more of the work yourself.

Segment and RudderStack are not really CDPs - Arpit Choudhury has written a great piece about this - they are customer data infrastructure: the collection and identity-stitching layer.

Reverse ETL is just ETL.


Why is this important?

Because often these labels create expectations about the solution that these tools can’t fulfill.

When I set up Snowflake and think that I now have a data warehouse, I create huge expectations in my organization that I can't fulfill.

Same with dbt: OK, we need a data model, let's use dbt for this. And then you add one SQL file after another and call it a model.

Tools are tools, just that.