Luminousmen Blog

helping robots conquer the earth and trying not to increase entropy using Python, Data Engineering and Machine Learning

http://luminousmen.com

License: CC BY-NC-ND 4.0
Spark configuration

There are many ways to set configuration properties in Spark, and I keep getting confused about which is the best place to put them.

Among all the ways you can set Spark properties, the priority order determines which values will be respected.

Based on the loading order:

▪️Any values or flags defined in the spark-defaults.conf file will be read first

▪️Then the values specified on the command line via spark-submit or spark-shell

▪️Finally, the values set through SparkSession in the Spark application.

All these properties are merged in the Spark application, with duplicates resolved in favor of the higher-priority source. Thus, for example, the values provided on the command line override the settings in spark-defaults.conf, unless they are overridden again in the application itself.
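A minimal sketch of the highest-priority level, overriding a value from inside the application (the memory values and app name here are illustrative):

```python
from pyspark.sql import SparkSession

# spark-defaults.conf (lowest priority) might say:
#   spark.executor.memory  2g
#
# spark-submit (overrides the file):
#   spark-submit --conf spark.executor.memory=4g app.py

# SparkSession config (highest priority) overrides both of the above
spark = (
    SparkSession.builder
    .appName("config-priority-demo")
    .config("spark.executor.memory", "8g")
    .getOrCreate()
)

# the effective value is the one set at the highest-priority level
print(spark.conf.get("spark.executor.memory"))  # 8g
```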

#spark #big_data
PySpark documentation will follow the numpydoc style. I don't see why; the current Python docs for Spark have always been fine, and more readable than any of the Java docs.

So this:

"""Specifies some hint on the current :class:DataFrame.

:param name: A name of the hint.
:param parameters: Optional parameters.
:return: :class:DataFrame


will be something like this:

"""Specifies some hint on the current :class:DataFrame.

Parameters
----------
name : str
A name of the hint.
parameters : dict, optional
Optional parameters

Returns
-------
DataFrame


Probably it will render as more readable HTML with linking between pages. We'll see.

#spark #python
PySpark configuration provides the spark.python.worker.reuse option, which chooses between forking a Python process for each task and reusing an existing process. If it is set to true, a process pool is created and reused on the executors. It should help avoid expensive serialization, data transfer between the JVM and Python, and even garbage collection.

Though this is more of an impression than the result of systematic tests.
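For reference, a minimal sketch of where this option goes (note that true is already the default in stock Spark, so setting it explicitly only documents the intent):

```python
from pyspark.sql import SparkSession

# reuse Python worker processes across tasks instead of forking
# a new worker per task (true is Spark's default behaviour)
spark = (
    SparkSession.builder
    .appName("worker-reuse-demo")
    .config("spark.python.worker.reuse", "true")
    .getOrCreate()
)
```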

#spark #big_data
partitionOverwriteMode

Sometimes Spark fails to process some partitions, and you need to run the job again to get all the data written correctly.

There are two options here:

1. process and overwrite all data
2. process and overwrite data for the relevant partition

The first option sounds very dumb: doing all the work all over again. But for the second option you need to rewrite the job. Meh, more code means more problems.
Luckily, Spark has the spark.sql.sources.partitionOverwriteMode parameter with a dynamic option. It only overwrites data for the partitions present in the current batch.

This configuration works well in cases where it is possible to overwrite external table metadata with a simple CREATE EXTERNAL TABLE when writing data to an external data store such as HDFS or S3.
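A sketch of how that looks in practice; the table paths and the partition column name are illustrative, only the config key and its dynamic value come from Spark itself:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dynamic-overwrite-demo")
    # only the partitions present in the written DataFrame get replaced;
    # the default ("static") would wipe the whole table first
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .getOrCreate()
)

# reprocess just the failed day and overwrite only that partition
fixed = spark.read.parquet("/staging/events_fixed")
(
    fixed.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/warehouse/events")
)
```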

#spark #big_data
I've combined my experience of publishing a book into a post
Boston Dynamics has shown how the robot dog Spot's arm works.

Now that Spot has an arm in addition to its legs and cameras, it can do mobile manipulation. It finds and picks up objects (trash), cleans up the living room, opens doors, operates switches and valves, tends the garden, and generally has fun.

https://youtu.be/6Zbhvaac68Y
Twitter is opening up its full tweet archive to academic researchers for free.

The company is now opening up access to independent researchers and journalists, though you'll have to be a student or part of an academic institution.

Twitter also says it will not be providing access to data from accounts that have been suspended or banned, which could complicate efforts to study hate speech, misinformation, and other types of conversations that violate Twitter rules.

The Verge
I can't resist sharing the basic principles of OOP
Folks at AWS publish a really great resource for anyone who is designing cloud architecture. Even if you are already using, or thinking about, Azure or GCP, it is a really good read, and not your typical sleep-provoking dry white paper.

AWS did an awesome job packing a lot of practical recommendations, best practices, tips, and suggestions into this document.

The AWS Well-Architected Framework focuses on 5 pillars:

✓Operational Excellence
✓Security
✓Reliability
✓Performance Efficiency
✓Cost Optimization

#aws #big_data
For any error you can say the cause of it is between the monitor and the chair — and it's true, but it doesn't help fix the error in any way. To stand out today you need to bring both hard & soft skills to the table.

https://luminousmen.com/post/soft-skills-guide-for-software-engineer

#soft_skills
Often I rewrite my old articles to align them with my current understanding, and very often I find that I had misunderstood concepts or omitted important questions.

But sometimes when I rewrite, I realize that originally everything was correct - I'm the one thinking bullshit. It's a funny feeling.

It's exhausting to be a perfectionist. Don't be one. But read the article:

Data Lake vs Data Warehouse
Big Data and Go. It has begun.

An HDFS client written in Go. Good for scripting. Since it doesn't have to wait for the JVM to start up, it's also a lot faster than hadoop fs.

https://github.com/colinmarc/hdfs

#big_data
Forwarded from Data Engineering (Dmitry Anoshin)
Kaggle State of Machine Learning and Data Science 2020.pdf
14 MB
Kaggle State of Machine Learning and Data Science 2020
What does the not operator do? It simply yields True if its argument is false, False otherwise. It turns out it's pretty hard to determine what true is.

When you look at the C implementation, the rule seems to be:
1. If True, then True;
2. If False, then False;
3. If None, then False;
4. Whatever __bool__ returns, as long as it returns a bool;
5. Calling len() on the object: True if greater than 0, otherwise False;
6. If none of the above applies, then True.

An in-depth article on the `not` operator in Python from a core developer
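A quick sketch of these rules in action; the class names are mine, made up for illustration:

```python
class ViaBool:
    def __bool__(self):
        return False          # __bool__ decides the truth value

class ViaLen:
    def __len__(self):
        return 0              # no __bool__, so len() == 0 makes it falsy

class Plain:
    pass                      # neither hook defined, so it is truthy

assert (not True) is False    # True stays True, so `not` flips it
assert (not None) is True     # None is falsy
assert (not ViaBool()) is True
assert (not ViaLen()) is True
assert (not Plain()) is False
```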

#python
Data engineering in 2020-2021

Another view on the Data Management landscape. There are 9 mentions of SQL and 5 mentions of BI in the article. SQL is required knowledge for a data engineer, but it's by no means the only requirement nowadays.

The author sees the future of Data Management as a move towards SQL engines, outsourcing the complexity to the platforms. Unfortunately, that's probably true.

Although:
▪️In practice, engineers spend most of their time on the letter "T" in ETL (and not only using SQL). For example, Spark, the most popular data processing framework, is much more than just RDDs today

▪️Those emerging platforms cost a pile of money now. For example, AWS was born in part because of the huge maintenance cost of the Oracle platform.

▪️I'm very skeptical of tools that claim "everyone can build a data product in several easy steps".

Article
PEP: 585

Started trying out the new Python 3.9 release. I don't follow the features that much, but there are things that piss me off, like the implementation of static typing in Python.

Static typing has been built on top of the existing Python runtime incrementally over time. As a consequence, collection hierarchies got duplicated, as an application could use the types from the typing module at the same time as the built-in ones.

This created a bit of confusion, as we had two parallel type systems, not really competing with each other, but we always had to keep an eye out for that parallelism.

Well, now this is over.

Examples of types that previously had to be imported are List, Dict, Set, Tuple, and Optional. Now you can just use the built-in list, dict, set, tuple, etc. directly in annotations.

>>> import typing as T
>>> issubclass(list, T.List)
True


These types can also be parameterized. A parameterized type is a generic type with the expected types of its container elements specified, e.g. list[str].
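A short sketch of the parameterized built-ins on Python 3.9+ (the function name is mine, for illustration):

```python
# built-in collection types are subscriptable now,
# so no `from typing import List, Dict` is needed any more
def word_lengths(words: list[str]) -> dict[str, int]:
    # the annotations use the parameterized built-in generics directly
    return {w: len(w) for w in words}

print(word_lengths(["spark", "python"]))  # {'spark': 5, 'python': 6}

# the parameterized form is a real runtime object, too
assert list[str].__origin__ is list
assert list[str].__args__ == (str,)
```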

PEP 585

#python