What is the fastest way to build a string from many substrings in a loop? In other words, how do we concatenate fast when we don't know in advance how many strings we have? There are many discussions about it, and the common advice is that strings are immutable, so it's better to collect the parts in a list and then str.join it. Let's not trust anyone and just check it.
The straightforward solution:
%%timeit
s = ''
for _ in range(10*8):
    s += 'a'
# 4.04 µs ± 256 ns per loop
Using lists:
%%timeit
a = []
for _ in range(10*8):
    a.append('a')
''.join(a)
# 4.06 µs ± 144 ns per loop
So, it's about the same. But we can go deeper. What about generator expressions?
%%timeit
''.join('a' for _ in range(10*8))
# 3.56 µs ± 95.9 ns per loop
A bit faster. What if we use list comprehensions instead?
%%timeit
''.join(['a' for _ in range(10*8)])
# 2.52 µs ± 42.1 ns per loop
Wow, this is 1.6x faster than what we had before. Can you make it faster?
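For completeness, in the degenerate case of these toy benchmarks, where every piece is the same constant string, the real winner is string repetition; this is a side note, not one of the original measurements:

```python
# When all substrings are identical, multiplication beats any join:
# it allocates the result once instead of looping in Python.
n = 10 * 8
fast = 'a' * n

# Same result as the list-comprehension version above.
slow = ''.join(['a' for _ in range(n)])
assert fast == slow
```

Of course, real concatenation involves different substrings, which is why the join-based variants above are the interesting ones.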
And there should be a disclaimer:
1. Avoid premature optimization, value readability over performance when using a bit slower operation is tolerable.
2. If you think that something is slow, prove it first. It can be different in your case.
from base64 import b64decode
from random import choice
CELLS = '~' * 12 + '¢•*@&.;,"'
def tree(max_width):
    yield '/⁂\\'.center(max_width)
    for width in range(3, max_width - 1, 2):
        row = '/'
        for _ in range(width):
            row += choice(CELLS)
        row += '\\'
        yield row.center(max_width)
    yield "'| |'".center(max_width)
    yield " | | ".center(max_width)
    yield '-' * max_width
    title = b'SGFwcHkgTmV3IFllYXIsIEBweXRob25ldGMh'
    yield b64decode(title).decode().center(max_width)
for row in tree(40):
    print(row)
Today Guido van Rossum posted a Python riddle:
x = 0
y = 0
def f():
    x = 1
    y = 1
    class C:
        print(x, y) # What does this print?
        x = 2
f()
The answer is 0 1.
The first tip is that if you replace the class with a function, it will fail:
x = 0
y = 0
def f():
    x = 1
    y = 1
    def f2():
        print(x, y)
        x = 2
    f2()
f()
# UnboundLocalError: local variable 'x' referenced before assignment
Why so? The answer can be found in the documentation (see Execution model):
> If a variable is used in a code block but not defined there, it is a free variable.
So, y is a free variable but x isn't, which is why they behave differently. x is assigned inside f2, which makes it local to f2; when you try to use it before that assignment, the code fails at runtime because the variable isn't defined yet in the current scope.
Let's disassemble the snippet above:
import dis
dis.dis("""[insert here the previous snippet]""")
It outputs a lot of different things, this is the part we're interested in:
8 0 LOAD_GLOBAL 0 (print)
2 LOAD_FAST 0 (x)
4 LOAD_DEREF 0 (y)
6 CALL_FUNCTION 2
8 POP_TOP
Indeed, x and y have different instructions, and the distinction is made at bytecode-compilation time. Now, what's different for a class scope?
import dis
dis.dis("""[insert here the first code snippet]""")
This is the same dis part for the class:
8 8 LOAD_NAME 3 (print)
10 LOAD_NAME 4 (x)
12 LOAD_CLASSDEREF 0 (y)
14 CALL_FUNCTION 2
16 POP_TOP
So, the class scope behaves differently: x and y are loaded with LOAD_FAST and LOAD_DEREF in a function but with LOAD_NAME and LOAD_CLASSDEREF in a class.
The same documentation page explains why the behavior is different:
> Class definition blocks and arguments to exec() and eval() are special in the context of name resolution. A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution with an exception that unbound local variables are looked up in the global namespace.
In other words, if a variable in the class definition is unbound, it is looked up in the global namespace, skipping the enclosing nonlocal scope.
It was a long break but tomorrow we start again. We have plenty of ideas for posts but don't always have time to write them. So, this is how you can help us:
+ If you have something to tell about Python (syntax, stdlib, PEPs), check if it already was posted. If not, write a post, send it to us, and we will publish it. It will include your name (if you want to), we don't steal content ;)
+ If you don't have an idea, just contact us, we have plenty of them! And if you like it, the algorithm is the same as above: write a post, send it, we publish it with your name.
+ If you don't have time to write posts but still want to help, consider donating a bit of money, links are in the channel description. If we get enough, we can take a one-day vacation and invest it exclusively into writing posts.
+ If you see a bug or typo in a post, please, let us know!
And speaking of bugs, there are a few in recent posts that our lovely subscribers have reported:
+ post #641, reported by @recursing. functools.cache isn't faster than functools.lru_cache(maxsize=None), it is exactly the same. The confusion comes from the documentation, which says "this is smaller and faster than lru_cache() WITH A SIZE LIMIT".
+ post #644, reported by @el71Gato. It should be 10**8 instead of 10*8. We've re-run the benchmarks with these values, the relative numbers are the same, so all conclusions still hold.
Welcome to season 2.5 :)
Let's talk a bit more about scopes.
Any class and function can implicitly use variables from the global scope:
v = 'global'
def f():
    print(f'{v=}')
f()
# v='global'
Or from any other enclosing scope, even if it is defined after the function definition:
def f():
    v1 = 'local1'
    def f2():
        def f3():
            print(f'{v1=}')
            print(f'{v2=}')
        v2 = 'local2'
        f3()
    f2()
f()
# v1='local1'
# v2='local2'
Class body is a tricky case. It is not considered an enclosing scope for functions defined inside of it:
v = 'global'
class A:
    v = 'local'
    print(f'A {v=}')
    def f():
        print(f'f {v=}')
# A v='local'
A.f()
# f v='global'
Any enclosing variable can be shadowed in the local scope without affecting the global one:
v = 'global'
def f():
    v = 'local'
    print(f'f {v=}')
f()
# f v='local'
print(f'{v=}')
# v='global'
And if you try to use a variable and then shadow it, the code will fail at runtime:
v = 'global'
def f():
    print(v)
    v = 'local'
f()
# UnboundLocalError: local variable 'v' referenced before assignment
If you want to re-define the global variable instead of locally shadowing it, it can be achieved using the global and nonlocal statements:
v = 'global'
def f():
    global v
    v = 'local'
    print(f'f {v=}')
f()
# f v='local'
print(f'g {v=}')
# g v='local'
def f1():
    v = 'non-local'
    def f2():
        nonlocal v
        v = 'local'
        print(f'f2 {v=}')
    f2()
    print(f'f1 {v=}')
f1()
# f2 v='local'
# f1 v='local'
Also, global can be used to skip non-local definitions:
v = 'global'
def f1():
    v = 'non-local'
    def f2():
        global v
        print(f'f2 {v=}')
    f2()
f1()
# f2 v='global'
That said, using global and nonlocal is considered a bad practice that complicates testing and usage of the code. If you want a global state, think about whether it can be achieved in another way. If you desperately need a global state, consider using the singleton pattern, which is a little bit better.
Let's learn a bit more about string performance. What if instead of an unknown amount of strings we have only a few known variables?
s1 = 'hello, '
s2 = '@pythonetc'
%timeit s1+s2
# 56.7 ns ± 6.17 ns per loop
%timeit ''.join([s1, s2])
# 110 ns ± 6.09 ns per loop
%timeit '{}{}'.format(s1, s2)
# 63.3 ns ± 6.69 ns per loop
%timeit f'{s1}{s2}'
# 57 ns ± 5.43 ns per loop
No surprises here, + and f-strings are equally good, str.format is quite close. But what if we have numbers instead?
n1 = 123
n2 = 456
%timeit str(n1)+str(n2)
# 374 ns ± 7.09 ns per loop
%timeit '{}{}'.format(n1, n2)
# 249 ns ± 4.73 ns per loop
%timeit f'{n1}{n2}'
# 208 ns ± 3.49 ns per loop
In this case, formatting is faster because it doesn't create intermediate strings. However, there is something else about f-strings. Let's measure how long it takes just to convert an int into a str:
%timeit str(n1)
# 138 ns ± 4.86 ns per loop
%timeit '{}'.format(n1)
# 148 ns ± 3.49 ns per loop
%timeit format(n1, '')
# 91.8 ns ± 6.12 ns per loop
%timeit f'{n1}'
# 63.8 ns ± 6.13 ns per loop
Wow, f-strings are twice as fast as plain str! This is because f-strings are part of the grammar but str is just a function that requires function-lookup machinery:
import dis
dis.dis("f'{n1}'")
1 0 LOAD_NAME 0 (n1)
2 FORMAT_VALUE 0
4 RETURN_VALUE
dis.dis("str(n1)")
1 0 LOAD_NAME 0 (str)
2 LOAD_NAME 1 (n1)
4 CALL_FUNCTION 1
6 RETURN_VALUE
And once more, disclaimer: readability is more important than performance until proven otherwise. Use your knowledge with caution :)
Types str and bytes are immutable. As we learned in previous posts, + is optimized for str, but sometimes you need a genuinely mutable type. For such cases, there is the bytearray type. It is a "hybrid" of bytes and list:
b = bytearray(b'hello, ')
b.extend(b'@pythonetc')
b
# bytearray(b'hello, @pythonetc')
b.upper()
# bytearray(b'HELLO, @PYTHONETC')
The type bytearray has all methods of both bytes and list except sort:
set(dir(bytearray)) ^ (set(dir(bytes)) | set(dir(list)))
# {'__alloc__', '__class_getitem__', '__getnewargs__', '__reversed__', 'sort'}
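If you do need a sorted bytearray, a possible workaround (an addition to the post, not from it) is to rebuild it from sorted(), since iterating a bytearray yields the byte values as ints and the bytearray constructor accepts an iterable of ints:

```python
b = bytearray(b'pythonetc')
# sorted() returns a list of ints (the byte values);
# bytearray() turns that list back into bytes.
sorted_b = bytearray(sorted(b))
assert sorted_b == bytearray(b'cehnoptty')
```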
If you're looking for reasons why there is no bytearray.sort, the only answer we found is stackoverflow.com/a/22783330/8704691.
Suppose you have 10 lists:
lists = [list(range(10_000)) for _ in range(10)]
What's the fastest way to join them into one? To have a baseline, let's just + everything together:
s = lists
%timeit s[0] + s[1] + s[2] + s[3] + s[4] + s[5] + s[6] + s[7] + s[8] + s[9]
# 1.65 ms ± 25.1 µs per loop
Now, let's try functools.reduce. It should be about the same but cleaner, and it doesn't require knowing in advance how many lists we have:
from functools import reduce
from operator import add
%timeit reduce(add, lists)
# 1.65 ms ± 27.2 µs per loop
Good, about the same speed. However, reduce is not considered "pythonic" anymore, which is why it was moved from the built-ins into functools. A more beautiful way to do it is using sum:
%timeit sum(lists, start=[])
# 1.64 ms ± 83.8 µs per loop
Short and simple. Now, can we make it faster? What if we itertools.chain everything together?
from itertools import chain
%timeit list(chain(*lists))
# 599 µs ± 20.4 µs per loop
Wow, this is about 3 times faster. Can we do better? Let's try something more straightforward:
%%timeit
r = []
for lst in lists:
r.extend(lst)
# 250 µs ± 5.96 µs per loop
It turns out that the most straightforward and simple solution is the fastest one.
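Another common spelling of the same loop is a nested list comprehension; we haven't benchmarked it here, but it produces the same result in one expression:

```python
lists = [[1, 2], [3], [4, 5, 6]]

# The loop-and-extend version from above...
r = []
for lst in lists:
    r.extend(lst)

# ...is equivalent to a nested comprehension:
flat = [x for lst in lists for x in lst]
assert flat == r == [1, 2, 3, 4, 5, 6]
```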
Starting from Python 3.8, the interpreter warns about is comparison of literals.
Python 3.7:
>>> 0 is 0
True
Python 3.8:
>>> 0 is 0
<stdin>:1: SyntaxWarning: "is" with a literal. Did you mean "=="?
True
The reason is that this is an infamous Python gotcha. While == does value comparison (implemented by calling the __eq__ magic method, in a nutshell), is compares memory addresses of objects. The two happen to agree for ints from -5 to 256 because CPython caches them, but not for ints outside this range or for objects of other types:
a = -5
a is -5 # True
a = -6
a is -6 # False
a = 256
a is 256 # True
a = 257
a is 257 # False
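To see the cache in action without triggering the SyntaxWarning, we can construct the ints at runtime; note this relies on a CPython implementation detail:

```python
# int() creates the value at runtime, so there is no "is with a literal" warning.
a = int('256')
b = int('256')
assert a is b  # CPython reuses cached ints from -5 to 256

c = int('257')
d = int('257')
assert c == d        # equal values...
assert c is not d    # ...but two separate objects outside the cache
```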
Floating point numbers in Python, as in most modern languages, are implemented according to IEEE 754. The most interesting and hardcore part is "arithmetic formats", which defines a few special values:
+ inf and -inf representing infinity.
+ nan representing a special "Not a Number" value.
+ -0.0 representing "negative zero".
Negative zero is the easiest case: for all operations it is considered the same as positive zero:
-.0 == .0 # True
-.0 < .0 # False
Nan returns False for all comparison operations (except !=), including comparison with inf:
import math
math.nan < 10 # False
math.nan > 10 # False
math.nan < math.inf # False
math.nan > math.inf # False
math.nan == math.nan # False
math.nan != 10 # True
And all binary operations on nan return nan:
math.nan + 10 # nan
1 / math.nan # nan
You can read more about nan in previous posts:
+ https://t.me/pythonetc/561
+ https://t.me/pythonetc/597
Infinity is bigger than anything else (except nan). However, unlike in pure math, infinity is equal to infinity:
10 < math.inf # True
math.inf == math.inf # True
The sum of positive and negative infinity is nan:
-math.inf + math.inf # nan
Infinity has interesting behavior in division operations. Some of the results are expected, some are surprising. Without further ado, here is a table:
truediv (/)
| -8 | 8 | -inf | inf
-8 | 1.0 | -1.0 | 0.0 | -0.0
8 | -1.0 | 1.0 | -0.0 | 0.0
-inf | inf | -inf | nan | nan
inf | -inf | inf | nan | nan
floordiv (//)
| -8 | 8 | -inf | inf
-8 | 1 | -1 | 0.0 | -1.0
8 | -1 | 1 | -1.0 | 0.0
-inf | nan | nan | nan | nan
inf | nan | nan | nan | nan
mod (%)
| -8 | 8 | -inf | inf
-8 | 0 | 0 | -8.0 | inf
8 | 0 | 0 | -inf | 8.0
-inf | nan | nan | nan | nan
inf | nan | nan | nan | nan
The code used to generate the table:
import operator
cases = (-8, 8, float('-inf'), float('inf'))
ops = (operator.truediv, operator.floordiv, operator.mod)
for op in ops:
    print(op.__name__)
    row = ['{:4}'.format(x) for x in cases]
    print(' ' * 6, ' | '.join(row))
    for x in cases:
        row = ['{:4}'.format(x)]
        for y in cases:
            row.append('{:4}'.format(op(x, y)))
        print(' | '.join(row))
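Because nan is not equal even to itself, equality checks can't detect these special values; the math module provides predicates for that (a small supplement to the post):

```python
import math

nan = float('nan')
# nan == nan is False, so use math.isnan instead of ==
assert nan != nan
assert math.isnan(nan)

# math.isinf catches both infinities
assert math.isinf(math.inf)
assert math.isinf(-math.inf)

# math.isfinite rejects nan, inf and -inf at once
assert math.isfinite(0.0)
assert not math.isfinite(nan)
```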
PEP-589 (landed in Python 3.8) introduced typing.TypedDict as a way to annotate dicts:
from typing import TypedDict
class Movie(TypedDict):
    name: str
    year: int
movie: Movie = {
    'name': 'Blade Runner',
    'year': 1982,
}
It cannot have keys that aren't explicitly specified in the type:
movie: Movie = {
    'name': 'Blade Runner',
    'year': 1982,
    'director': 'Ridley Scott', # fails type checking
}
Also, all specified keys are required by default, but this can be changed by passing total=False:
movie: Movie = {} # fails type checking
class Movie2(TypedDict, total=False):
    name: str
    year: int
movie2: Movie2 = {} # ok
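Keep in mind that TypedDict exists only for static checkers: at runtime the result is a plain dict, and nothing is validated (a quick check, reusing the Movie class from above):

```python
from typing import TypedDict

class Movie(TypedDict):
    name: str
    year: int

movie = Movie(name='Blade Runner', year=1982)
# At runtime it's just a dict, not a special class...
assert type(movie) is dict
# ...and nothing stops you from breaking the contract at runtime;
# only a static checker would complain about this:
movie['year'] = 'not a number'
```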
PEP-526, which introduced syntax for variable annotations (landed in Python 3.6), allows annotating any valid assignment target:
class C: pass
c = C()
c.x: int = 0
c.y: int
d = {}
d['a']: int = 0
d['b']: int
The last line is the most interesting one. Adding annotations to an expression suppresses its execution:
d = {}
# fails
d[1]
# KeyError: 1
# nothing happens
d[1]: 1
Despite being a part of the PEP, it's not supported by mypy:
$ cat tmp.py
d = {}
d['a']: int
d['b']: str
reveal_type(d['a'])
reveal_type(d['b'])
$ mypy tmp.py
tmp.py:2: error: Unexpected type declaration
tmp.py:3: error: Unexpected type declaration
tmp.py:4: note: Revealed type is 'Any'
tmp.py:5: note: Revealed type is 'Any'
In most programming languages (like C, PHP, Go, Rust) values can be passed into a function either by value or by reference (pointer):
+ Call by value means that the value of the variable is copied, so any modifications of the argument inside the function won't affect the original value. This is an example of how it works in Go:
package main

func f(v2 int) {
    v2 = 2
    println("f v2:", v2)
    // Output: f v2: 2
}

func main() {
    v1 := 1
    f(v1)
    println("main v1:", v1)
    // Output: main v1: 1
}
+ Call by reference means that all modifications that are done by the function, including reassignment, will modify the original value:
package main

func f(v2 *int) {
    *v2 = 2
    println("f v2:", *v2)
    // Output: f v2: 2
}

func main() {
    v1 := 1
    f(&v1)
    println("main v1:", v1)
    // Output: main v1: 2
}
So, which one is used in Python? Well, neither.
In Python, the caller and the function share the same value:
def f(v2: list):
    v2.append(2)
    print('f v2:', v2)
    # f v2: [1, 2]
v1 = [1]
f(v1)
print('v1:', v1)
# v1: [1, 2]
However, the function can't replace the value (reassign the variable):
def f(v2: int):
    v2 = 2
    print('f v2:', v2)
    # f v2: 2
v1 = 1
f(v1)
print('v1:', v1)
# v1: 1
This approach is called Call by sharing. That means the argument is always passed into a function as a copy of the pointer. So, both variables point to the same boxed object in memory but if the pointer itself is modified inside the function, it doesn't affect the caller code.
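We can observe the shared pointer directly with id(), which returns the identity of an object (a small illustration, not from the original post):

```python
def f(v2):
    # the parameter is a copy of the pointer:
    # it refers to the very same object as the caller's variable
    return id(v2)

v1 = [1]
assert f(v1) == id(v1)

# reassigning inside the function only rebinds the local copy,
# the caller's variable keeps pointing at the old object
def g(v2):
    v2 = [2]
    return id(v2)

assert g(v1) != id(v1)
```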
What if we want to modify a collection inside a function but don't want these modifications to affect the caller code? Then we should explicitly copy the value.
For this purpose, all mutable built-in collections provide the .copy method:
def f(v2):
    v2 = v2.copy()
    v2.append(2)
    print(f'{v2=}')
    # v2=[1, 2]
v1 = [1]
f(v1)
print(f'{v1=}')
# v1=[1]
Custom objects (and built-in collections too) can be copied using copy.copy:
import copy
class C:
    pass

def f(v2: C):
    v2 = copy.copy(v2)
    v2.p = 2
    print(f'{v2.p=}')
    # v2.p=2
v1 = C()
v1.p = 1
f(v1)
print(f'{v1.p=}')
# v1.p=1
However, copy.copy copies only the object itself but not the underlying objects:
v1 = [[1]]
v2 = copy.copy(v1)
v2.append(2)
v2[0].append(3)
print(f'{v1=}, {v2=}')
# v1=[[1, 3]], v2=[[1, 3], 2]
So, if you need to copy all subobjects recursively, use copy.deepcopy:
v1 = [[1]]
v2 = copy.deepcopy(v1)
v2[0].append(2)
print(f'{v1=}, {v2=}')
# v1=[[1]], v2=[[1, 2]]
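Custom classes can hook into both functions via the __copy__ and __deepcopy__ magic methods; here is a minimal sketch of that protocol (the Point class is a made-up example):

```python
import copy

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __copy__(self):
        # called by copy.copy: a shallow clone
        return Point(self.x, self.y)

    def __deepcopy__(self, memo):
        # called by copy.deepcopy; memo prevents infinite recursion
        # for objects that (indirectly) reference themselves
        return Point(copy.deepcopy(self.x, memo), copy.deepcopy(self.y, memo))

p1 = Point(1, [2])
p2 = copy.deepcopy(p1)
p2.y.append(3)
assert p1.y == [2] and p2.y == [2, 3]
```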
Python uses eager evaluation. When a function is called, all its arguments are evaluated from left to right and only then their results are passed into the function:
print(print(1) or 2, print(3) or 4)
# 1
# 3
# 2 4
Operators and and or are lazy: the right operand is evaluated only if needed (for or, if the left value is falsy; for and, if the left value is truthy):
print(1) or print(2) and print(3)
# 1
# 2
For mathematical operators, the precedence is how it is in math:
1 + 2 * 3
# 7
The most interesting case is the ** (power) operator, which is (supposedly the only thing in Python that is) evaluated from right to left:
2 ** 3 ** 4 == 2 ** (3 ** 4)
# True
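Another precedence surprise worth remembering alongside this one: unary minus binds less tightly than **, so -2 ** 2 is -(2 ** 2):

```python
# ** binds tighter than unary minus...
assert -2 ** 2 == -4
assert (-2) ** 2 == 4

# ...and chains right-to-left, matching the example above
assert 2 ** 3 ** 2 == 2 ** 9 == 512
```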
Most of the exceptions raised from the standard library or built-ins have a quite descriptive self-contained message:
try:
    [][0]
except IndexError as e:
    exc = e
exc.args
# ('list index out of range',)
However, KeyError is different: instead of a user-friendly error message, it contains the key that is missing:
try:
    {}[0]
except KeyError as e:
    exc = e
exc.args
# (0,)
So, if you log an exception as a string, make sure you save the class name (and the traceback) as well, or at least use repr instead of str:
repr(exc)
# 'KeyError(0)'
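In f-strings, the !r conversion gives you repr for free, which is handy when building such log messages (a small addition to the post):

```python
try:
    {}[0]
except KeyError as e:
    exc = e

# str() loses the class name, repr() keeps it
assert str(exc) == '0'
assert repr(exc) == 'KeyError(0)'
# !r in an f-string applies repr() to the value
assert f'lookup failed: {exc!r}' == 'lookup failed: KeyError(0)'
```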
When something fails, usually you want to log it. Let's have a look at a small toy example:
from logging import getLogger
logger = getLogger(__name__)
channels = {}
def update_channel(slug, name):
    try:
        old_name = channels[slug]
    except KeyError as exc:
        logger.error(repr(exc))
    ...
update_channel('pythonetc', 'Python etc')
# Logged: KeyError('pythonetc')
This example has a few issues:
+ There is no explicit log message. So, when it fails, you can't search in the project where this log record comes from.
+ There is no traceback. When the try block execution is more complicated, we want to be able to track where exactly in the call stack the exception occurred. To achieve this, logger methods provide the exc_info argument. When it is set to True, the current exception with its traceback is added to the log message.
So, this is how we can do it better:
def update_channel(slug, name):
    try:
        old_name = channels[slug]
    except KeyError as exc:
        logger.error('channel not found', exc_info=True)
    ...
update_channel('pythonetc', 'Python etc')
# channel not found
# Traceback (most recent call last):
# File "...", line 3, in update_channel
# old_name = channels[slug]
# KeyError: 'pythonetc'
Also, the logger provides a convenient method exception, which is the same as error with exc_info=True:
logger.exception('channel not found')
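Here is a self-contained sketch showing that logger.exception captures the traceback; the StringIO handler is only there so we can inspect the output:

```python
import io
import logging

logger = logging.getLogger('pythonetc-demo')
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

try:
    {}['pythonetc']
except KeyError:
    # same as logger.error('channel not found', exc_info=True)
    logger.exception('channel not found')

output = stream.getvalue()
# the output contains both the message and the full traceback
assert 'channel not found' in output
assert 'Traceback (most recent call last)' in output
assert "KeyError: 'pythonetc'" in output
```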
Let's have a look at the following log message:
import logging
logger = logging.getLogger(__name__)
logger.warning('user not found')
# user not found
When this message is logged, it can be hard based on it alone to reproduce the given situation, to understand what went wrong. So, it's good to provide some additional context. For example:
user_id = 13
logger.warning(f'user #{user_id} not found')
That's better, now we know which user it was. However, it's hard to work with such messages. For example, suppose we want a notification when the same type of error message occurs too many times in a minute. Before, it was one error message, "user not found"; now we get a different message for every user. Or another example: we want to find all messages related to the same user. If we just search for "13", we will get many false positives where "13" means something else, not user_id.
The solution is structured logging: store all additional values as separate fields instead of mixing everything into one text message. In Python, it can be achieved by passing the variables as the extra argument. Most logging libraries will recognize and store everything passed into extra. For example, this is how it looks in python-json-logger:
from pythonjsonlogger import jsonlogger
logger = logging.getLogger()
handler = logging.StreamHandler()
formatter = jsonlogger.JsonFormatter()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('user not found', extra=dict(user_id=13))
# {"message": "user not found", "user_id": 13}
However, the default formatter doesn't show extra:
logger = logging.getLogger()
logger.warning('user not found', extra=dict(user_id=13))
# user not found
So, if you use extra, stick to the third-party formatter you use or write your own.
A multiline string literal preserves every symbol between the opening and closing quotes, including indentation:
def f():
    return """
    hello
     world
    """
f()
# '\n    hello\n     world\n    '
A possible solution is to remove indentation, Python will still correctly parse the code:
def f():
    return """
hello
 world
"""
f()
# '\nhello\n world\n'
However, it's difficult to read because it looks like the literal is outside of the function body but it's not. So, a much better solution is not to break the indentation but instead remove it from the string content using textwrap.dedent:
from textwrap import dedent
def f():
    return dedent("""
        hello
         world
        """)
f()
# '\nhello\n world\n'
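A related stdlib helper is inspect.cleandoc, which removes the common indentation and also strips the leading and trailing blank lines, so you don't end up with the '\n' prefix (an alternative to dedent, not from the original post):

```python
from inspect import cleandoc

def f():
    # cleandoc removes the common indentation AND the
    # leading/trailing blank lines, unlike textwrap.dedent
    return cleandoc("""
        hello
         world
        """)

assert f() == 'hello\n world'
```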