In Python, when you open a file using the open function, you can read its content with read or readline. read reads the whole content of the file at once, while readline reads the file one line at a time.

NOTE: if the file is huge, calling read() without a size parameter is definitely a bad idea, as it loads the whole file into memory.

NOTE: it is good practice to use the with keyword when dealing with file objects. The advantage is that the file is properly closed after its suite finishes, even if an exception is raised at some point. (We reviewed with in depth a couple of days ago.)

NOTE: the read function accepts a size parameter that specifies the size of the chunk read from the file. If the end of the file has been reached, f.read() will return an empty string.

For reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code:
>>> for line in f:
...     print(line, end='')
...
This is the first line of the file.
Second line of the file
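A minimal sketch tying the notes above together (the file name sample.txt and its content are placeholders): chunked reading with read(size), the empty-string end-of-file check, and line iteration, all inside with blocks.

```python
# First create a small sample file to read back (placeholder name and content).
with open("sample.txt", "w") as f:
    f.write("This is the first line of the file.\nSecond line of the file\n")

# read(size): read fixed-size chunks; read() returns '' once the end is reached.
chunks = []
with open("sample.txt") as f:
    while True:
        chunk = f.read(16)   # size parameter: at most 16 characters per call
        if chunk == "":      # empty string signals end of file
            break
        chunks.append(chunk)

# Memory-efficient alternative: loop over the file object line by line.
with open("sample.txt") as f:
    for line in f:
        print(line, end="")
```

Because the with blocks close the file automatically, this stays correct even if an exception is raised mid-read.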
#python #file #read #readline #efficiency
MongoDB has a top-like utility, similar to the Linux top command, that displays how much time was spent on reads, writes, and in total for every namespace (collection). To run mongotop you just need to run:

mongotop

The output is something like below:
root@hs-1:~# mongotop
2018-01-09T13:42:42.177+0000 connected to: 127.0.0.1
ns total read write 2018-01-09T13:42:43Z
users.profile 28ms 28ms 0ms
authz.tokens 7ms 7ms 0ms
mielin.obx 3ms 3ms 0ms
conduc.contacts 1ms 1ms 0ms
admin.system.roles 0ms 0ms 0ms
The command above reports every second; to change the interval, pass it in seconds, e.g. mongotop 5. If you want the result in JSON, use mongotop --json. If you want to return the result once and exit, use mongotop --rowcount 1.
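If you need to post-process the report programmatically, a small sketch like the one below can split the tabular output into records; the sample text is copied from the output shown above.

```python
# Parse mongotop's tabular output into per-namespace records.
# The sample below is copied from the example output above.
sample = """\
                    ns    total    read    write    2018-01-09T13:42:43Z
         users.profile     28ms    28ms      0ms
          authz.tokens      7ms     7ms      0ms
            mielin.obx      3ms     3ms      0ms
       conduc.contacts      1ms     1ms      0ms
    admin.system.roles      0ms     0ms      0ms
"""

records = []
for line in sample.splitlines()[1:]:  # skip the header line
    ns, total, read, write = line.split()
    records.append({"ns": ns, "total": total, "read": read, "write": write})

print(records[0]["ns"], records[0]["read"])
```

For machine consumption, mongotop --json avoids hand-parsing entirely; this sketch is only for the plain-text form.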
#mongodb #mongo #mongotop #read #write
See live disk IO status by using iostat:

iostat -dx 1
The output has many columns. The ones I'm interested in for now are r/s, which is reads per second, and w/s, which is writes per second. To see the read and write sizes per second, look at the rkB/s and wkB/s columns, in the same order.

NOTE: if you don't have iostat on your Linux OS, install it on Debian by issuing the apt-get install sysstat command.
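Since the exact column positions vary between sysstat versions, a sketch that pulls those four columns out of an iostat -dx report by header name is more robust than fixed offsets; the sample device line below is made up for illustration.

```python
# Extract r/s, w/s, rkB/s and wkB/s from `iostat -dx`-style output by
# matching column names in the header row rather than fixed positions.
# The sample text below is made up for illustration.
sample = """\
Device            r/s     w/s     rkB/s     wkB/s
sda              1.20    3.40     48.00    172.00
"""

lines = sample.splitlines()
header = lines[0].split()
stats = {}
for line in lines[1:]:
    fields = line.split()
    row = dict(zip(header, fields))
    stats[row["Device"]] = {k: float(row[k])
                            for k in ("r/s", "w/s", "rkB/s", "wkB/s")}

print(stats["sda"]["r/s"], stats["sda"]["rkB/s"])
```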
command.#linux #debian #iostat #read_per_second #write_per_second #sysstat
Elasticsearch gives the below error:

Config: Error 403 Forbidden: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];

This error may happen when the server storage is totally full and Elasticsearch puts your indexes in read-only mode. If you have enough space now and are sure there is nothing else wrong with Elasticsearch and it behaves normally, remove the read-only mode from the index block:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/.monitoring-*/_settings -d '{"index.blocks.read_only_allow_delete": null}'
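The same request can be issued from Python with only the standard library; a sketch, assuming Elasticsearch listens on localhost:9200 and the .monitoring-* index pattern, as in the curl command above:

```python
import json
import urllib.request

def build_unblock_request(base_url="http://localhost:9200",
                          index_pattern=".monitoring-*"):
    """Build the PUT request that sets index.blocks.read_only_allow_delete
    back to null, mirroring the curl command above."""
    body = json.dumps({"index.blocks.read_only_allow_delete": None}).encode()
    return urllib.request.Request(
        url=f"{base_url}/{index_pattern}/_settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

def clear_read_only_block(**kwargs):
    """Send the request; needs a reachable Elasticsearch instance."""
    with urllib.request.urlopen(build_unblock_request(**kwargs)) as resp:
        return resp.status

# clear_read_only_block()  # uncomment to run against a live cluster
```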
#elasticsearch #read_only #index #cluster_block_exception