Tech C**P
Python and Linux instructor and programmer @alirezastack
Run the latest Elasticsearch image on Linux using Docker:

docker run -d -p 9200:9200 -p 9300:9300 -v /srv/esdata:/usr/share/elasticsearch/data -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4
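
Once the container is up (it can take a few seconds to start), a quick sanity check from the host should return the cluster info JSON:

curl http://localhost:9200
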
#docker #es #elasticsearch
Access an application on a remote machine without having access to its port from your browser. Sometimes a firewall blocks all ports to the outside world (or there is some other reason you cannot connect directly), and you can do port forwarding from the remote machine to your local machine in order to be able to see the application UI. To solve this problem, use ssh port forwarding:

ssh -L 5601:localhost:8085 YOUR_HOST

This binds TCP port 5601 on your local machine. Connections to that port are tunneled over SSH to YOUR_HOST, which then makes a TCP connection to port 8085 on its own localhost. Any other host name or IP address could be used instead of localhost to specify the destination host as seen from YOUR_HOST.

Now if you head over to your browser, you can enter the URL localhost:5601 to see the remote application.
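
If you only need the tunnel and no interactive shell, add -f (go to background) and -N (do not run a remote command):

ssh -fN -L 5601:localhost:8085 YOUR_HOST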

#linux #ssh #port_forwarding #forwarding #local_forwarding
Elasticsearch gives the error below:

Config: Error 403 Forbidden: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];

This error may happen when the server storage is completely full and Elasticsearch puts your indexes into read-only mode. If you have enough space now and are sure there is no other problem and Elasticsearch behaves normally, remove the read-only block from the index:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/.monitoring-*/_settings -d '{"index.blocks.read_only_allow_delete": null}'
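
If other indexes are blocked too, the same call can target all of them at once through the _all endpoint:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'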

#elasticsearch #read_only #index #cluster_block_exception
Delete Elasticsearch indexes older than 1 month:

#!/bin/bash

last_month=$(date +%Y%m%d --date '1 month ago')
old_es_index="myindex_*-$last_month"
echo "Deleting ES indexes $old_es_index..."
curl -X DELETE "http://localhost:9200/$old_es_index"
echo ''

NOTE: the asterisk in the curl URL matches anything between myindex_ and the date, for example myindex_module1-20180520.
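
To purge old indexes automatically, the script can be run from cron; the path below is only a placeholder:

0 2 * * * /path/to/delete_old_es_indexes.sh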

#linux #sysadmin #bash #script #es #elasticsearch #DELETE #purge
Get a specific date, such as 2 days ago, with a bash script:

#!/bin/bash
specific_date=$(date --date="2 days ago" +%Y%m%d)
echo "$specific_date"
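
The same GNU date syntax works for dates in the future as well:

date --date="2 days" +%Y%m%d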

#linux #date #bash_script #bash
To run a MySQL query from the command line, use the -e parameter:

mysql -u <user> -p -e "select * from schema.table"
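
For use in scripts, the mysql client's -B (batch, tab-separated output) and -N (skip column names) flags are handy:

mysql -u <user> -p -B -N -e "select * from schema.table"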

#mysql #command #query
How to check that a field exists in MongoDB and its value is not empty:

db.users.find({ profile_image: {$exists: 1, $ne: ""}  }, { profile_image:1 })

NOTE: $exists checks whether the field exists, and $ne makes sure its value is not empty.
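
The inverse query, users whose profile_image is missing or empty, can be written with $or following the same pattern:

db.users.find({ $or: [ { profile_image: { $exists: 0 } }, { profile_image: "" } ] })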


#mongodb #mongo #find #exists #ne
Months ago we talked about how to capture MongoDB data changes. The problem with that article was that if your script stopped for any reason, you would lose the data changed during the downtime.

Now we have a new solution that lets you resume reading from the point in time you last read. MongoDB uses the BSON Timestamp type internally, for example in the replication oplog. We can store that same Timestamp somewhere and resume reading from the exact point where we left off.

In Python you can import it like below:

from bson.timestamp import Timestamp


Now, to read data from that point, load the timestamp from wherever you saved it and query the oplog starting there:

import pymongo

# the oplog lives in the "local" database of a replica-set member
oplog = pymongo.MongoClient().local.oplog.rs

ts = YOUR_TIMESTAMP_HERE  # the Timestamp you stored last time
cursor = oplog.find({'ts': {'$gt': ts}},
                    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
                    oplog_replay=True)

As you iterate over the cursor and pick up MongoDB changes, store the new timestamp from the ts field of each document you fetch from the oplog.

Keep reading in a while True loop for as long as the cursor is alive; a sketch follows below. The point of this post is that you can store ts somewhere and resume from the exact point where you stored it.
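
A minimal sketch of that loop; handle_change and save_timestamp are hypothetical placeholders for your own processing and persistence logic:

while cursor.alive:
    for doc in cursor:
        handle_change(doc)         # process the change (placeholder)
        save_timestamp(doc['ts'])  # persist ts so a restart can resume here (placeholder)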


If you remember, before this we picked up only the latest change with the query below:

last = oplog.find().sort('$natural', pymongo.DESCENDING).limit(1).next()
ts = last['ts']


We read the last ts and started reading from the most recent record; that is why we were missing data.

#mongodb #mongo #replication #oplog #timestamp #cursor
There is always risk, and often trouble, when altering a production MySQL table. Percona has released a toolkit that contains a command called pt-online-schema-change. It changes the table schema live in production, without downtime.

Installation steps on Debian:

1- wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb

2- sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb

3- sudo apt-get update

4- sudo apt-get install percona-toolkit

Now you have Percona Toolkit on your Debian server. Use the pt-online-schema-change command for your table alterations.
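
To confirm the installation and see the installed version:

pt-online-schema-change --version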

#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
To do a dry run before the real execution, use --dry-run:

pt-online-schema-change --dry-run h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"

Now, after the dry run, you can execute the alter command:

pt-online-schema-change --execute h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"

#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
https://dba.stackexchange.com/questions/187630/problem-with-aborted-pt-online-schema-change-command

#mysql #trigger #percona #online_schema_change
To add multiple columns in one alter table run, use ONE --alter and comma-separate the ADD COLUMN statements, as in the example below.
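
For example (the column names here are placeholders):

pt-online-schema-change --execute h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN foo varchar(30) DEFAULT NULL, ADD COLUMN bar int DEFAULT NULL"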

#percona