https://stackoverflow.com/questions/4309156/commit-specific-lines-of-a-file-to-git
#git #commit
Stack Overflow
Commit specific lines of a file to git
Possible Duplicate:
How can I commit only part of a file in git
How do I commit a few specific line ranges from a file to git, while ignoring other line changes in the same file?
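The standard answer is git's interactive patch mode; a minimal sketch (the file path is a placeholder):
git add -p path/to/file    # step through hunks: y = stage, n = skip
# inside the prompt, 's' splits a hunk into smaller ones, 'e' edits it line by line
git commit -m "commit only the staged lines"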
Run the newest elasticsearch image on Linux using docker:
docker run -d -p 9200:9200 -p 9300:9300 -v /srv/esdata:/usr/share/elasticsearch/data -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4
#docker #es #elasticsearch
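To check that the node came up (assuming the default port mapping above):
curl http://localhost:9200    # should return a JSON banner with the cluster name and version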
Access an application on a remote machine when you don't have access to its port from your browser. Sometimes firewalls block all ports to the outside world, or for other reasons you cannot reach the port directly; then you can do port forwarding from the remote machine to your local machine in order to be able to see the application UI. For solving this problem you can use ssh for port forwarding:
ssh -L 8085:localhost:5601 YOUR_HOST
This makes ssh listen on TCP port 8085 on your local machine. Every connection to it is tunneled to YOUR_HOST, which then makes a TCP connection to port 5601 on its own localhost. Any other host name or IP address could be used instead of localhost to specify the host to connect to.
Now if you head over to your browser you can enter the URL
localhost:8085
to see the remote application.
#linux #ssh #port_forwarding #forwarding #remote_forwarding
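As a general pattern for the -L flag above (the placeholder names are mine):
ssh -L LOCAL_PORT:TARGET_HOST:TARGET_PORT USER@GATEWAY_HOST
# LOCAL_PORT  = port opened on your machine
# TARGET_HOST = host as seen from GATEWAY_HOST ('localhost' means the gateway itself)
# TARGET_PORT = port of the application you want to reach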
Elasticsearch gives the below error:
Config: Error 403 Forbidden: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
This error may happen when the server storage is totally full and elasticsearch puts your indexes in read-only mode. If you have enough space now and are sure elasticsearch has no other problem and behaves normally, remove the read-only mode from the index block:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/.monitoring-*/_settings -d '{"index.blocks.read_only_allow_delete": null}'
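To confirm the block is gone (same index pattern as above):
curl http://localhost:9200/.monitoring-*/_settings    # read_only_allow_delete should no longer appear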
#elasticsearch #read_only #index #cluster_block_exception
http://www.cagrimmett.com/til/2016/07/07/autofill-google-forms.html
#google #google_forms #prefill #prefill_google_forms
Chuck Grimmett's blog
How to Pre-fill Google Forms
Did you know that you can pre-fill Google Forms based on a URL? Did you also know that you can automate it with a database and send personalized forms via services l...
Delete elasticsearch indexes older than 1 month:
#!/bin/bash
last_month=`date +%Y%m%d --date '1 month ago'`
old_es_index="myindex_*-$last_month"
echo "Deleting ES indexes $old_es_index..."
curl -X DELETE "http://localhost:9200/$old_es_index"
echo ''
NOTE: the asterisk in the curl command matches anything between myindex_ and the date suffix, e.g. myindex_module1-20180520.
#linux #sysadmin #bash #script #es #elasticsearch #DELETE #purge
Get a specific date, like 2 days ago, with a bash script:
#!/bin/bash
specific_date=`date --date="2 days ago" +%Y%m%d`
echo $specific_date
#linux #date #bash_script #bash
https://unix.stackexchange.com/questions/31414/how-can-i-pass-a-command-line-argument-into-a-shell-script
#shell #argument #pass_argument #command_line #terminal #linux #bash #script
Unix & Linux Stack Exchange
How can I pass a command line argument into a shell script?
I know that shell scripts just run commands as if they were executed at the command prompt. I'd like to be able to run shell scripts as if they were functions... That is, taking an input value or
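The short answer: arguments arrive in the script as $1, $2, and so on. A minimal sketch (the script name is made up):
#!/bin/bash
# save as greet.sh, run as: ./greet.sh World
name=$1               # first command line argument
echo "Hello, $name"   # prints: Hello, World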
https://dba.stackexchange.com/questions/41050/is-it-safe-to-delete-mysql-bin-files
#mysql #mysql_bin #bin #bin_file #purge
Database Administrators Stack Exchange
Is it safe to delete mysql-bin files?
I have MM Replication in mysql, and I want to squeeze some free space in the box by deleting unnecessary files. I came across these mysql-bin files inside /var/db/mysql/ There are hundreds of those...
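The safe route is to let MySQL purge them instead of deleting the files by hand; standard commands (not quoted from the linked answer):
mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"   # drop binlogs older than a week
mysql -e "SET GLOBAL expire_logs_days = 7;"                   # auto-purge from now on
# with replication, first make sure all replicas are past the logs you purge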
Write pandas dataframe into Google bigQuery:
https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.to_gbq.html#pandas-dataframe-to-gbq
#pandas #bg #bigquery
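A minimal sketch (dataset, table, and project id are placeholders):
import pandas as pd

df = pd.DataFrame({'name': ['a', 'b'], 'count': [1, 2]})
# appends the rows to my_dataset.my_table in the my-project BigQuery project
df.to_gbq('my_dataset.my_table', project_id='my-project', if_exists='append')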
Months ago we talked about how to get MongoDB data changes. The problem with that article was that if your script stopped for any reason, you would lose the data changed during the downtime.
Now we have a new solution that lets you resume from the point in time you last read. MongoDB uses the BSON Timestamp internally, for example in the replication oplog. We can use the same Timestamp and store it somewhere, to resume reading from the exact point we reached last time.
In python you can import it like below:
from bson.timestamp import Timestamp
Now, to read data from that point, load the timestamp from wherever you saved it and query the oplog starting there:
ts = YOUR_TIMESTAMP_HERE
cursor = oplog.find({'ts': {'$gt': ts}},
                    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
                    oplog_replay=True)
After iterating over the cursor and catching MongoDB changes, you can store the new timestamp that resides in the ts field of the documents you fetch from the oplog. Now use a while True loop and keep reading while the cursor is alive. The point of this post is that you can store ts somewhere and resume from the point where you stored it. If you remember, before this we got the last changes with the query below:
last = oplog.find().sort('$natural', pymongo.DESCENDING).limit(1).next()
ts = last['ts']
We read the last ts and started reading from the latest record, which is why we were missing the downtime data.
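A minimal sketch of the resumable loop described above (load_ts/save_ts are hypothetical helpers that persist the Timestamp, e.g. to a file or redis):
import time
import pymongo

client = pymongo.MongoClient()
oplog = client.local.oplog.rs
ts = load_ts()  # hypothetical: load the last saved Timestamp
while True:
    cursor = oplog.find({'ts': {'$gt': ts}},
                        cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
                        oplog_replay=True)
    while cursor.alive:
        for doc in cursor:
            ts = doc['ts']
            save_ts(ts)  # hypothetical: persist the Timestamp for the next run
        time.sleep(1)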
#mongodb #mongo #replication #oplog #timestamp #cursor
There is always a risk, and often downtime, when altering a production MySQL table. Percona has released a toolkit that contains a command called pt-online-schema-change. It will change the table schema live on production without downtime.
Installation steps on Debian:
1- wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
2- sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
3- sudo apt-get update
4- sudo apt-get install percona-toolkit
Now you have Percona Toolkit on your Debian server. Use the command pt-online-schema-change for your table alterations.
#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
In order to do a dry run before the real execution, use --dry-run:
pt-online-schema-change --dry-run h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"
Now after the dry run you can execute the alter command:
pt-online-schema-change --execute h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"
#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
https://dba.stackexchange.com/questions/187630/problem-with-aborted-pt-online-schema-change-command
#mysql #trigger #percona #online_schema_change
Database Administrators Stack Exchange
Problem with aborted PT-online-schema change command
I aborted a pt-online-schema change command to change a table definition. Now, when I run pt-online-schema change again, I get this error:
The table . has trigge...
In order to add multiple columns in the alter table command, use ONE --alter option and comma-separate the ADD COLUMN statements.
#percona
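For example (the column names are made up):
pt-online-schema-change --execute h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN foo varchar(30) DEFAULT NULL, ADD COLUMN bar INT DEFAULT 0"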