━ ━ ━ UNDERCODE ━ ━ ━
🦑Access control
1) Each file in the directory has a unique owner, who holds the broadest access permissions and can also grant and revoke access rights to the file.
2) To prevent forged access to files, the system does not allow any user to write to the file directory directly. Directories can only be maintained by the operating system through its file-management commands; users may perform legitimate directory operations through the system, but direct access to the directory structures is prohibited.
3) Access control list (ACL): a data structure that records, for each entity, all the subjects allowed to access it and the access methods each subject may use.
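As a minimal sketch, the access control table described above can be modeled as a mapping from each entity (file) to the subjects it permits and their access methods. The file names and subjects below are made up for illustration; this is not any particular operating system's implementation.

```python
# Minimal access-control-list sketch: each entity (file) maps to the
# subjects allowed to access it and the access methods each may use.
acl = {
    "payroll.txt": {"alice": {"read", "write"}, "bob": {"read"}},
    "notes.txt": {"bob": {"read", "write"}},
}

def is_allowed(entity, subject, method):
    """Return True if `subject` may perform `method` on `entity`."""
    return method in acl.get(entity, {}).get(subject, set())

# The owner can grant and revoke rights, per point 1) above.
def grant(entity, subject, method):
    acl.setdefault(entity, {}).setdefault(subject, set()).add(method)

def revoke(entity, subject, method):
    acl.get(entity, {}).get(subject, set()).discard(method)
```

A real system would additionally check that the caller performing grant/revoke is the entity's owner.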
@UndercodeTesting
@UndercodeHacking
@UndercodeSecurity
━ ━ ━ UNDERCODE ━ ━ ━
━ ━ ━ UNDERCODE ━ ━ ━
🦑Basic HDFS shell commands
1) Basic commands
List the shell commands available for HDFS operations under Hadoop:
[root@hop01 hadoop2.7]# bin/hadoop fs
[root@hop01 hadoop2.7]# bin/hdfs dfs
hdfs dfs is the HDFS-specific implementation of hadoop fs; for HDFS the two are interchangeable.
2) View the command description
[root@hop01 hadoop2.7]# hadoop fs -help ls
3) Recursively create directories
[root@hop01 hadoop2.7]# hadoop fs -mkdir -p /hopdir/myfile
4) List directories
[root@hop01 hadoop2.7]# hadoop fs -ls /
[root@hop01 hadoop2.7]# hadoop fs -ls /hopdir
5) Cut and paste files
hadoop fs -moveFromLocal /opt/hopfile/java.txt /hopdir/myfile
hadoop fs -ls /hopdir/myfile
6) View file content
hadoop fs -cat /hopdir/myfile/java.txt
hadoop fs -tail /hopdir/myfile/java.txt
7) Append file content
hadoop fs -appendToFile /opt/hopfile/c++.txt /hopdir/myfile/java.txt
8) Copy files
The copyFromLocal command behaves the same as the put command.
hadoop fs -copyFromLocal /opt/hopfile/c++.txt /hopdir
9) Copy HDFS files to local
hadoop fs -copyToLocal /hopdir/myfile/java.txt /opt/hopfile/
10) Copy files in HDFS
hadoop fs -cp /hopdir/myfile/java.txt /hopdir
11) Move files in HDFS
hadoop fs -mv /hopdir/c++.txt /hopdir/myfile
12) Merge and download multiple files
The get and copyToLocal commands have the same effect.
hadoop fs -getmerge /hopdir/myfile/* /opt/merge.txt
13) Delete files
hadoop fs -rm /hopdir/myfile/java.txt
14) View folder size information
hadoop fs -du -s -h /hopdir/myfile
15) Delete a folder
bin/hdfs dfs -rm -r /hopdir/file0703
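The commands above all share one shape: hadoop fs -<action> <flags> <paths>. As a hedged sketch for scripting them, a small wrapper can build the argument vector; actually running it of course requires a configured Hadoop installation on PATH, which this sketch does not assume.

```python
import subprocess

def hdfs_cmd(action, *paths, flags=()):
    """Build an argv list for an HDFS shell action,
    e.g. hdfs_cmd("mkdir", "/hopdir/myfile", flags=("-p",))."""
    return ["hadoop", "fs", f"-{action}", *flags, *paths]

def run_hdfs(action, *paths, flags=()):
    # Shells out to the Hadoop CLI; needs hadoop on PATH to succeed.
    return subprocess.run(hdfs_cmd(action, *paths, flags=flags),
                          capture_output=True, text=True)
```

For example, hdfs_cmd("ls", "/hopdir") reproduces item 4) above as an argv list.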
written by
@UndercodeTesting
@UndercodeHacking
@UndercodeSecurity
━ ━ ━ UNDERCODE ━ ━ ━
━ ━ ━ UNDERCODE ━ ━ ━
🦑Synchronizing MySQL data to an Elasticsearch search engine in full and incremental mode:
#ProTips
Full text of the configuration, by Undercode:
/usr/local/logstash/sync-config/cicadaes.conf
input {
stdin {}
jdbc {
jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/cicada?characterEncoding=utf8"
jdbc_user => "root"
jdbc_password => "root123"
jdbc_driver_library => "/usr/local/logstash/sync-config/mysql-connector-java-5.1.13.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_paging_enabled => "true"
jdbc_page_size => "50000"
jdbc_default_timezone => "Asia/Shanghai"
statement_filepath => "/usr/local/logstash/sync-config/user_sql.sql"
schedule => "* * * * *"
type => "User"
lowercase_column_names => false
record_last_run => true
use_column_value => true
tracking_column => "updateTime"
tracking_column_type => "timestamp"
last_run_metadata_path => "/usr/local/logstash/sync-config/user_last_time"
clean_run => false
}
jdbc {
jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/cicada?characterEncoding=utf8"
jdbc_user => "root"
jdbc_password => "root123"
jdbc_driver_library => "/usr/local/logstash/sync-config/mysql-connector-java-5.1.13.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
jdbc_paging_enabled => "true"
jdbc_page_size => "50000"
jdbc_default_timezone => "Asia/Shanghai"
statement_filepath => "/usr/local/logstash/sync-config/log_sql.sql"
schedule => "* * * * *"
type => "Log"
lowercase_column_names => false
record_last_run => true
use_column_value => true
tracking_column => "updateTime"
tracking_column_type => "timestamp"
last_run_metadata_path => "/usr/local/logstash/sync-config/log_last_time"
clean_run => false
}
}
filter {
json {
source => "message"
remove_field => ["message"]
}
}
output {
if [type] == "User" {
elasticsearch {
hosts => ["127.0.0.1:9200"]
index => "cicada_user_search"
document_type => "user_search_index"
}
}
if [type] == "Log" {
elasticsearch {
hosts => ["127.0.0.1:9200"]
index => "cicada_log_search"
document_type => "log_search_index"
}
}
}
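The incremental behavior in the config above hinges on tracking_column: each scheduled run selects only rows whose updateTime is newer than the high-water mark saved in last_run_metadata_path. A minimal sketch of that loop, using an in-memory SQLite table as a stand-in for MySQL (the table and column names are illustrative, mirroring the config):

```python
import sqlite3

def fetch_incremental(conn, last_run):
    """Return rows changed since last_run, plus the new high-water mark."""
    rows = conn.execute(
        "SELECT id, name, updateTime FROM user WHERE updateTime > ? "
        "ORDER BY updateTime", (last_run,)
    ).fetchall()
    # Persisting this value is what last_run_metadata_path does in Logstash.
    new_last_run = rows[-1][2] if rows else last_run
    return rows, new_last_run

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER, name TEXT, updateTime TEXT)")
conn.executemany("INSERT INTO user VALUES (?, ?, ?)", [
    (1, "a", "2020-01-01 10:00:00"),
    (2, "b", "2020-01-02 10:00:00"),
])

rows, mark = fetch_incremental(conn, "2020-01-01 12:00:00")  # only row 2 qualifies
```

Note that rows updated with a timestamp equal to the saved mark can be missed with a strict > comparison; Logstash's jdbc input handles this via its sql_last_value bookkeeping.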
@UndercodeTesting
@UndercodeHacking
@UndercodeSecurity
━ ━ ━ UNDERCODE ━ ━ ━
━ ━ ━ UNDERCODE ━ ━ ━
🦑Linux: Build RocketMQ 4.3 middleware on CentOS 7 and configure a monitoring console:
1) Download the installation package
URL
https://www.apache.org/dyn/closer.cgi?path=rocketmq/4.3.2/rocketmq-all-4.3.2-bin-release.zip
# We suggest the following mirror site for your download
http://mirrors.tuna.tsinghua.edu.cn/apache/rocketmq/4.3.2/rocketmq-all-4.3.2-bin-release.zip
2) Upload files
[root@localhost mysoft]# pwd
/usr/local/mysoft
[root@localhost mysoft]# unzip rocketmq-all-4.3.2-bin-release.zip
[root@localhost mysoft]# mv rocketmq-all-4.3.2-bin-release rocket4.3
[root@localhost mysoft]# rm -f rocketmq-all-4.3.2-bin-release.zip
3) Modify the relevant configuration
The default RocketMQ configuration is extremely memory-intensive and needs to be reduced for small machines.
1) Modify the runserver.sh configuration,
comment out the original, and add a new configuration
[root@localhost bin]# vim runserver.sh
#JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn128m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
2) Modify the configuration of runbroker.sh,
comment out the original and add a new configuration
[root@localhost bin]# vim runbroker.sh
#JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn128m"
3) Modify the tools.sh configuration,
comment out the original and add a new configuration
[root@localhost bin]# vim tools.sh
#JAVA_OPT="${JAVA_OPT} -server -Xms1g -Xmx1g -Xmn256m -XX:PermSize=128m -XX:MaxPermSize=128m"
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn128m -XX:PermSize=128m -XX:MaxPermSize=128m"
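When editing heap flags like these, keep in mind that the young generation (-Xmn) must be strictly smaller than the total heap (-Xmx), or the JVM will refuse to start. A small sketch can parse the sizes out of a JAVA_OPT line and check that constraint (the flag names are standard HotSpot options; the helper itself is hypothetical):

```python
import re

def parse_heap_flags(java_opt):
    """Extract -Xms/-Xmx/-Xmn sizes from a JAVA_OPT string, normalized to MB."""
    units = {"m": 1, "g": 1024}
    flags = {}
    for name, value, unit in re.findall(r"-X(ms|mx|mn)(\d+)([mg])", java_opt):
        flags["X" + name] = int(value) * units[unit]
    return flags

def young_gen_fits(flags):
    # The young generation (-Xmn) must be strictly smaller than the heap (-Xmx).
    return flags.get("Xmn", 0) < flags.get("Xmx", 0)
```

For instance, parse_heap_flags on the runbroker.sh line above yields 256 MB heap with a 128 MB young generation, which passes the check.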
4) Start the service
Start the services in order: the name server first, then the broker.
nohup sh /usr/local/mysoft/rocket4.3/bin/mqnamesrv &
nohup sh /usr/local/mysoft/rocket4.3/bin/mqbroker -n localhost:9876 &
@UndercodeHacking
@UndercodeSecurity
━ ━ ━ UNDERCODE ━ ━ ━
━ ━ ━ UNDERCODE ━ ━ ━
🦑How to install the Social Engineering Toolkit in Termux?
INSTALLATION & RUN:
1) pkg update && pkg upgrade -y
2) apt install curl -y
3) curl -LO https://raw.githubusercontent.com/Hax4us/setoolkit/master/setoolkit.sh
4) sh setoolkit.sh
5) cd setoolkit
6) ./setup.py install
7) ./setoolkit
#fastTips
━ ━ ━ UNDERCODE ━ ━ ━