β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦The Zero Day Initiative (ZDI) has disclosed a use-after-free memory corruption
error in the ProFTPD Response API code, via the security@proftpd.org address:
-- VULNERABILITY DETAILS ------------------------
1) This vulnerability is located within the ProFTPD daemon and occurs due to the
way the server manages the pools that are used for responses sent by the server
to the client.
2) When handling an exceptional condition, the server fails to restore a pointer
that is used to hold an FTP response, and this can be used to trigger a
controlled memory corruption.
3) The core of this vulnerability is in the following function, located in
src/main.c. The pr_cmd_dispatch_phase function is responsible for dispatching
calls to any of the commands registered by the proftpd modules/ list.
4) Upon entry to this function, the server essentially pushes the state of the
resp_pool so that it can be restored upon return. However, if an error occurs
while executing a PRE_CMD handler, the server fails to restore that state.
5) The push and restore are done with the pr_response_get_pool() and
pr_response_set_pool(...) functions.
src/main.c:659
int pr_cmd_dispatch_phase(cmd_rec *cmd, int phase, int flags) {
  char *cp = NULL;
  int success = 0;
  pool *resp_pool = NULL; // XXX
  ...
  /* Get any previous pool that may be being used by the Response API.
   *
   * In most cases, this will be NULL. However, if proftpd is in the
   * midst of a data transfer when a command comes in on the control
   * connection, then the pool in use will be that of the data transfer
   * instigating command. We want to stash that pool, so that after this
   * command is dispatched, we can return the pool of the old command.
   * Otherwise, Bad Things (segfaults) happen.
   */
  resp_pool = pr_response_get_pool(); // XXX: local that's cmd->pool

  /* Set the pool used by the Response API for this command. */
  pr_response_set_pool(cmd->pool); // XXX
  ...
  if (phase == 0) {
    /* First, dispatch to wildcard PRE_CMD handlers. */
    success = _dispatch(cmd, PRE_CMD, FALSE, C_ANY);
    if (!success) /* run other pre_cmd */
      success = _dispatch(cmd, PRE_CMD, FALSE, NULL);

    if (success < 0) {
      /* Dispatch to POST_CMD_ERR handlers as well. */
      _dispatch(cmd, POST_CMD_ERR, FALSE, C_ANY);
      _dispatch(cmd, POST_CMD_ERR, FALSE, NULL);
      _dispatch(cmd, LOG_CMD_ERR, FALSE, C_ANY);
      _dispatch(cmd, LOG_CMD_ERR, FALSE, NULL);
      pr_response_flush(&resp_err_list);
      return success; // XXX: early return skips the resp_pool restore below
    }
    ...
  } else {
    switch (phase) {
      case PRE_CMD:
      case POST_CMD:
      case POST_CMD_ERR:
        success = _dispatch(cmd, phase, FALSE, C_ANY);
        if (!success)
          success = _dispatch(cmd, phase, FALSE, NULL);
        break;

      case CMD:
        success = _dispatch(cmd, phase, FALSE, C_ANY);
        if (!success)
          success = _dispatch(cmd, phase, TRUE, NULL);
        break;

      case LOG_CMD:
      case LOG_CMD_ERR:
        (void) _dispatch(cmd, phase, FALSE, C_ANY);
        (void) _dispatch(cmd, phase, FALSE, NULL);
        break;

      default:
        errno = EINVAL;
        return -1; // XXX: skips last state
    }
  }
  ...
  /* Restore any previous pool to the Response API. */
  pr_response_set_pool(resp_pool); // XXX: local
  return success;
}
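The flaw above is a pure control-flow bug: state is "pushed" on entry and only "popped" on the success path. Here is a minimal, hypothetical Python sketch of the same pattern (all names invented for illustration; this is not ProFTPD code):

# Hypothetical sketch of the save/restore flaw in pr_cmd_dispatch_phase.
_current_resp_pool = None            # stands in for the Response API's global pool

def response_get_pool():
    return _current_resp_pool

def response_set_pool(pool):
    global _current_resp_pool
    _current_resp_pool = pool

def dispatch_phase(cmd_pool, pre_cmd_ok):
    saved = response_get_pool()      # "push" the previous pool
    response_set_pool(cmd_pool)      # point the Response API at this command
    if not pre_cmd_ok:
        # BUG: the early error return skips the restore below, so the
        # global keeps referencing cmd_pool after the caller frees it.
        return -1
    response_set_pool(saved)         # restore happens only on the success path
    return 0

After dispatch_phase(pool, pre_cmd_ok=False) returns, response_get_pool() still yields the doomed pool, which is exactly the dangling reference the advisory describes.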
6) To trigger this vulnerability, more than one pool must exist. This can be
done by starting an FTP data transfer via xfer_stor or xfer_recv. Once inside a
data transfer, the server enters the pr_data_xfer function with a valid pool.
Immediately after, the server returns the old pool to ProFTPD's allocation list
but still globally retains a reference to it as a response buffer. The next
time a buffer is allocated, the server returns this memory to the caller. If a
response occurs, it will overwrite the data that was allocated, triggering
memory corruption.
src/data.c:875
int pr_data_xfer(char *cl_buf, int cl_size) {
  int len = 0;
  int total = 0;
  int res = 0;
  /* Poll the control channel for any commands we should handle, like
   * QUIT or ABOR.
   */
  ...
  for (ch = cmd->argv[0]; *ch; ch++)
    *ch = toupper(*ch);

  /* Only handle commands which do not involve data transfers; we
   * already have a data transfer in progress. For any data transfer
   * command, send a 450 ("busy") reply. Looks like almost all of the
   * data transfer commands accept that response, as per RFC959.
   *
   * We also prevent the EPRT, EPSV, PASV, and PORT commands, since
   * they will also interfere with the current data transfer. In doing
   * so, we break RFC compliance a little; RFC959 does not allow a
   * response code of 450 for those commands (although it should).
   */
  if (strcmp(cmd->argv[0], C_APPE) == 0 ||
      strcmp(cmd->argv[0], C_LIST) == 0 ||
      strcmp(cmd->argv[0], C_MLSD) == 0 ||
      strcmp(cmd->argv[0], C_NLST) == 0 ||
      strcmp(cmd->argv[0], C_RETR) == 0 ||
      strcmp(cmd->argv[0], C_STOR) == 0 ||
      strcmp(cmd->argv[0], C_STOU) == 0 ||
      strcmp(cmd->argv[0], C_RNFR) == 0 ||
      strcmp(cmd->argv[0], C_RNTO) == 0 ||
      strcmp(cmd->argv[0], C_PORT) == 0 ||
      strcmp(cmd->argv[0], C_EPRT) == 0 ||
      strcmp(cmd->argv[0], C_PASV) == 0 ||
      strcmp(cmd->argv[0], C_EPSV) == 0) {
    pool *resp_pool;
    ...
  } else if (strcmp(cmd->argv[0], C_NOOP) == 0) {
    ...
  } else {
    pr_cmd_dispatch(cmd); // XXX: swaps the Response API pool to cmd->pool
    ...
    destroy_pool(cmd->pool); // XXX: freed while the Response API may still reference it
    ...
  }
  return (len < 0 ? -1 : len);
}
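To make the reuse step concrete, here is a small, hypothetical freelist simulation (invented names, not ProFTPD code): once destroy_pool returns memory to the allocator while the Response API still holds a reference, the next allocation hands the same block to a new owner, and a late response write clobbers it.

# Hypothetical freelist simulation of the reuse that follows destroy_pool.
class Pool:
    def __init__(self, tag):
        self.tag = tag
        self.data = bytearray(16)

free_list = []                       # "destroyed" pools awaiting reuse

def destroy_pool(pool):
    free_list.append(pool)           # memory returns to the allocator...

def make_pool(tag):
    if free_list:
        pool = free_list.pop()       # ...and is recycled to the next caller
        pool.tag = tag
        return pool
    return Pool(tag)

cmd_pool = make_pool("command during transfer")
resp_pool = cmd_pool                 # Response API stashes the pool
destroy_pool(cmd_pool)               # freed without clearing resp_pool
victim = make_pool("next allocation")        # same block, new owner
resp_pool.data[:5] = b"450 \r"               # late response write...
print(victim.data[:5])                       # ...corrupts the new owner's data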
@UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
Forwarded from iUNDERCODE - iOs JAILBREAK & MODS
π¦iOS/iPadOS 13.5.1 released: fixes the unc0ver jailbreak vulnerability; Apple Pay now supports Octopus
1) Today Apple released the iOS/iPadOS 13.5.1 and watchOS 6.2.6 system updates, a security fix for iOS/iPadOS 13.5 released two weeks ago. The vulnerability used by the unc0ver jailbreak is explicitly mentioned in the security update log, so users who want to jailbreak should avoid upgrading.
2) The iOS/iPadOS 13.5.1 installation package is 77.5 MB and consists mainly of security improvements, but one new feature is that Apple Pay now supports Octopus in Hong Kong, China.
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
Forwarded from .
This Channel Link is:
Hacked/Revoked due to harm on Undercode
To learn hacking from expert white hats:
T.me/UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ People often use passwords containing their name, mobile number, etc.; such passwords can be easily guessed by an attacker.
BASIC HACKING TIPS : git sources :
1) Make stronger passwords.
Many people use the same password everywhere they have an account (or want to create one). This is suicidal. You can reuse a password, but only if you're logging into a trusted website/app. Trusted websites store your passwords in encrypted (hashed) form, so even if an attacker gains access to the database, they cannot log in to your account, because the password can't simply be decrypted. Now suppose some new website, one that is not trusted (at least by you :p), may or may not store passwords with any encryption. If it doesn't, an attacker can simply log in to your other accounts (Google, Facebook, etc.) if you're using the same password.
2) Don't use your regular password if the website/app you're logging in to is not trusted, at least by you.
If you haven't heard about phishing, you should. Phishing is an old, traditional way to steal your account password. The basic idea behind phishing is to create a copy of a login page or a whole website and let the user log in, so as to capture their credentials. E.g., an attacker creates a copy of the Gmail page that looks exactly like the original, but is coded so that it stores credentials whenever someone tries to log in through it. The attacker then shares the link to his phishing page somehow (through mails, messages, web links, etc.) and collects the credentials of every user who tried to log in through it. Check out this Phishing Tutorial
3) Always confirm the URL of the website you're logging in to. Don't try to log in with your real credentials on some fake, similar-looking page on some other domain.
I.e., don't try to log in on phishing.etc/github (some GitHub phishing page) with your real github.com credentials. Note that git-hub.com and github.com are two different domains. :D
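To illustrate tip 3 programmatically, here is a minimal sketch using only the Python standard library; the allow-list is hypothetical and stands in for whatever domains you personally trust.

# Minimal sketch: check a login URL's host against a personal allow-list.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"github.com", "accounts.google.com"}   # hypothetical list

def is_trusted_login_url(url):
    host = urlparse(url).hostname or ""
    return host.lower() in TRUSTED_HOSTS

print(is_trusted_login_url("https://github.com/login"))     # True
print(is_trusted_login_url("https://git-hub.com/login"))    # False (lookalike)
print(is_trusted_login_url("https://phishing.etc/github"))  # False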
@UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦The best free YouTube downloaders for Windows
https://www.4kdownload.com/products/product-videodownloader
https://www.winxdvd.com/youtube-downloader/?__c=1
https://www.any-video-converter.com/products/for_video_free/?__c=1
https://www.dvdvideosoft.com/products/dvd/Free-YouTube-Download.htm
https://www.atube.me/
@UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ Detailed Nginx status monitoring and log analysis
A) Nginx status monitoring
1) Nginx provides a built-in status information page that can be used to monitor overall access to Nginx. This function is implemented by the ngx_http_stub_status_module module.
2) Use the nginx -V 2>&1 | grep -o with-http_stub_status_module command to check whether the current Nginx build has the status function. If it outputs with-http_stub_status_module, the module is present; if not, you can add the module at compile time.
3) By default, status is turned off; we need to turn it on and specify a URI for accessing the data, as in the config below.
> the code :
server {
    listen 80;
    server_name default_server;

    location /status {
        stub_status on;
        allow 114.247.125.227;
    }
}
4) The allow directive restricts the status endpoint to the specified IPs; removing it means no restriction.
5) After restarting Nginx, visit http://{IP}/status in a browser to view the status monitoring information.
6) Active connections: the current number of active client connections, including waiting ones (roughly, TCP connections in the ESTABLISHED and SYN_ACK states)
7) accepts: the total number of accepted client connections, i.e. connections that have been received by the worker processes
8) handled: the total number of connections that have been handled
9) requests: the total number of HTTP requests from clients
10) Reading: the number of connections currently reading the HTTP request header
11) Writing: the number of connections currently writing a response (the HTTP response header is being written)
12) Waiting: the number of idle client connections currently waiting for a request; the waiting time is the interval between Reading and Writing
13) After collecting this Nginx data, you can feed it to a monitoring tool; a minimal collector is sketched below.
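As an illustration of point 13, here is a minimal collector sketch using only the Python standard library; the URL is an assumption and should point at the /status location configured above.

# Minimal sketch: scrape the stub_status page into a dict.
import re
import urllib.request

def fetch_nginx_status(url="http://127.0.0.1/status"):   # hypothetical host
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode("ascii")
    # stub_status emits exactly seven integers, in this order:
    # Active connections, accepts, handled, requests, Reading, Writing, Waiting
    nums = [int(n) for n in re.findall(r"\d+", text)]
    keys = ["active", "accepts", "handled", "requests",
            "reading", "writing", "waiting"]
    return dict(zip(keys, nums))

if __name__ == "__main__":
    print(fetch_nginx_status())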
written by Undercode
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
B) Log analysis:
1) The default log format configuration of Nginx can be found in /etc/nginx/nginx.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" $request_time $upstream_response_time';
2) Example log lines:
39.105.66.117 - mp [11/Sep/2019:19:03:01 +0800] "POST /salesplatform-gateway/users HTTP/1.1" 200 575 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_161)" "-" 0.040 0.040
39.105.66.117 - mp [11/Sep/2019:19:03:08 +0800] "POST /salesplatform-gateway/users HTTP/1.1" 200 575 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_161)" "-" 0.008 0.008
π¦ Log format variables:
1) $remote_addr: client IP address
2) $remote_user: the user name of the remote client
3) $time_local: access time and time zone
4) $request: the request method and URL
5) $status: response status code
6) $body_bytes_sent: number of bytes of response body sent to the client
7) $http_referer: the link the user came from
8) $http_user_agent: the browser information reported by the user
9) $http_x_forwarded_for: records the client's IP address when requests pass through a proxy server
10) $request_time: the time from receiving the first byte of the user's request to sending the last byte of the response; i.e. it includes the time to receive the request data, the back-end response time, and the time to send the response to the client
11) $upstream_response_time: time to receive the response from the upstream server
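A minimal parsing sketch for this format, using only the Python standard library (group names mirror the log_format variables above):

# Minimal sketch: parse one line of the "main" log format shown above.
import re

LOG_RE = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'"(?P<http_x_forwarded_for>[^"]*)" '
    r'(?P<request_time>[\d.]+) (?P<upstream_response_time>[\d.-]+)')

line = ('39.105.66.117 - mp [11/Sep/2019:19:03:01 +0800] '
        '"POST /salesplatform-gateway/users HTTP/1.1" 200 575 "-" '
        '"Apache-HttpClient/4.5.5 (Java/1.8.0_161)" "-" 0.040 0.040')

m = LOG_RE.match(line)
if m:
    fields = m.groupdict()
    print(fields["request"], fields["status"], fields["request_time"])
    # POST /salesplatform-gateway/users HTTP/1.1 200 0.040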
written by Undercode
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦Common analysis commands:
πππ£'π’ π’π£ππ‘π£ :
1) Count unique visitors (UV) by client IP
awk '{print $1}' paycenteraccess.log | sort -n | uniq | wc -l
2) Query the most frequently visiting IPs (top 10)
awk '{print $1}' /var/log/nginx/access.log | sort -n |uniq -c | sort -rn | head -n 10
3) Count the IPs seen in a given time period (01:00 to 08:00)
awk '$4 >="[25/Mar/2020:01:00:00" && $4 <="[25/Mar/2020:08:00:00"' /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c| sort -nr |wc -l
4) View IPs with more than 100 visits
awk '{print $1}' /var/log/nginx/access.log | sort -n |uniq -c |awk '{if($1 >100) print $0}'|sort -rn
5) View the URLs visited by a specific IP, with counts
grep "39.105.67.140" /var/log/nginx/access.log|awk '{print $7}' |sort |uniq -c |sort -n -k 1 -r
6) Count page views (PV) by URL
cat /var/log/nginx/access.log |awk '{print $7}' |wc -l
7) Query the most frequently visited URLs (top 10)
awk '{print $7}' /var/log/nginx/access.log | sort |uniq -c | sort -rn | head -n 10
8) View the most frequently visited URLs, excluding /api/appid (top 10)
grep -v '/api/appid' /var/log/nginx/access.log|awk '{print $7}' | sort |uniq -c | sort -rn | head -n 10
9) View pages with more than 100 page views
cat /var/log/nginx/access.log | cut -d ' ' -f 7 | sort |uniq -c | awk '{if ($1 > 100) print $0}' | less
10) View the most visited pages among the most recent 1000 records
tail -1000 /var/log/nginx/access.log |awk '{print $7}'|sort|uniq -c|sort -nr|less
11) Count requests per hour; top 10 time points (accurate to the hour)
awk '{print $4}' /var/log/nginx/access.log |cut -c 14-15|sort|uniq -c|sort -nr|head -n 10
12) Count requests per minute; top 10 time points (accurate to the minute)
awk '{print $4}' /var/log/nginx/access.log |cut -c 14-18|sort|uniq -c|sort -nr|head -n 10
13) Count requests per second; top 10 time points (accurate to the second)
awk '{print $4}' /var/log/nginx/access.log |cut -c 14-21|sort|uniq -c|sort -nr|head -n 10
14) Find logs for a specified time period
awk '$4 >="[25/Mar/2020:01:00:00" && $4 <="[25/Mar/2020:08:00:00"' /var/log/nginx/access.log
15) List URLs with response time over 0.6 seconds (top 10)
cat /var/log/nginx/access.log |awk '(substr($NF,2,5) > 0.6){print $4,$7,substr($NF,2,5)}' | awk -F '"' '{print $1,$2,$3}' |sort -k3 -rn | head -10
16) List the times at which /api/appid requests exceeded 0.6 seconds
cat /var/log/nginx/access.log |awk '(substr($NF,2,5) > 0.6 && $7~/\/api\/appid/){print $4,$7,substr($NF,2,5)}' | awk -F '"' '{print $1,$2,$3}' |sort -k3 -rn | head -10
17) Get the top 10 most time-consuming requests (time, URL, duration)
cat /var/log/nginx/access.log |awk '{print $4,$7,substr($NF,2,5)}' | awk -F '"' '{print $1,$2,$3}' | sort -k3 -rn | head -10
Several of these stats can also be computed in one pass, as sketched below.
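For comparison, a minimal one-pass sketch that computes items 2, 7, and 11 together (the path and field positions match the default log format above):

# Minimal sketch: one pass over the access log for top IPs, top URLs,
# and requests per hour. Field positions match the default format above.
from collections import Counter

ips, urls, hours = Counter(), Counter(), Counter()

with open("/var/log/nginx/access.log") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 7:
            continue
        ips[parts[0]] += 1                  # $remote_addr
        urls[parts[6]] += 1                 # request URL inside "$request"
        hours[parts[3][13:15]] += 1         # hour digits from [$time_local]

print(ips.most_common(10))    # item 2: most frequent IPs
print(urls.most_common(10))   # item 7: most frequent URLs
print(hours.most_common(10))  # item 11: busiest hours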
written by Undercode
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦Animation Fundamentals (1.73 GB), purchased by Undercode
https://cloud.blender.org/p/animation-fundamentals/
> Download <
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦Updated: generate unlimited Instagram accounts :
INSTALLATION & RUN :
1) Create a new virtualenv
2) Clone https://github.com/FeezyHendrix/Insta-mass-account-creator
3) Install the requirements:
run pip install -r requirements.txt
4) Download the Chrome driver and add it to your PATH
5) Open config.py in the modules directory and configure it (a sketch of such a config follows these steps)
π¦Usage (config options)
chromedriver_path     path to chromedriver
bot_type              default is 1 to use Selenium to create accounts, or 2 to use Python requests
password              general password for each generated account, used to log in
use_local_ip_address  use the local IP to create accounts; default is False
use_custom_proxy      use your own custom proxies; default is False. Change to True and add a list of proxies to Assets/proxies.txt
amount_of_account     number of accounts to create
proxy_file_path       path to the proxy file (.txt format)
amount_per_proxy      for custom proxies, the number of accounts to create per proxy
email_domain          custom domain name, useful for using your own email domain
country               the country of the account
identity              the full name of the created accounts
6) run python creator.py
7) All usernames are stored in Assets/usernames.txt
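A hypothetical config.py sketch based on the options above; the option names follow the repo's documented list, but every value here is a placeholder, not the repo's actual default.

# Hypothetical config.py sketch -- names from the list above, values invented.
chromedriver_path = "/usr/local/bin/chromedriver"
bot_type = 1                      # 1 = Selenium, 2 = Python requests
password = "ChangeMe123!"         # shared password for generated accounts
use_local_ip_address = False
use_custom_proxy = True           # proxies listed in Assets/proxies.txt
amount_of_account = 10
proxy_file_path = "Assets/proxies.txt"
amount_per_proxy = 2
email_domain = "example.com"      # hypothetical custom mail domain
country = "US"
identity = "John Doe"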
@UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β