Forwarded from .
This channel link is:
Hacked/Revoked due to harm done to Undercode.
To learn hacking from expert white hats:
T.me/UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ People often use passwords containing their name, mobile number, etc. Such passwords are easy for an attacker to guess.
BASIC HACKING TIPS (git sources) :
1) Use stronger passwords.
Many people use the same password everywhere they have an account (or want to create one). This is suicidal. You can reuse a password, but only when you are logging into a trusted website/app. Trusted websites store your passwords hashed (an encrypted, non-recoverable form), so even if an attacker gains access to the database, they cannot log in to your account, because the original password cannot simply be recovered. A new website that you do not (yet) trust may or may not protect stored passwords at all. If it doesn't, an attacker can simply log in to your other accounts (Google, Facebook, etc.) if you reused the same password. (A minimal hashing sketch follows tip 3 below.)
2) Don't use your regular password on a website/app you don't trust.
If you haven't heard about phishing, you should. Phishing is an old, traditional way to steal account passwords. The basic idea is to create a copy of a login page (or a whole website) and let users log in so that their credentials are saved. E.g. an attacker creates a copy of the Gmail login page that looks exactly like the original but is coded to store the credentials whenever someone tries to log in through it. The attacker then shares the link to the phishing page somehow (mails, messages, web links, etc.) and ends up with the credentials of every user who tried to log in through it. Check out this Phishing Tutorial.
3) Always confirm the URL of the website you're logging in to. Don't enter your real credentials on a fake, similar-looking page under a different domain.
i.e. don't log in on phishing.etc/github (some GitHub phishing page) with your real github.com credentials. Note that git-hub.com and github.com are two different domains. :D
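To make the hashed-storage point in tip 1 concrete, here is a minimal sketch of salted password hashing with Python's standard library; the function names and parameters are illustrative, not any particular site's implementation:

import hashlib, os

def hash_password(password, salt=None):
    # A site stores only the salt and the digest, never the password itself.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
    return salt, digest

def verify_password(password, salt, digest):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000) == digest

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False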
@UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ The best free YouTube downloaders for Windows :
https://www.4kdownload.com/products/product-videodownloader
https://www.winxdvd.com/youtube-downloader/?__c=1
https://www.any-video-converter.com/products/for_video_free/?__c=1
https://www.dvdvideosoft.com/products/dvd/Free-YouTube-Download.htm
https://www.atube.me/
@UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ Detailed Nginx status monitoring and log analysis
A) Nginx status monitoring
1) Nginx provides a built-in status page that can be used to monitor its overall traffic. This feature is implemented by the ngx_http_stub_status_module module.
2) Run nginx -V 2>&1 | grep -o with-http_stub_status_module to check whether the current Nginx build includes the status module. If it prints with-http_stub_status_module, it does; if not, you can add the module at compile time (--with-http_stub_status_module).
3) The status page is disabled by default; enable it and assign a URI for it, as in the server block below.
> the code :
server {
    listen 80;
    server_name default_server;

    location /status {
        stub_status on;
        allow 114.247.125.227;
        deny all;
    }
}
4) The allow/deny directives restrict access to the status page to the listed IP only; remove them if you don't want any restriction.
5) After reloading Nginx, visit http://{IP}/status in a browser to view the status information.
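The page is plain text; with the default stub_status layout it looks roughly like this (the numbers are illustrative):

Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106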
6) Active connections: the current number of open client connections, including idle (waiting) connections
7) accepts: the total number of client connections accepted by the worker processes
8) handled: the total number of handled connections; normally equal to accepts unless a resource limit was hit
9) requests: the total number of client HTTP requests
10) Reading: the number of connections where Nginx is currently reading the request header
11) Writing: the number of connections where Nginx is writing the response back to the client
12) Waiting: the number of idle keep-alive connections waiting for a request (Active connections minus Reading and Writing)
13) Once collected, this data can be fed into a monitoring tool.
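For example, a minimal Python sketch that collects these counters from the page above; it assumes stub_status is reachable locally at /status (as configured earlier) and uses the default output layout shown above:

import urllib.request

def nginx_status(url="http://127.0.0.1/status"):
    # Fetch the stub_status page and pick the counters out of its four lines.
    lines = urllib.request.urlopen(url, timeout=5).read().decode().splitlines()
    active = int(lines[0].split(":")[1])               # "Active connections: N"
    accepts, handled, requests = (int(n) for n in lines[2].split())
    rww = lines[3].split()                             # "Reading: a Writing: b Waiting: c"
    return {"active": active, "accepts": accepts, "handled": handled,
            "requests": requests, "reading": int(rww[1]),
            "writing": int(rww[3]), "waiting": int(rww[5])}

print(nginx_status())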
written by Undercode
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
B) Log analysis :
1) The default log format configuration of Nginx can be found in /etc/nginx/nginx.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" $request_time $upstream_response_time';
2) Examples of printed log lines (a short parsing sketch follows the variable list below):
39.105.66.117 - mp [11/Sep/2019:19:03:01 +0800] "POST /salesplatform-gateway/users HTTP/1.1" 200 575 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_161)" "-" 0.040 0.040
39.105.66.117 - mp [11/Sep/2019:19:03:08 +0800] "POST /salesplatform-gateway/users HTTP/1.1" 200 575 "-" "Apache-HttpClient/4.5.5 (Java/1.8.0_161)" "-" 0.008 0.008
π¦ Log variables :
1) $remote_addr: the client IP address
2) $remote_user: the user name supplied by the remote client (e.g. via basic auth)
3) $time_local: the local time and time zone of the request
4) $request: the request line (method, URL and protocol)
5) $status: the response status code
6) $body_bytes_sent: the number of bytes of the response body sent to the client
7) $http_referer: the page the request came from (Referer header)
8) $http_user_agent: the client's browser / User-Agent string
9) $http_x_forwarded_for: the client IP as recorded through proxy servers (X-Forwarded-For header)
10) $request_time: the time from receiving the first byte of the request to sending the last byte of the response; it therefore includes receiving the client request, the back-end processing time, and sending the response to the client
11) $upstream_response_time: the time spent waiting for the response from the upstream server
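A minimal parsing sketch for the log_format above, using Python's re module; the group names mirror the Nginx variables and the pattern assumes lines exactly like the examples shown earlier:

import re

LOG_PATTERN = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'"(?P<http_x_forwarded_for>[^"]*)" '
    r'(?P<request_time>\S+) (?P<upstream_response_time>\S+)')

line = ('39.105.66.117 - mp [11/Sep/2019:19:03:01 +0800] '
        '"POST /salesplatform-gateway/users HTTP/1.1" 200 575 "-" '
        '"Apache-HttpClient/4.5.5 (Java/1.8.0_161)" "-" 0.040 0.040')

match = LOG_PATTERN.match(line)
if match:
    fields = match.groupdict()
    print(fields["remote_addr"], fields["request"], fields["request_time"])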
written by Undercode
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ Common analysis commands :
Let's start :
1) Count unique visitors (UV) by client IP
awk '{print $1}' paycenteraccess.log | sort -n | uniq | wc -l
2) Query the most frequently visiting IPs (top 10)
awk '{print $1}' /var/log/nginx/access.log | sort -n |uniq -c | sort -rn | head -n 10
3) Count the number of distinct client IPs in a given time window (01:00-08:00)
awk '$4 >="[25/Mar/2020:01:00:00" && $4 <="[25/Mar/2020:08:00:00"' /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c| sort -nr |wc -l
4) List IPs with more than 100 visits
awk '{print $1}' /var/log/nginx/access.log | sort -n |uniq -c |awk '{if($1 >100) print $0}'|sort -rn
5) View the URLs requested by a specific IP and their visit counts
grep "39.105.67.140" /var/log/nginx/access.log|awk '{print $7}' |sort |uniq -c |sort -n -k 1 -r
6) Count total page views (PV) based on the requested URLs
cat /var/log/nginx/access.log |awk '{print $7}' |wc -l
7) Query the most frequently visited URL (top 10)
awk '{print $7}' /var/log/nginx/access.log | sort |uniq -c | sort -rn | head -n 10
8) View the most frequently visited URLs, excluding /api/appid (top 10)
grep -v '/api/appid' /var/log/nginx/access.log|awk '{print $7}' | sort |uniq -c | sort -rn | head -n 10
9) View pages with more than 100 page views
cat /var/log/nginx/access.log | cut -d ' ' -f 7 | sort |uniq -c | awk '{if ($1 > 100) print $0}' | less
10) Among the most recent 1000 entries, show the most visited pages
tail -1000 /var/log/nginx/access.log |awk '{print $7}'|sort|uniq -c|sort -nr|less
11) Count requests per hour: the top 10 busiest hours
awk '{print $4}' /var/log/nginx/access.log |cut -c 14-15|sort|uniq -c|sort -nr|head -n 10
12) Count requests per minute: the top 10 busiest minutes
awk '{print $4}' /var/log/nginx/access.log |cut -c 14-18|sort|uniq -c|sort -nr|head -n 10
13) Count requests per second: the top 10 busiest seconds
awk '{print $4}' /var/log/nginx/access.log |cut -c 14-21|sort|uniq -c|sort -nr|head -n 10
14) Find logs for a specified period of time
awk '$4 >="[25/Mar/2020:01:00:00" && $4 <="[25/Mar/2020:08:00:00"' /var/log/nginx/access.log
15) List URLs with response time over 0.6 seconds, display the first 10 (substr($NF,2,5) strips the first character of the last field; adjust the offsets to your own log format)
cat /var/log/nginx/access.log |awk '(substr($NF,2,5) > 0.6){print $4,$7,substr($NF,2,5)}' | awk -F '"' '{print $1,$2,$3}' |sort -k3 -rn | head -10
16) List the time points when the /api/appid request time exceeds 0.6 seconds
cat /var/log/nginx/access.log |awk '(substr($NF,2,5) > 0.6 && $7~/\/api\/appid/){print $4,$7,substr($NF,2,5)}' | awk -F '"' '{print $1,$2,$3}' |sort -k3 -rn | head -10
17) Get the top 10 most time-consuming requests: time, URL and duration
cat /var/log/nginx/access.log |awk '{print $4,$7,substr($NF,2,5)}' | awk -F '"' '{print $1,$2,$3}' | sort -k3 -rn | head -10
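If awk and sort are not available, the same kind of count can be done in Python; a minimal sketch of command 7 (top 10 requested URLs), assuming the URL is the 7th whitespace-separated field as in the log format above:

from collections import Counter

counts = Counter()
with open("/var/log/nginx/access.log") as log:
    for entry in log:
        fields = entry.split()
        if len(fields) > 6:
            counts[fields[6]] += 1   # $7 in awk = the request URL

for url, hits in counts.most_common(10):
    print(hits, url)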
written by Undercode
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ Animation Fundamentals (1.73 GB), purchased by Undercode :
https://cloud.blender.org/p/animation-fundamentals/
> Download <
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
π¦ Updated: generate unlimited Instagram accounts :
INSTALLATION & RUN :
1) Create a new virtualenv
2) Clone https://github.com/FeezyHendrix/Insta-mass-account-creator
3) Install the requirements: run pip install -r requirements.txt
4) Download the Chrome driver and add it to your PATH
5) Open config.py in the modules folder and edit the Config fields (an illustrative sketch follows step 7)
π¦ Usage (Config fields) :
chromedriver_path : path to the ChromeDriver binary
bot_type : 1 (default) to create accounts with Selenium, or 2 to use Python requests
password : the common password set for each generated account so you can log in
use_local_ip_address : use your local IP to create accounts (default False)
use_custom_proxy : use your own proxies (default False); set to True and add a list of proxies to Assets/proxies.txt
amount_of_account : number of accounts to create
proxy_file_path : path to the proxy file (.txt format)
amount_per_proxy : for custom proxies, the number of accounts to create per proxy
email_domain : custom email domain, useful for using your own email domain
country : the country of the accounts
identity : the full name used for the created accounts
6) Run: python creator.py
7) All generated usernames are stored in Assets/usernames.txt
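A minimal illustrative sketch of what these Config values could look like; the field names are taken from the list above, the values are examples, and the exact structure of the repository's config.py may differ, so check it before editing:

# Illustrative only: field names come from the list above, values are examples.
# The real modules/config.py in the repository may be structured differently.
class Config:
    chromedriver_path = "/usr/local/bin/chromedriver"
    bot_type = 1                       # 1 = Selenium, 2 = Python requests
    password = "ChangeMe_123!"         # password set for every generated account
    use_local_ip_address = False
    use_custom_proxy = True            # proxies listed in Assets/proxies.txt
    proxy_file_path = "Assets/proxies.txt"
    amount_of_account = 10
    amount_per_proxy = 2
    email_domain = "example.com"
    country = "US"
    identity = "John Doe"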
@UndercodeTesting
β β β ο½ππ»βΊπ«Δπ¬πβ β β β
UNDERCODE COMMUNITY
ANDROID-7-GMAIL EXPLOITS-.py
not uploaded to github