UNDERCODE COMMUNITY
πŸ¦‘ Undercode Cyber World!
@UndercodeCommunity


1️⃣ The world's first platform that collects and analyzes every new hacking method
+ AI practice
@Undercode_Testing

2️⃣ Cyber & Tech NEWS:
@Undercode_News

3️⃣ CVE @Daily_CVE

✨ Web & Services:
β†’ Undercode.help
Forwarded from UNDERCODE TESTING
πŸ¦‘Bug bounty tips ✨

XSS πŸ’° Methodology πŸ’―

1- Pick a target

2- Do full-depth subdomain enumeration using Subfinder (with API keys configured), combine it with webcopilot, SubDomz, and other subdomain finder tools in a one-liner, also perform subdomain brute-forcing once, and save everything to a file (a combined sketch follows step 4).

3- subfinder -d example.com -all >> subs.txt

4- cat subs.txt | httpx -o alive-subs.txt
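A minimal sketch of the combined one-liner from step 2, merging two enumerators and deduplicating with anew (assetfinder and anew here stand in for the webcopilot/SubDomz tools mentioned above; swap in whichever enumerators you actually use):

# Merge results from multiple enumerators into one deduplicated file
subfinder -d example.com -all -silent | anew subs.txt
assetfinder --subs-only example.com | anew subs.txt

# Probe which hosts actually respond over HTTP(S)
cat subs.txt | httpx -silent -o alive-subs.txt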



#Method-1 (Using Dalfox)

1- katana -u alive-subs.txt -o endpoints-1.txt

2- waybackurls http://example.com | grep = | tee endpoints-2.txt

3- ./gau example.com >> endpoints-3.txt

4- paramspider -d example.com

5- cat alive-subs.txt | hakrawler | tee -a endpoints-5.txt

6- cat endpoints-*.txt | uro | tee -a endpoints-uro.txt (combine all the collected URL lists and deduplicate them with uro)

7- cat endpoints-uro.txt | Gxss | dalfox pipe --multicast --skip-mining-all (accurate; Gxss checks whether an injected test value is reflected back in the response, and --skip-mining-all skips Dalfox's parameter mining since we already collected parameterized URLs. Remove that flag if you want mining.)

[ OR ]
8- dalfox url http://example.com --custom-payload payloads.txt (simple single-target scan with a custom payload list)
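For step 8, payloads.txt is just a plain text file with one XSS payload per line; two classic illustrative entries (not a curated list):

<script>alert(1)</script>
"><img src=x onerror=alert(1)>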

#Method-2 (Using XSS_vibes)
1- katana -u alive-subs.txt -o endpoints-1.txt

2- waybackurls http://example.com | grep = | tee endpoints-2.txt

3- ./gau example.com >> endpoints-3.txt

4- paramspider -d example.com

5- cat alive-subs.txt | hakrawler | tee -a endpoints-5.txt

6- cat endpoints-*.txt | uro | tee -a endpoints-uro.txt (combine all the collected URL lists and deduplicate)

7- cat endpoints-uro.txt | ./gf xss | sed 's/=.*/=/' > output.txt (gf filters for XSS-prone parameters and sed strips the parameter values; sed has no -o flag, so redirect stdout instead. See the example after this list.)

8- python3 main.py -f output.txt -o <output> (feed the file from step 7 into XSS_vibes)
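A quick illustration of what the sed in step 7 does to a hypothetical URL (it strips everything after the first = so the scanner can inject its own values):

echo 'https://target.tld/search?q=hello' | sed 's/=.*/=/'
# prints: https://target.tld/search?q=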

Note: as an alternative XSS automation tool, you can use Xssorv2 by Ibrahim HusiΔ‡ for better results; it is effective and advertised as highly accurate πŸ’―

Ref: Linkedin_stuffs
@UndercodeCommunity
▁ β–‚ β–„ Uπ•Ÿπ”»β’Ίπ«Δ†π”¬π““β“” β–„ β–‚ ▁
Forwarded from UNDERCODE TESTING
πŸ¦‘AI Model for Hackers:


4 Security AI for Pentesting

>> This model is designed to accurately detect and classify commands associated with four essential security tools used in pentesting: Nmap, Metasploit, John the Ripper, and the Social Engineering Toolkit (SET). It leverages a Naive Bayes classifier trained on a comprehensive dataset of commands for these tools, enhancing the accuracy and effectiveness of recognizing and categorizing such commands.


Tools Included

1️⃣Nmap: A network scanning tool used to discover hosts and services on a computer network.

2️⃣Metasploit (msploit): A penetration testing framework for exploiting known vulnerabilities.

3️⃣John the Ripper (jtr): A password cracking software used to test password strength and recover lost passwords.

4️⃣Social Engineering Toolkit (SET): A collection of tools for conducting social engineering attacks.

>> Structure
The model has been trained to detect commands formatted to specify the tool being used. Each command or query is associated with one of the four tools, allowing for precise classification.
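Based on the parsing code in the example below, each line of trainingdata.txt holds a quoted query followed by a quoted tool label, separated by a comma. A few hypothetical lines (the exact label strings are an assumption drawn from the tool abbreviations above):

"Scan all TCP ports on 10.0.0.5", "nmap"
"Search for an exploit module for vsftpd", "msploit"
"Crack the hashes in hashes.txt with a wordlist", "jtr"
"Clone a login page for a phishing test", "set"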

Example:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report
import joblib

# Load the dataset from the txt file
data_path = 'trainingdata.txt'
data = []

# Read the file and parse the data
with open(data_path, 'r') as file:
    lines = file.readlines()
    for line in lines:
        # Split each line into question and tool by the last comma
        parts = line.rsplit(', "', 1)
        if len(parts) == 2:
            question = parts[0].strip().strip('"')
            tool = parts[1].strip().strip('",')
            data.append((question, tool))

# Create a DataFrame
df = pd.DataFrame(data, columns=['question', 'tool'])

# Split the data
X_train, X_test, y_train, y_test = train_test_split(df['question'], df['tool'], test_size=0.2, random_state=42)

# Vectorize the text data
vectorizer = TfidfVectorizer()
X_train_vectorized = vectorizer.fit_transform(X_train)
X_test_vectorized = vectorizer.transform(X_test)

# Train a Naive Bayes classifier
clf = MultinomialNB()
clf.fit(X_train_vectorized, y_train)

# Make predictions
y_pred = clf.predict(X_test_vectorized)

# Print the classification report
print(classification_report(y_test, y_pred))

# Save the model and vectorizer
joblib.dump(clf, 'findtool_model.pkl')
joblib.dump(vectorizer, 'vectorizer.pkl')
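
A minimal usage sketch for the saved artifacts, assuming the file names above (the sample queries are hypothetical):

import joblib

# Load the trained classifier and the fitted TF-IDF vectorizer
clf = joblib.load('findtool_model.pkl')
vectorizer = joblib.load('vectorizer.pkl')

# Classify a few example commands/queries
queries = [
    'nmap -sV -p- 192.168.1.10',
    'use exploit/multi/handler',
    'john --wordlist=rockyou.txt hashes.txt',
]
X = vectorizer.transform(queries)
for query, tool in zip(queries, clf.predict(X)):
    print(f'{tool}: {query}')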

@UndercodeCommunity
▁ β–‚ β–„ Uπ•Ÿπ”»β’Ίπ«Δ†π”¬π““β“” β–„ β–‚ ▁