▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 #FastTips: Changing Your Authentication Model
» Problem
You need to change the authentication model from the default User: your application uses namespaces, or you want a differently named model for users.
» Solution
Edit app/config/auth.php to change the model.
'model' => 'MyApp\Models\User',
» Discussion
Don't forget the required interfaces.
🦑 If you're using your own model, it is important that it implements Auth's UserInterface. If you're implementing the password reminder feature, it should also implement RemindableInterface.
<?php namespace MyApp\Models;
use Illuminate\Auth\UserInterface;
use Illuminate\Auth\Reminders\RemindableInterface;
class User extends \Eloquent implements UserInterface, RemindableInterface
{
...
}
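To make the requirement concrete, here is a minimal sketch with the two interfaces stubbed inline so it runs outside the framework. In a real application you would implement Illuminate's actual interfaces instead; in Laravel 4 their contracts require exactly the methods shown. The property values are illustrative.

```php
<?php
// Stand-ins for Laravel 4's auth interfaces so this sketch runs outside
// the framework; real code implements Illuminate\Auth\UserInterface and
// Illuminate\Auth\Reminders\RemindableInterface instead.
interface UserInterface {
    public function getAuthIdentifier();
    public function getAuthPassword();
}
interface RemindableInterface {
    public function getReminderEmail();
}

// In the real model these values come from the Eloquent-backed table columns.
class User implements UserInterface, RemindableInterface
{
    public $id = 1;
    public $password = 'hashed-password';
    public $email = 'user@example.com';

    public function getAuthIdentifier() { return $this->id; }
    public function getAuthPassword()   { return $this->password; }
    public function getReminderEmail()  { return $this->email; }
}

$user = new User;
echo $user->getReminderEmail(), "\n";   // user@example.com
```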
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
Forwarded from Backup Legal Mega
🦑 Full Game C# Programming Bootcamp “6.17 GB”
https://mega.nz/folder/dFgw1SLa#jSH1Arv8ARrYWKSYMpTOyQ
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 Method 2020 to get Instagram posts/profile/hashtag data without using the Instagram API: crawler.py
Like posts automatically: liker.py
INSTALLATION & RUN :
1) Make sure you have Chrome browser installed.
2) Download chromedriver and put it into bin folder: ./inscrawler/bin/chromedriver
> https://sites.google.com/a/chromium.org/chromedriver/
3) Install Selenium: pip3 install -r requirements.txt
4) cp inscrawler/secret.py.dist inscrawler/secret.py
E X A M P L E S :
python crawler.py posts_full -u cal_foodie -n 100 -o ./output
python crawler.py posts_full -u cal_foodie -n 10 --fetch_likers --fetch_likes_plays
python crawler.py posts_full -u cal_foodie -n 10 --fetch_comments
python crawler.py profile -u cal_foodie -o ./output
python crawler.py hashtag -t taiwan -o ./output
python crawler.py hashtag -t taiwan -o ./output --fetch_details
python crawler.py posts -u cal_foodie -n 100 -o ./output # deprecated
🦑 MORE USAGE :
1) In mode posts you get the url, caption, and first photo of each post; in mode posts_full you also get all photos, time, comments, and the number of likes and views. Mode posts_full takes far longer than mode posts. (Mode posts is deprecated; note that for recent posts there is no quick way to get the caption.)
2) Returns 100 hashtag posts (mode hashtag) and all of a user's posts (mode posts) by default when the number of posts is not specified with -n, --number.
3) Prints the result to the console when the output path is not specified with -o, --output.
4) Fetching slows down considerably once the number of posts exceeds about 1000, since Instagram rate-limits data requests.
5) Don't use this crawler on Instagram users with more than 10000 posts.
enjoy ❤️👍🏻
✅ git 2020
@undercodeTesting
@UndercodeHacking
@UndercodeSecurity
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
Forwarded from UNDERCODE COMMUNITY
Where to upload a PHP web mailer.pdf
685.6 KB
Forwarded from UNDERCODE NEWS
The US 5G network speed lags behind the world: the average downlink is only 50.9 Mb/s
#technologie
Forwarded from Backup Legal Mega
Worth more than $6K:
Another 4 TB of money-making courses
Ebooks
Cracking
Carding
Wifi hacking
Accounts ...
and many others:
#REPOSTED
https://mega.nz/folder/TjAGxSLD#pi9cuU55Kqze_7v9tzsMHQ
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 youtube-bot topic 2020 (views) :
INSTALLATION & RUN :
Run following commands in the terminal:
1) curl -fs https://gitlab.com/DeBos/mpt/raw/master/mpt.sh | sh -s install "git python"
2) git clone https://gitlab.com/DeBos/ytviewer.git
3) cd ytviewer
4) make
then
Run following command in the command prompt or the terminal:
5) python main.py [-h] [-u URL|FILE] [-p N] [-B firefox|chrome] [-P FILE] [-R REFERER|FILE] [-U USER_AGENT|FILE]
for more usage visit https://github.com/DeBos99/ytviewer#usage
enjoy ❤️👍🏻
✅ git 2020
@undercodeTesting
@UndercodeHacking
@UndercodeSecurity
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 #FastTips: Finding Files Matching a Pattern
Use the File::glob() method.
$log_files = File::glob('/test/*.log');
if ($log_files === false)
{
die("An error occurred.");
}
You can also pass flags to the method.
$dir_list = File::glob('/test/*', GLOB_ONLYDIR);
if ($dir_list === false)
{
die("An error occurred.");
}
Valid flags are:
GLOB_MARK - Adds a slash to each directory returned
GLOB_NOSORT - Returns files as they appear in the directory (no sorting)
GLOB_NOCHECK - Returns the search pattern if no files matched it
GLOB_NOESCAPE - Backslashes do not quote meta-characters
GLOB_BRACE - Expands {a,b,c} to match "a", "b", or "c"
GLOB_ONLYDIR - Returns only directory entries matching the pattern
GLOB_ERR - Stops on read errors (like unreadable directories); by default errors are ignored
The method returns an empty array if no files match, or false on error.
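Laravel's File::glob() is a thin wrapper around PHP's native glob(), so the flags above behave as in this plain-PHP sketch (the temp-directory setup is just scaffolding for the demo):

```php
<?php
// Build a small throwaway directory tree to demonstrate the flags.
$dir = sys_get_temp_dir() . '/glob_demo_' . getmypid();
@mkdir($dir . '/sub', 0777, true);
file_put_contents($dir . '/a.log', '');
file_put_contents($dir . '/b.log', '');

$logs   = glob($dir . '/*.log');            // the two .log files
$dirs   = glob($dir . '/*', GLOB_ONLYDIR);  // only the 'sub' directory
$marked = glob($dir . '/*', GLOB_MARK);     // directories get a trailing slash
$none   = glob($dir . '/*.missing');        // no matches: empty array, not false

printf("%d logs, %d dirs\n", count($logs), count($dirs));
```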
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
Forwarded from Backup Legal Mega
More than 1 million tutorials :)
https://mega.nz/folder/wcRAzKSY#ijvPGnJJ8unV4o9wiE1trQ
Enjoy ❤️👍🏻
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 Crawl and analyze a file :
Crawling and analyzing a file is very simple. This tutorial walks you through it step by step with an example. Let's start!
1) First, decide which URL to crawl. It can be set in the script or passed via $QUERY_STRING. For simplicity, let's set the variable directly in the script.
<?php
$url = 'http://www.php.net';
?>
In the second step, we grab the specified file and store it in an array with the file() function.
<?php
$url = 'http://www.php.net';
$lines_array = file($url);
?>
2) Okay, now the file's lines are in the array. However, the text we want to analyze may not all be on one line, so we convert the array $lines_array into a single string with implode(x, y). If you plan to use explode() on the string later, it may be better to set x to "|" or "!" or a similar separator, but for our purposes it is best to set x to an empty string. y is the other required parameter: the array you want implode() to join.
<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
?>
3) Now the crawling work is finished, and it's time to analyze the result. For this example, we want to get everything between <head> and </head>. To analyze the string, we also need something called a regular expression.
<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
eregi("<head>(.*)</head>", $lines_string, $head);
?>
4) Let's take a look at the code. As you can see, the eregi() function is called in the following format:
eregi("<head>(.*)</head>", $lines_string, $head);
"(.*)" matches anything, so the call can be read as "capture everything between <head> and </head>". $lines_string is the string being analyzed, and $head is the array where the result is stored. (Note: ereg/eregi were removed in PHP 7; use preg_match() with the /i modifier instead.)
5) Finally, we can output the data. Because there is only one <head>…</head> pair, we can safely assume there is only one element in the array, and it is what we want. Let's print it out.
<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
eregi("<head>(.*)</head>", $lines_string, $head);
echo $head[0];
?>
6) This is all the code.
<?php
$url = 'http://www.php.net';
$lines_array = file($url);
$lines_string = implode('', $lines_array);
preg_match_all("/<body([^>]*)>(.*)<\/body>/is", $lines_string, $m);
echo "<xmp>";
echo $m[2][0];
?>
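A self-contained variant of the examples above, using preg_match() (the modern replacement for the removed eregi()); an inline HTML string stands in for the fetched page so the sketch runs without network access:

```php
<?php
// Inline stand-in for $lines_string = implode('', file($url));
$lines_string = '<html><head><title>Demo page</title></head>'
              . '<body class="home">Hello</body></html>';

// Everything between <head> and </head>; /i = case-insensitive,
// /s lets "." also match newlines, like the old eregi example.
if (preg_match('#<head>(.*)</head>#is', $lines_string, $head)) {
    echo $head[1], "\n";    // <title>Demo page</title>
}

// The body variant from the last snippet, with preg_match_all():
preg_match_all('#<body([^>]*)>(.*)</body>#is', $lines_string, $m);
echo $m[2][0], "\n";        // Hello
```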
@undercodeTesting
@UndercodeHacking
@UndercodeSecurity
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
Using Powershell to programmatically run nmap scans.pdf
252 KB
The resulting script could only be described as a quick hack: about ten lines of PowerShell to read a text file and iterate over each line, running the required nmap command and checking to make sure that the XML file actually saved.
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 FULL AIRMON-NG TUTORIAL :
(KALI-PARROT-UBUNTU...)
How to put wireless cards into monitor mode:
For this purpose we will use airmon-ng, a POSIX sh script designed specifically for this job:
$ sudo airmon-ng --help
usage: airmon-ng <start|stop|check> <interface> [channel or frequency]
See the interface's status
To view the interface's status, type the following command into the terminal:
$ sudo airmon-ng
Kill background processes
Use the following command to check whether any program is running in the background:
$ sudo airmon-ng check
You can also terminate any process that you think is interfering with airmon-ng or taking up memory using:
$ sudo airmon-ng check kill
How to enable Monitor Mode using Airmon-ng
If you have tried enabling monitor mode using iw and failed, it is a good idea to try a different method.
The first step is to get the information about your wireless interface
$ sudo airmon-ng
Of course, you will want to kill any process that can interfere with using the adapter in monitor mode. To do that, run:
$ sudo airmon-ng check
$ sudo airmon-ng check kill
Now we can enable the Monitor Mode without any interference.
$ sudo airmon-ng start wlan0
A new interface, wlan0mon, is created. Verify with:
$ sudo iwconfig
Now, you can use the following commands to disable the monitor mode and return to the managed mode.
$ sudo airmon-ng stop wlan0mon
To restart the network manager afterwards:
$ sudo systemctl start NetworkManager
How to turn off the NetworkManager that prevents Monitor Mode
$ sudo systemctl stop NetworkManager
@undercodeTesting
@UndercodeHacking
@UndercodeSecurity
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 #Requested: How to Add a New Column to an Existing Table in a Migration?
I've had a problem: I couldn't add a new column to my users table, and I can't seem to figure it out.
I tried editing the migration file using…
<?php
public function up()
{
Schema::create('users', function ($table) {
$table->integer("paid");
});
}
In terminal, I execute php artisan migrate:install and migrate.
🦑 How do I add new columns?
Solution :
You cannot update a migration that has already run: once it is recorded in the migrations table, it won't be processed again. The solution is to create a new migration, using the make:migration command on the Artisan CLI (migrate:make in Laravel 4). Use a specific name to avoid clashing with existing migrations.
1) for Laravel 5+:
php artisan make:migration add_paid_to_users_table --table=users
You will be using the Schema::table() method (since you're modifying an existing table, not creating a new one). You can add a column like this:
public function up()
{
Schema::table('users', function($table) {
$table->integer('paid');
});
}
2) and don't forget to add the rollback option:
public function down()
{
Schema::table('users', function($table) {
$table->dropColumn('paid');
});
}
3) Then you can run your migrations:
> php artisan migrate
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
Gain access to unsecured IP cameras with these Google dorks.txt
1.4 KB
Gain access to unsecured IP cameras with these Google dorks
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁
🦑 TwitterBOT :
INSTALLATION :
Connecting to Twitter :
1) Register a Twitter account and also get its "app info".
Twitter doesn't allow you to register multiple twitter accounts on the same email address. I recommend you create a brand new email address (perhaps using Gmail) for the Twitter account. Once you register the account to that email address, wait for the confirmation email.
2) Now go here and log in as the Twitter account for your bot:
3) Fill up the form and submit.
Once the submission completes you will be taken to a page with the following tabs:
"Update details" tab
"Permissions" tab : enable Read and Write.
"Key and Access Token" tab : click on "Create my access token".
4) Use the generated tokens from the "Key and Access Token" tab to fill in the fields of the config.js file in your app directory. It should look like this:
module.exports = {
consumer_key: 'blah',
consumer_secret: 'blah',
access_token: 'blah',
access_token_secret: 'blah'
}
5) Update the code in bot.js with your own values; better still, modify the code and tinker with it.
6) Now type the following on the command line in your project directory:
node bot.js
7) Hopefully at this point you see a message like "Success! Check your bot, it should have retweeted something." OK, it won't say that by itself; you have to code that in. It's as simple as console.log("Success! Check your bot, it should have retweeted something.");
8) Check the Twitter account for your bot; it should have retweeted a tweet with the provided hashtag.
RUN :
1) git clone https://github.com/nisrulz/twitterbot-nodejs.git
2) Run
npm install
enjoy ❤️👍🏻
@UndercodeTesting
@UndercodeHacking
@UndercodeSecurity
▁ ▂ ▃ UNDERCODE ▃ ▂ ▁