UNDERCODE COMMUNITY
2.7K subscribers
1.24K photos
31 videos
2.65K files
81.2K links
🦑 Undercode World!
@UndercodeCommunity


1️⃣ The world's first platform that collects & analyzes every new hacking method.
+ Practice
@Undercode_Testing

2️⃣ Cyber & Tech NEWS:
@Undercode_News

3️⃣ CVE @Daily_CVE


Youtube.com/Undercode
by Undercode.help
Forwarded from Backup Legal Mega
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁

🦑Telenet: The Secret Exposed :

For years, people (myself included) have often tried to "work Telenet into a coma",
with no success. Over the past few years I have gathered data, and finally
know the system: its faults, capabilities, and errors.
This really should be in a text file, but I wish this information to
be reserved for the few users on this system:


🦑Before we start, here are a few basic commands to get familiar with:

Function              Syntax of command       Description
------------------------------------------------------------------------
Connect               c (sp) <address>        Connects to a host (address optional)
Status                stat                    Displays network port address
Full-duplex           full                    Network echo
Half-duplex           half                    Terminal echo
Mail / Telemail       mail or telemail        Connects to Telemail
Set parameters        set (sp) 2:0,3:2        Select PAD parameters
Read parameters       par? (sp) 2:0,3:2       Display PAD parameters
Set and read          set? (sp) 2:0,3:2       Set and display PAD parameters
Escape                (escape character)      Escape from data mode
File transfer         dtape                   Prepares the network for bulk transfer
Continue              cont                    Resume the data connection
Disconnect            bye or d                Disconnect from the host
Hang up               hangup                  Hang up
Terminal              term (sp) d1            Set the terminal type
Test                  test (sp) char          Character test
                      test (sp) echo          Echo test
                      test (sp) triangle      Triangle test


This is the end of the commands; view the next message for usage:

Trap and pipe the X.25 protocol (Telenet)...

Please note this is a very difficult transaction. The following
flow chart will only work on a machine with at least 10 MHz.
However, an account on a Unix with cu capabilities will also work.

Packet networking is exactly what it means.
Before I go into detail, let me give you an overview...



-------------
    Host
-------------
      !
      !
-----------------
 telenet remote
 divertor and
 package
-----------------
  !   !   !   !
  u   u   u   u
  s   s   s   s
  e   e   e   e
  r   r   r   r
  s   s   s   s

▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁

🦑Telenet: The Secret Exposed, Part 2

If you look carefully, there is one line to the host and 4 users. That is how it is packetized: for instance, the first 100 ms will be from user one, then user two, etc.

> The way Telenet can tell which user is which is simply by time. Time is of the essence: data is constantly being packed, anywhere from 100 ms to 760 ms. The trick to trap-tapping and piping a lead off of Telenet is to have a system running four processes at the same time, and a master program

> that switches at the appropriate delays... As you can see, this is where a 10 MHz+ system is needed.
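
The round-robin time-slicing described above can be sketched as a toy multiplexer in Python (the 100 ms slot length and the four users come from the text; the byte streams and function names are illustrative):

```python
from collections import defaultdict

def multiplex(streams, slot_ms=100):
    """Interleave per-user data into fixed time slots, round-robin."""
    frames = []                      # (slot_start_ms, user, chunk)
    queues = {u: list(d) for u, d in streams.items()}
    t = 0
    while any(queues.values()):
        for user in sorted(queues):  # fixed user order per cycle
            if queues[user]:
                frames.append((t, user, queues[user].pop(0)))
                t += slot_ms
    return frames

def demultiplex(frames):
    """Recover each user's stream purely from slot timing/order."""
    out = defaultdict(str)
    for _, user, chunk in sorted(frames):
        out[user] += chunk
    return dict(out)

streams = {"u1": "AB", "u2": "CD", "u3": "EF", "u4": "GH"}
frames = multiplex(streams)
assert demultiplex(frames) == streams  # streams survive the interleave
```

This is why the tap needs four concurrent processes switching on the slot clock: the only thing separating one user's data from another's is the timing.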

🦑On the host end.

The host end consists of three things:

1) A 9600 baud modem

2) A dedicated telco line

3) A network PAD

@UndercodeTesting
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁

🦑Tutorial for installing Ubuntu 20.04 and the NVIDIA driver:


🦑𝕃𝔼𝕋'𝕊 𝕊𝕋𝔸ℝ𝕋 :

1) Boot and press F2 to enter the BIOS

2) Disable Secure Boot in the security settings

3) Build PyTorch

> Install Miniconda3
> conda create -n pytorch python=3.7
> conda activate pytorch
> conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
> conda install pytorch=0.4.1 torchvision cuda90

🦑As an example, a good Chinese graphics card
🦑Install PyCharm

> Click Tools -> Create Desktop Entry to generate a launcher shortcut.
> Set the project interpreter to the pytorch environment, then test that the GPU is available:
> import torch
> flag = torch.cuda.is_available()
> print(flag)

🦑ngpu= 1
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
print(device)
print(torch.cuda.get_device_name(0))
print(torch.rand(3,3).cuda())
# True
# cuda:0
# GeForce GTX 1060
# tensor([[0.5772, 0.5287, 0.0946],
# [0.9525, 0.7855, 0.1391],
# [0.6858, 0.5143, 0.8188]], device='cuda:0')

🦑Install TensorFlow 1.x

import tensorflow as tf
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
import warnings
warnings.filterwarnings("ignore")
hello = tf.constant("Hello, TensorFlow")
print(hello)
a = tf.constant([1.0, 2.0])  # define constants
b = tf.constant([3.4, 4.0])
result1 = a + b
print("a+b=", result1)
c = tf.constant([[3.0], [1.4]])
result2 = a + c
sess = tf.Session()
try:
    print("result1:", result1)
    print(sess.run(result1))
    print("result2:", result2)
    print(sess.run(result2))
    print(sess.run(hello))
except Exception:
    print("Exception")
finally:
    sess.close()
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁

🦑Full tutorial for automated installation of Ubuntu 20.04 and the NVIDIA driver:

1) Open a terminal and type:

$ ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001C03sv00001043sd000085ABbc03sc00i00
vendor : NVIDIA Corporation
model : xxyy
driver : xxxyy

2) $ sudo ubuntu-drivers autoinstall

3) $ sudo apt install nvidia-driver-440

4) $ sudo reboot

🦑Method 2:

1) Type in a terminal:

$ lshw -numeric -C display
or
$ lspci -vnn | grep VGA
or
$ ubuntu-drivers devices

2) $ ls
NVIDIA-Linux-x86_64-440.44.run

3) $ sudo apt install build-essential libglvnd-dev pkg-config

4) $ sudo telinit 3

5) $ sudo bash NVIDIA-Linux-x86_64-440.44.run

6) $ sudo reboot

> After reboot you should be able to start the NVIDIA X Server Settings app from the Activities menu.

@UndercodeTesting
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁

🦑FROM A RANDOM WIKILEAKS GIT:


> The "I can never remember that alias I set" Trick


> aliases = !git config --get-regexp 'alias.*' | colrm 1 6 | sed 's/[ ]/ = /' | sort
🦑The "Gitignore" Trick
$ git config --global core.excludesfile ${HOME}/.gitignore
Then create a ~/.gitignore; .gitignore follows glob syntax.
🦑The "The Git URL is too long" Trick



🦑The "I forgot something in my last commit" Trick
# first: stage the changes you want incorporated in the previous commit

# use -C to reuse the previous commit message in the HEAD
$ git commit --amend -C HEAD
# or use -m to make a new message
$ git commit --amend -m 'add some stuff and the other stuff i forgot before'

🦑The "Oh crap I didn't mean to commit yet" Trick
# undo last commit and bring changes back into staging (i.e. reset to the commit one before HEAD)
$ git reset --soft HEAD^

🦑The "That commit sucked! Start over!" Trick
# undo last commit and destroy those awful changes you made (i.e. reset to the commit one before HEAD)
$ git reset --hard HEAD^

🦑The "Oh no I should have been working in a branch" Trick
# takes uncommitted tracked changes and 'stashes' them for later, reverting the working tree to HEAD.
$ git stash

# creates new branch and switches to it, then takes the stashed changes and stages them in the new branch. fancy!
$ git stash branch new-branch-name

🦑The "OK, which commit broke the build!?" Trick
# Made lots of local commits and haven't run any tests...
$ [unittest runner of choice]
# Failures... now unclear where it was broken.

# git bisect to rescue.
$ git bisect start # to initiate a bisect
$ git bisect bad # to tell bisect that the current rev is the first spot you know was broken.
$ git bisect good <some tag or rev that you knew was working>
$ git bisect run [unittest runner of choice]
# Some runs.
# BLAMO -- git shows you the commit that broke
$ git bisect reset #to exit and put code back to state before git bisect start
# Fix code. Run tests. Commit working code. Make the world a better place.

🦑The "I have merge conflicts, but I know that one version is the correct one" Trick, a.k.a. "Ours vs. Theirs"
# in master
$ git merge a_branch
CONFLICT (content): Merge conflict in conflict.txt
Automatic merge failed; fix conflicts and then commit.
$ git status -s
UU conflict.txt

# we know the version of the file from the branch is the version we want.
$ git checkout --theirs conflict.txt
$ git add conflict.txt
$ git commit

# Sometimes during a merge you want to take a file from one side wholesale.

🦑 The following aliases expose the ours and theirs commands, which let you
# pick a file(s) from the current branch or the merged branch respectively.
#
# N.b. the function is there as hack to get $@ doing
# what you would expect it to as a shell user.
# Add the below to your .gitconfig for easy ours/theirs aliases.
# ours = "!f() { git checkout --ours $@ && git add $@; }; f"
# theirs = "!f() { git checkout --theirs $@ && git add $@; }; f"

🦑The "Workaround Self-signed Certificates" Trick
This trick should no longer be necessary for using Stash, so long as you have the certificate for DEVLAN Domain Controller Certificate Authority installed.

# Issue: When attempting to clone (or any other command that interacts with the remote server) git by default validates
# the presented SSL certificate by the server. Our server's certificate is not valid and therefore git exits out with an error.
# Resolution(Linux): For a one time fix, you can use the env command to create an environment variable of GIT_SSL_NO_VERIFY=TRUE.
$ env GIT_SSL_NO_VERIFY=TRUE git <command> <arguments>
🦑 If you don't want to do this all the time, you can change your git configuration:
$ git config --global http.sslVerify false
🦑Split a subdirectory into a new repository/project
$ git clone ssh://stash/proj/mcplugins.git
$ cd mcplugins
$ git checkout origin/master -b mylib
$ git filter-branch --prune-empty --subdirectory-filter plugins/mylib mylib
$ git push ssh://stash/proj/mylib.git mylib:master

🦑Local Branch Cleanup
# Delete local branches that have been merged into HEAD
$ git branch --merged | grep -v '\*\|master\|develop' | xargs -n 1 git branch -d


🦑Delete local branches that have been merged into origin/master
$ git branch --merged origin/master | grep -v '\*\|master\|develop' | xargs -n 1 git branch -d
# Show what local branches haven't been merged to HEAD
$ git branch --no-merged | grep -v '\*\|master\|develop'



▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁

🦑Email/WhatsApp bombers collection, 2020 updated:
t.me/UndercodeTesting

1) git clone https://github.com/bhattsameer/Bombers

2) cd Bombers

3) Run with Python:

> python Email_bomber.py for email (verified)

> SMS_bomber.py
> SMS_bomber_version2.py, an updated version (not verified)

> numspy_bomber.py

🦑FOR WHATSAPP SPAM:

git clone https://github.com/tbhaxor/whatabomb

cd whatabomb

$ pip install PyQt5 selenium

Or install everything automatically:
$ pip install -r requirements.txt

$ python bomb.py
> requires WhatsApp Web

▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁

🦑Binary dump is the fastest way to dump a database. Binary dump files are portable across all platforms, regardless of CPU type.

To perform a binary dump:

1) Start a server on the database to be dumped.

Since fewer users are on the system during a dump, more memory can be allocated to the shared memory pool with the -B startup parameter.

> Example:

$ proserve sports -B 100000


2) Start multiple binary dump sessions.

Since these operations are I/O intensive, 3 to 4 sessions per CPU are recommended. Further improvement can be obtained by dumping the data to different disks.
Example:

$ proutil sports -C dump customer /disk1/temp/data
$ proutil sports -C dump invoice /disk2/temp/data
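
The parallel dump sessions above can be generated from a table list instead of typed by hand. A minimal Python sketch that spreads tables across disks round-robin (the table names, database name, and disk paths are hypothetical examples in the style of the commands above):

```python
def dump_commands(db, tables, disks):
    """Emit one proutil binary-dump command per table, alternating disks."""
    cmds = []
    for i, table in enumerate(tables):
        disk = disks[i % len(disks)]  # round-robin over the available disks
        cmds.append(f"proutil {db} -C dump {table} {disk}/temp/data")
    return cmds

for cmd in dump_commands("sports", ["customer", "invoice", "order"],
                         ["/disk1", "/disk2"]):
    print(cmd)
# proutil sports -C dump customer /disk1/temp/data
# proutil sports -C dump invoice /disk2/temp/data
# proutil sports -C dump order /disk1/temp/data
```

Each emitted line can then be launched as its own background session, 3 to 4 per CPU as recommended above.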



🦑To perform a binary load:

1) The binary load is usually the fastest way to load data into the database when the amount of data in the tables is large. When the tables are small (a low record count), the Data Dictionary dump and load may be faster, since there is some overhead in parsing the header of the binary load file (.bd). With only a few records in a table, that header-parsing overhead takes longer than loading the data through the Data Dictionary / Data Administration tool.

2) Start the database multi-user with no integrity (-i). Understand that should any error occur while the -i flag is in use, the database cannot perform crash recovery to undo the operations, in which case the load must be re-base-lined.

3) Start multiple load sessions, one session per storage area.
When an Enterprise Database License is in use, start a Before-Image Writer (BIW) and 2-4 Asynchronous Page Writers (APWs).
The best database block size is 8K, provided records-per-block has been considered.

4) Several Articles discuss building scripts for the binary load:

000021664, How to perform a binary dump and load?
000011828, How to generate scripts to run binary dump and load for all tables?

5) When loading to Type I storage areas, binary load tables with the smallest records first, and run one binary load per storage area so as not to cause fragmentation during the load.

6) This is no longer as important with the advent of Type II storage areas in OpenEdge 10, which are the preferred and recommended storage area structure. Refer to Article 000022209, "Does loading small records first still affect fragmentation in the Type II Storage Area architecture?"

7) Use PROUTIL <db> -C TABANALYS to determine the table(s) with the smallest records.

8) This strategy reduces scatter because Progress loads as many records as it can into a given block before it moves on to the next, when either the records per block or the space in the block is exhausted.

9) If larger records are loaded first, there are very likely to be blocks with enough record slots and block space remaining to fit smaller records. This leads to fragmentation because the small records are scattered throughout the area.

10) If only one table needs to be dumped and loaded, binary load it into a new storage area by also reloading the schema definition for the table and its indexes. Loading the table back into a Type I storage area will not improve the scatter factor, as records will mainly be loaded into the remaining space they left when deleted.
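
The small-records-first rationale in the steps above can be illustrated with a toy first-fit block-packing simulation. This is not OpenEdge's actual space-allocation algorithm; the block size, slot limit, and record sizes are made-up numbers chosen only to show how loading large records first scatters later small records across many more blocks:

```python
def load(records, block_size=8192, slots=64):
    """First-fit loader: each record goes into the first block that still has
    enough free space and a free record slot; otherwise a new block is opened."""
    blocks = []   # each block: [free_bytes, list_of_record_sizes]
    placed = []   # (block_index, record_size) per record, in load order
    for size in records:
        for i, blk in enumerate(blocks):
            if blk[0] >= size and len(blk[1]) < slots:
                blk[0] -= size
                blk[1].append(size)
                placed.append((i, size))
                break
        else:
            blocks.append([block_size - size, [size]])
            placed.append((len(blocks) - 1, size))
    return blocks, placed

def scatter(placed, size):
    """Number of distinct blocks containing records of the given size."""
    return len({i for i, s in placed if s == size})

small, large = [100] * 200, [3000] * 20
_, small_first = load(small + large)
_, large_first = load(large + small)
print(scatter(small_first, 100), scatter(large_first, 100))  # → 4 10
```

Small-first packs the 100-byte records into 4 dense blocks; large-first leaves slack in every large-record block, so the small records end up spread across 10 blocks, which is the scatter effect steps 8 and 9 describe.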



Powered by wiki
▁ ▂ ▄ u𝕟𝔻Ⓔ𝐫Ć𝔬𝓓ⓔ ▄ ▂ ▁