🦑 When all the chunks have been uploaded successfully, the server is notified to merge them; if any chunk fails to upload, an "Upload failed" message is shown. On re-upload, the file's upload status is looked up by its MD5. If the server already has a chunk for that MD5, the chunk was uploaded before and does not need to be sent again; if the server cannot find it, the chunk still needs to be uploaded. The user only uploads this missing part, and the whole file is completed from there. This is the breakpoint resume (resumable upload) of the file.
upload (i, fileMd5Value, file) {
  const { uploadProgress, chunks } = this
  return new Promise((resolve, reject) => {
    let { chunkSize } = this
    // Build the request body with FormData (an HTML5 API)
    let end =
      (i + 1) * chunkSize >= file.size ? file.size : (i + 1) * chunkSize
    let form = new FormData()
    form.append('data', file.slice(i * chunkSize, end)) // the data of this chunk
    form.append('total', chunks) // total number of chunks
    form.append('index', i) // index of the current chunk
    form.append('fileMd5Value', fileMd5Value) // MD5 of the whole file
    services
      .uploadLarge(form)
      .then(data => {
        if (data.ok) {
          this.hasUploaded++
          uploadProgress(file)
        }
        console.log({ data })
        resolve(data)
      })
      .catch(err => {
        reject(err)
      })
  })
}
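Building on this method, the resume check described above can be sketched as a companion method on the same component. This is a minimal sketch: services.checkFile and its response shape are assumptions for illustration, not part of the original code.

// Ask the server which chunk indexes already exist for this MD5,
// then upload only the missing chunks, concurrently.
async uploadMissingChunks (file, fileMd5Value) {
  // assumed endpoint: returns e.g. { uploadedChunks: [0, 1, 4] }
  const { uploadedChunks = [] } = await services.checkFile(fileMd5Value)
  const uploaded = new Set(uploadedChunks)
  const requests = []
  for (let i = 0; i < this.chunks; i++) {
    if (uploaded.has(i)) {
      this.hasUploaded++ // count already-uploaded chunks toward progress
      continue
    }
    requests.push(this.upload(i, fileMd5Value, file))
  }
  await Promise.all(requests) // remaining chunks upload concurrently
}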
5. Upload progress
Although uploading chunks in parallel is much faster than uploading a large file in a single request, there is still a noticeable loading time, so an upload progress indicator should be added to show the file's progress in real time.
Native JavaScript's XMLHttpRequest provides a progress event that reports the uploaded size and the total size. Since the project wraps ajax with axios, an onUploadProgress callback can be added to the request config to monitor the file upload progress.
const config = {
  onUploadProgress: progressEvent => {
    // `| 0` truncates the percentage to an integer
    var complete = (progressEvent.loaded / progressEvent.total * 100 | 0) + '%'
  }
}
services.uploadChunk(form, config)
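Each onUploadProgress callback only reports one chunk's request. To show overall progress for the whole file, the loaded bytes of every chunk can be accumulated. A minimal sketch, assuming the chunks count and the services.uploadChunk wrapper from above:

// Track loaded bytes per chunk index and sum them for the overall percentage
const loadedPerChunk = new Array(chunks).fill(0)

function uploadChunkWithProgress (form, index, file) {
  const config = {
    onUploadProgress: progressEvent => {
      loadedPerChunk[index] = progressEvent.loaded
      const totalLoaded = loadedPerChunk.reduce((sum, n) => sum + n, 0)
      // cap at 100: a retried chunk can briefly over-report loaded bytes
      const percent = Math.min(100, totalLoaded / file.size * 100 | 0)
      console.log(`overall upload progress: ${percent}%`)
    }
  }
  return services.uploadChunk(form, config)
}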
6. Merge chunks
After all the chunks are uploaded, the front end actively notifies the server to merge them. On receiving this request, the server locates the folder named after the file's MD5 in its upload directory and merges the chunks inside. As shown above, the chunk files are named by their sequence number, but since the chunk upload interface is asynchronous, there is no guarantee that the server received the chunks in request order. So before merging, the chunk files in the folder should be sorted by file name, and then merged with concat-files to reconstruct the file the user uploaded. With that, the large file upload is complete.
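The mergeFiles helper used in the handler below is not shown in the original; here is a minimal sketch under the assumptions above (chunks live in a folder named after the file MD5 and are named by their index), using the concat-files package:

const fs = require('fs')
const path = require('path')
const concat = require('concat-files')

// Merge the chunk files in srcDir into targetDir/newFileName.
// `size` could additionally be used to verify the merged result.
function mergeFiles (srcDir, targetDir, newFileName, size) {
  return new Promise((resolve, reject) => {
    // Chunks are named by index: sort numerically, not lexically,
    // so that chunk 10 does not sort before chunk 2
    const chunkNames = fs.readdirSync(srcDir).sort((a, b) => a - b)
    const chunkPaths = chunkNames.map(name => path.join(srcDir, name))
    concat(chunkPaths, path.join(targetDir, newFileName), err => {
      if (err) return reject(err)
      resolve()
    })
  })
}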
exports.merge = {
  validate: {
    query: {
      fileName: Joi.string()
        .trim()
        .required()
        .description('file name'),
      md5: Joi.string()
        .trim()
        .required()
        .description('file md5'),
      size: Joi.string()
        .trim()
        .required()
        .description('file size'),
    },
  },
  permission: {
    roles: ['user'],
  },
  async handler (ctx) {
    const { fileName, md5, size } = ctx.request.query
    let { name, base: filename, ext } = path.parse(fileName)
    const newFileName = randomFilename(name, ext)
    await mergeFiles(path.join(uploadDir, md5), uploadDir, newFileName, size)
      .then(async () => {
        const file = {
          key: newFileName,
          name: filename,
          mime_type: mime.getType(`${uploadDir}/${newFileName}`),
          ext,
          path: `${uploadDir}/${newFileName}`,
          provider: 'oss',
          size,
          owner: ctx.state.user.id,
        }
        const key = encodeURIComponent(file.key)
          .replace(/%/g, '')
          .slice(-100)
        file.url = await uploadLocalFileToOss(file.path, key)
        file.url = getFileUrl(file)
        const f = await File.create(omit(file, 'path'))
        const files = []
        files.push(f)
        ctx.body = invokeMap(files, 'toJSON')
      })
      .catch(() => {
        throw Boom.badData()
      })
  },
}
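From the front end, once every chunk upload has resolved, this merge route can be called. A minimal sketch; services.merge is an assumed wrapper that sends the query parameters the route validates:

// Notify the server to merge once all chunks are uploaded
async finishUpload (file, fileMd5Value) {
  const files = await services.merge({
    fileName: file.name,
    md5: fileMd5Value,
    size: file.size,
  })
  return files[0].url // the handler responds with the merged file's metadata
}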
🦑 To sum up
This UNDERCODE expert tutorial @undercodetesting
@undercodecourses
describes some practices for optimizing large file uploads. It is summarized in the following 4 points:
1) Blob.slice splits the file into chunks, multiple chunks are uploaded concurrently, and after all chunks are uploaded the server is notified to merge them, realizing chunked large file upload;
2) The native XMLHttpRequest onprogress event monitors the upload progress of each chunk, giving the file's upload progress in real time;
3) spark-md5 computes the file's MD5 from its content, yielding a unique identifier for the file that is bound to the file's upload status (see the sketch after this list);
4) Before uploading chunks, the list of already-uploaded chunks is fetched via the file MD5, and only the missing chunks are uploaded, realizing breakpoint resume.
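For point 3, the hashing itself is not shown in this section; here is a minimal sketch of computing a file's MD5 incrementally with spark-md5, reading one chunk at a time so large files need not fit in memory at once:

import SparkMD5 from 'spark-md5'

// Read the file chunk by chunk and feed each ArrayBuffer to spark-md5
function computeFileMd5 (file, chunkSize = 2 * 1024 * 1024) {
  return new Promise((resolve, reject) => {
    const spark = new SparkMD5.ArrayBuffer()
    const reader = new FileReader()
    const chunks = Math.ceil(file.size / chunkSize)
    let current = 0

    reader.onload = e => {
      spark.append(e.target.result) // hash this chunk
      current++
      // end() returns the final MD5 as a hex string
      current < chunks ? loadNext() : resolve(spark.end())
    }
    reader.onerror = reject

    function loadNext () {
      const start = current * chunkSize
      const end = Math.min(start + chunkSize, file.size)
      reader.readAsArrayBuffer(file.slice(start, end))
    }
    loadNext()
  })
}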
━━━━━━ UNDERCODE ━━━━━━
🦑 This expert tutorial is written by Undercode
> don't clone our tutorials
> support & share
━━━━━━ UNDERCODE ━━━━━━
🦑 Hack news by Undercode:
For security reasons, OpenSSH announces it will drop support for the SHA-1 authentication scheme
1) OpenSSH is one of the most popular tools for connecting to and managing remote servers. The team recently announced plans to drop support for its SHA-1 authentication scheme, citing the security problems of the SHA-1 hash algorithm, which the industry considers insecure. The algorithm was broken by Google cryptography researchers in February 2017: their SHAttered technique can produce two different files with the same SHA-1 hash.
2) At the time, however, creating a SHA-1 collision was considered very expensive, so the Google researchers estimated it would take at least half a year before costs fell enough for real-world attacks. Research reports released in May 2019 and January 2020 then detailed an improved method that reduced the cost of a SHA-1 chosen-prefix collision attack to less than $110,000 and later less than $50,000.
3) For state-level actors and high-end cybercrime groups, $50,000 is a small price for generating an SSH authentication key that lets them remotely access critical servers undetected. The OpenSSH developers said: "For this reason, we will be disabling the 'ssh-rsa' public key signature algorithm by default in a near-future release."
4) OpenSSH uses the "ssh-rsa" mode to generate SSH authentication key pairs. One key is stored on the server the user wants to log in to, and the other is kept in the user's local OpenSSH client, so the user can access the server without entering a password on every login, authenticating with the local key instead.
5) By default, OpenSSH's ssh-rsa mode generates these keys using the SHA-1 hash function, which means they are vulnerable to SHAttered-style attacks, enabling threat actors to generate duplicate keys. The OpenSSH developers said: "Unfortunately, despite the existence of better alternatives, this algorithm is still widely used, and it is the only remaining public key signature algorithm specified by the original SSH RFCs."
6) The OpenSSH team now asks server owners to check whether their keys were generated with the default ssh-rsa mode and, if so, to generate new keys using a different mode. The recommended modes are rsa-sha2-256/512 (supported since OpenSSH 7.2), ssh-ed25519 (supported since OpenSSH 6.5) or ecdsa-sha2-nistp256/384/521 (supported since OpenSSH 5.7).
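For illustration, an existing key's type can be checked and a replacement generated with standard ssh-keygen commands (default key paths assumed; adjust to your setup):
> ssh-keygen -l -f ~/.ssh/id_rsa.pub (prints the key's size, fingerprint and type)
> ssh-keygen -t ed25519 (generates a new ed25519 key pair)
> ssh-keygen -t ecdsa -b 521 (generates a new ecdsa-sha2-nistp521 key pair)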
@UndercodeTesting
Future news channel @UndercodeNews
━━━━━━ UNDERCODE ━━━━━━
🦑 New updates in the Sn1per tool
T.me/UndercodeTesting
> Automated pentest framework for offensive security experts
🦑 FEATURES:
Automatically collects basic recon (i.e. whois, ping, DNS, etc.)
Automatically launches Google hacking queries against a target domain
Automatically enumerates open ports via Nmap port scanning
Automatically exploits common vulnerabilities
Automatically brute forces sub-domains, gathers DNS info and checks for zone transfers
Automatically checks for sub-domain hijacking
Automatically runs targeted Nmap scripts against open ports
Automatically runs targeted Metasploit scan and exploit modules
Automatically scans all web applications for common vulnerabilities
Automatically brute forces ALL open services
Automatically tests for anonymous FTP access
Automatically runs WPScan, Arachni and Nikto for all web services
Automatically enumerates NFS shares
Automatically tests for anonymous LDAP access
Automatically enumerates SSL/TLS ciphers, protocols and vulnerabilities
Automatically enumerates SNMP community strings, services and users
Automatically lists SMB users and shares, checks for NULL sessions and exploits MS08-067
Automatically tests for open X11 servers
Performs high-level enumeration of multiple hosts and subnets
Automatically integrates with Metasploit Pro, MSFConsole and Zenmap for reporting
Automatically gathers screenshots of all web sites
Creates individual workspaces to store all scan output
Scheduled scans (https://github.com/1N3/Sn1per/wiki/Scheduled-Scans)
Slack API integration (https://github.com/1N3/Sn1per/wiki/Slack-API-Integration)
Hunter.io API integration (https://github.com/1N3/Sn1per/wiki/Hunter.io-API-Integration)
OpenVAS API integration (https://github.com/1N3/Sn1per/wiki/OpenVAS-Integration)
Burpsuite Professional 2.x integration (https://github.com/1N3/Sn1per/wiki/Burpsuite-Professional-2.x-Integration)
Shodan API integration (https://github.com/1N3/Sn1per/wiki/Shodan-Integration)
Censys API integration (https://github.com/1N3/Sn1per/wiki/Censys-API-Integration)
Metasploit integration (https://github.com/1N3/Sn1per/wiki/Metasploit-Integration)
🦑 For this reason, some hackers clone parts of this script and upload them to GitHub under their own names... jajaj
🦑 INSTALLATION & RUN :
1) Download https://raw.githubusercontent.com/1N3/Sn1per/master/Dockerfile
2) docker build -t sn1per .
3) docker run -it sn1per /bin/bash
or
> docker pull xerosecurity/sn1per
> docker run -it xerosecurity/sn1per /bin/bash
@undercodeTesting
━━━━━━ UNDERCODE ━━━━━━