2. Query file status
After the front end computes the file's MD5, it asks the back end whether a folder named after that MD5 exists. If it does, the back end lists all files in the folder and returns the list of chunks that have already been uploaded; if it does not, the list of already-uploaded chunks is empty.
checkFileMD5 (file, fileName, fileMd5Value, onError) {
  const fileSize = file.size
  const { chunkSize, uploadProgress } = this
  this.chunks = Math.ceil(fileSize / chunkSize)
  return new Promise(async (resolve, reject) => {
    const params = {
      fileName: fileName,
      fileMd5Value: fileMd5Value,
    }
    const { ok, data } = await services.checkFile(params)
    if (ok) {
      this.hasUploaded = data.chunkList.length
      uploadProgress(file)
      resolve(data)
    } else {
      reject(ok)
      onError()
    }
  })
}
3. File fragmentation
The core of large-file upload optimization is slicing. The Blob object exposes a slice method that can cut a file into pieces, and since File inherits from Blob, a File object also has slice.
Define the size of each chunk as chunkSize, derive the chunk count chunks from the file size fileSize and chunkSize, then cut the file with a for loop and file.slice(), numbering the chunks from 0 to n-1. Compare these indices against the list of already-uploaded chunks to find every chunk not yet uploaded, and push those upload calls into the request list requestList.
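The boundary arithmetic behind that loop can be sketched in isolation. chunkRange and missingChunks below are illustrative helpers (not part of the tutorial's component) that compute the byte range file.slice() receives for chunk i, and which chunks still need uploading:

```javascript
// For chunk i the byte range is [i * chunkSize, min((i + 1) * chunkSize, fileSize)).
function chunkRange (i, chunkSize, fileSize) {
  const start = i * chunkSize
  const end = Math.min((i + 1) * chunkSize, fileSize)
  return [start, end]
}

// Given the server's list of already-uploaded chunk indices (as strings,
// matching the tutorial's chunkList), return the indices still missing.
function missingChunks (chunks, chunkList) {
  const missing = []
  for (let i = 0; i < chunks; i++) {
    if (chunkList.indexOf(i + '') === -1) missing.push(i)
  }
  return missing
}

// A 25-byte file with 10-byte chunks needs Math.ceil(25 / 10) = 3 chunks;
// the last chunk is short.
console.log(chunkRange(2, 10, 25)) // [ 20, 25 ]
console.log(missingChunks(3, ['1'])) // [ 0, 2 ]
```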
async checkAndUploadChunk (file, fileMd5Value, chunkList) {
  let { chunks, upload } = this
  const requestList = []
  for (let i = 0; i < chunks; i++) {
    let exist = chunkList.indexOf(i + '') > -1
    if (!exist) {
      requestList.push(upload(i, fileMd5Value, file))
    }
  }
  console.log({ requestList })
  const result =
    requestList.length > 0
      ? await Promise.all(requestList)
          .then(result => {
            console.log({ result })
            return result.every(i => i.ok)
          })
          .catch(err => {
            return err
          })
      : true
  console.log({ result })
  return result === true
}
4. Upload fragments
Call Promise.all to upload all the chunks concurrently, passing the chunk index, the chunk data, and the file MD5 to the back end.
When the back end receives an upload request, it first checks whether a folder named after the file MD5 exists. If it does not, it creates the folder, then uses fs-extra's rename method to move the chunk from its temporary path into that folder, as follows:
When all the chunks upload successfully, the server is notified to merge them; if any chunk fails, the user is shown "Upload failed". On a re-upload, the file's upload status is fetched by its MD5. Chunks the server already holds for that MD5 have been uploaded and are skipped; chunks the server cannot find still need to be sent. The user only uploads that remaining part, yet ends up with the complete file on the server. This is the resumable (breakpoint-resume) upload of the file.
upload (i, fileMd5Value, file) {
  const { uploadProgress, chunks } = this
  return new Promise((resolve, reject) => {
    let { chunkSize } = this
    // FormData is the HTML5 API for building multipart/form-data bodies
    let end =
      (i + 1) * chunkSize >= file.size ? file.size : (i + 1) * chunkSize
    let form = new FormData()
    form.append('data', file.slice(i * chunkSize, end)) // chunk data
    form.append('total', chunks) // total number of chunks
    form.append('index', i) // chunk index
    form.append('fileMd5Value', fileMd5Value)
    services
      .uploadLarge(form)
      .then(data => {
        if (data.ok) {
          this.hasUploaded++
          uploadProgress(file)
        }
        console.log({ data })
        resolve(data)
      })
      .catch(err => {
        reject(err)
      })
  })
}
5. Upload progress
Although uploading chunks in parallel is much faster than uploading a large file in a single request, there is still a noticeable wait, so a progress indicator should be added to show the upload progress in real time.
Native JavaScript's XMLHttpRequest provides a progress event that reports the uploaded size and total size of the file. This project wraps its AJAX calls with axios, which accepts an onUploadProgress callback in its config for monitoring upload progress.
const config = {
  onUploadProgress: progressEvent => {
    var complete = (progressEvent.loaded / progressEvent.total * 100 | 0) + '%'
  }
}
services.uploadChunk(form, config)
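The "| 0" in that config is a bitwise trick that truncates the percentage to an integer. A standalone sketch (percentOf is a hypothetical helper, not part of the project):

```javascript
// "| 0" truncates toward zero, so 12.34% is displayed as "12%"
function percentOf (loaded, total) {
  return (loaded / total * 100 | 0) + '%'
}

console.log(percentOf(1234, 10000)) // "12%"
console.log(percentOf(10000, 10000)) // "100%"
```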
6. Merge shards
After all the chunks are uploaded, the front end notifies the server to merge them. On receiving this request, the server finds the folder named after the file MD5 in its upload path. As noted above, the chunk files are named by chunk index, but the chunk upload interface is asynchronous, so there is no guarantee the server received the chunks in request order. The chunk files must therefore be sorted by file name before merging; the concat-files package then joins them into the file the user uploaded. With that, the large-file upload is complete.
exports.merge = {
  validate: {
    query: {
      fileName: Joi.string()
        .trim()
        .required()
        .description('file name'),
      md5: Joi.string()
        .trim()
        .required()
        .description('file md5'),
      size: Joi.string()
        .trim()
        .required()
        .description('file size'),
    },
  },
  permission: {
    roles: ['user'],
  },
  async handler (ctx) {
    const { fileName, md5, size } = ctx.request.query
    let { name, base: filename, ext } = path.parse(fileName)
    const newFileName = randomFilename(name, ext)
    await mergeFiles(path.join(uploadDir, md5), uploadDir, newFileName, size)
      .then(async () => {
        const file = {
          key: newFileName,
          name: filename,
          mime_type: mime.getType(`${uploadDir}/${newFileName}`),
          path: `${uploadDir}/${newFileName}`,
          provider: 'oss',
          size,
          owner: ctx.state.user.id,
        }
        const key = encodeURIComponent(file.key)
          .replace(/%/g, '')
          .slice(-100)
        file.url = await uploadLocalFileToOss(file.path, key)
        file.url = getFileUrl(file)
        const f = await File.create(omit(file, 'path'))
        const files = []
        files.push(f)
        ctx.body = invokeMap(files, 'toJSON')
      })
      .catch(() => {
        throw Boom.badData()
      })
  },
}
To sum up
This UNDERCODE expert tutorial @undercodetesting
@undercodecourses
describes some practices for optimizing the upload of large files. It can be summarized in the following 4 points:
1) Blob.slice slices the file and uploads multiple chunks concurrently. After all chunks are uploaded, the server is notified to merge them, realizing chunked upload of large files;
2) The native XMLHttpRequest onprogress monitors the upload progress of the slice and obtains the file upload progress in real time;
3) spark-md5 calculates the file MD5 according to the content of the file, gets the unique identifier of the file, and binds it to the file upload status;
4) Before uploading the slices, check the uploaded slice list through the file MD5. Only the slices that have not been uploaded are uploaded during the upload to realize the resuming of the breakpoint.
This expert tutorial is written by UNDERCODE
> don't clone our tutorials
> support & share