These are my doubts.
As we know, the usual steps to upload a file are: the user uploads the file, the file lands in the server's temp folder, and then the script moves it to the final folder (a disk I/O operation).
With your script, the file is sliced up and the small chunks are sent to the server one by one, then merged on the server side. This is a great method, but there is something I don't understand.
If I upload a 1 GB file and it is divided into, say, one hundred chunks, those hundred chunks have to be merged on the server side and moved to the final folder. Will that merge take up a lot of memory and CPU time on the server?
Thank you :)
The script APPENDS the chunks: as soon as a chunk arrives it is appended to the main file in "real time", so to speak. There is no separate merge step that eats memory or CPU time on the server; memory consumption is only ever about the size of the current chunk.
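To make that concrete, here is a minimal sketch of such a receiving endpoint in Python. This is not the actual script; it assumes Flask as the framework, and the endpoint name, upload directory, and form field names are all hypothetical. The point is that each request opens the destination file in append mode and writes only the current chunk, so memory use stays at roughly one chunk:

    import os
    from flask import Flask, request

    app = Flask(__name__)
    UPLOAD_DIR = "/var/uploads"  # assumed destination folder

    @app.route("/upload_chunk", methods=["POST"])  # hypothetical endpoint
    def upload_chunk():
        # The client sends the target file name plus one chunk per request.
        filename = os.path.basename(request.form["filename"])
        chunk = request.files["chunk"].read()  # only this chunk is in memory

        # Append the chunk to the end of the growing file.
        # No merge pass is needed afterwards: the file is built as chunks arrive.
        with open(os.path.join(UPLOAD_DIR, filename), "ab") as f:
            f.write(chunk)

        return "OK", 200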
Let's say you have an 18 MB file and the upload stops halfway for whatever reason. On the server you will then have a single 9 MB file, not a lot of small "chunks". So if you're worried that the server has to merge all the little chunks after the upload, don't be; that's not how it works.
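As a quick illustration (again just a sketch, with an assumed file path), you can see how much of the file actually arrived simply by checking the size of that one partial file on disk:

    import os

    partial = "/var/uploads/bigfile.bin"  # assumed path of the interrupted upload
    received = os.path.getsize(partial)   # roughly 9 MB if an 18 MB upload stopped halfway
    print(f"{received} bytes received so far")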