Improving file catalogue
Currently files must fit into a 256-char array. Smaller files can be handled with strTrim, but bigger files cannot be read entirely.
I just came up with a fix. Oops!
Wouldn't it be better to read it in N-byte parts? (for some constant N)
Yeah
But what happened before is that it would only read 256 bytes
I should have the kernel read 256 bytes per chunk
and multiple chunks per file
(Correct?)
I think so. The current implementation allocates memory for the whole file. For small files it's good enough (and without multitasking there is no reason to ever "cat" bigger files), but when/if the OS gets real filesystem support, the current implementation will break.
K got it. So 256 chars per chunk?
For now 256 should be ok. It can be changed if needed. If you really wanted, you could read byte by byte, but it would be much slower.
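The chunked read being discussed could look roughly like this. This is a minimal userspace C sketch (plain stdio, not the project's actual kernel API): `cat_chunked` and `CHUNK_SIZE` are hypothetical names, and a kernel-side version would go through the OS's own read call instead of `fread`. The point is just that the buffer never needs to be larger than one chunk, no matter how big the file is.

```c
#include <stdio.h>

#define CHUNK_SIZE 256  /* bytes per chunk, matching the 256-byte buffer above */

/* Hypothetical sketch: read a file CHUNK_SIZE bytes at a time and print
 * each chunk as it arrives, so memory for the whole file is never
 * allocated. Returns 0 on success, -1 if the file cannot be opened. */
int cat_chunked(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    char buf[CHUNK_SIZE];
    size_t n;

    /* fread returns how many bytes it actually read; the final chunk
     * of a file is usually short, and 0 means end-of-file. */
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        fwrite(buf, 1, n, stdout);

    fclose(f);
    return 0;
}
```

A shell's `cat` command handler would then just call `cat_chunked(path)` once per file instead of slurping the whole file into one allocation.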
I tried it my way (for some reason), and a page fault appears!
):
I think there is a partial fix in Barteks2x's branch (dunno)
I'll look at his fork
k