Results: 52 comments by Håkan Johansson

When this happens, it would be nice if the chunkserver at least reported which HDD is stalling all the workers. Even if the chunkserver cannot protect itself, at least the monitoring...

One way to avoid ending up in the non-terminable I/O operations could be to 'sleep' (waiting for an active-I/O count to drop below a threshold) before entering those potentially stalling operations if...
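To make the idea concrete, here is a minimal user-space sketch of such a gate; the names (`IoThrottle`, `enter`, `leave`) are hypothetical, not anything from the MooseFS code base, and a real chunkserver-side implementation would of course be in C:

```python
import threading

class IoThrottle:
    """Sketch: block a new, potentially stalling operation while too many
    I/O operations are already in flight. All names here are made up."""

    def __init__(self, max_active):
        self.max_active = max_active
        self.active = 0
        self.cond = threading.Condition()

    def enter(self, timeout=None):
        # 'Sleep' until the active-I/O count drops below the threshold.
        with self.cond:
            ok = self.cond.wait_for(
                lambda: self.active < self.max_active, timeout=timeout)
            if ok:
                self.active += 1
            return ok  # False: gave up after timeout instead of stalling

    def leave(self):
        with self.cond:
            self.active -= 1
            self.cond.notify()
```

The point of the timeout is exactly the hedge described above: if the count never drops, the caller can report the disk as stalled instead of joining the pile-up.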

If I understand @acid-maker's responses correctly, the MFS client will only have two outstanding read requests at the same time? I.e., for single-threaded reading it can only achieve...
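As a workaround on top of an unmodified client, one can get more requests in flight from user space by reading several file offsets concurrently. A rough sketch (whether it actually helps depends on how the client maps the reads to chunks and chunkservers, which I am only guessing at here):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_read(path, chunk_size=64 * 1024 * 1024, workers=8):
    """Sketch: keep several read requests outstanding by issuing reads
    for different offsets from a thread pool, instead of one sequential
    stream that is capped by the client's in-flight request limit."""
    size = os.path.getsize(path)
    offsets = range(0, size, chunk_size)

    def read_at(off):
        with open(path, 'rb') as f:
            f.seek(off)
            return f.read(min(chunk_size, size - off))

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves offset order, so the join reassembles the file.
        return b''.join(pool.map(read_at, offsets))
```

With `chunk_size` matching the MFS chunk size, each worker's stream would ideally land on a different chunkserver.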

For my use-case it would indeed be interesting to use much more of the available disk bandwidth for one or a few readers. To just try this out, I made...

Compression could leverage the very handy storage classes, such that compression happens lazily: a user could request that data be cheaply compressed using e.g. lz4 when the chunks get into...
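The policy itself is simple; here is a toy file-level sketch of "compress only once the data has aged into the keep/archive tier". zlib at a low level stands in for lz4 (which is not in the Python stdlib), and in MooseFS this would naturally happen per chunk on the chunkserver, not per file:

```python
import os
import time
import zlib

def lazily_compress(path, min_age_days=30, level=1):
    """Sketch: leave 'hot' data alone, cheaply compress it once it is
    old enough to count as archived. Purely illustrative policy code."""
    age = time.time() - os.path.getmtime(path)
    if age < min_age_days * 86400:
        return False  # still hot: keep uncompressed for fast access
    with open(path, 'rb') as f:
        data = f.read()
    # level=1 favors speed over ratio, in the spirit of lz4.
    with open(path + '.z', 'wb') as f:
        f.write(zlib.compress(data, level))
    return True
```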

This is from #132 by @trabucayre: > My worry about the spiOverJtag directory size is primarily about the download/clone step. Xilinx bitstreams, with the compress option enabled, have an acceptable size, but...

The runtime penalty for decompression would be very small. E.g., zcat of a 233 kB .gz file containing all the .bin and .rbf files (31 MB uncompressed) takes 0.14 s on a...
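Anyone can repeat that measurement with a few lines of stdlib Python (this times the same work `zcat file.gz > /dev/null` does; the actual numbers will of course vary by machine):

```python
import gzip
import time

def gzip_decompress_rate(path):
    """Sketch: time decompression of one .gz file and report how many
    bytes came out, i.e. roughly what timing zcat measures."""
    t0 = time.perf_counter()
    out_bytes = 0
    with gzip.open(path, 'rb') as f:
        while True:
            block = f.read(1 << 20)  # stream in 1 MiB pieces
            if not block:
                break
            out_bytes += len(block)
    return out_bytes, time.perf_counter() - t0
```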

I did not want to suggest putting all bitstreams in one compressed file; that was only to test the decompression speed.

Related question: in section 2.9 of the storage class [manual](https://moosefs.com/Content/Downloads/moosefs-storage-classes-manual.pdf), it sounds like `strict` mode applies to the original chunk creation, and not to the migration from create to keep or keep...

@chogata Thanks for the more detailed info. What would then happen to data marked for several archive copies if all such servers are permanently full in strict mode, e.g. `-C...