> Direct I/O means no more fsync: no more complexity via background flushes and optimal scheduling of syncs. There's no kernel overhead from copying and coalescing. It essentially provides the performance, control, and simplicity of issuing raw 1:1 I/O requests.
Not true: you still need fsync with direct I/O to ensure durability across power loss. Many drives have volatile write caches, which means acknowledged writes may still be sitting in that cache rather than on stable media. So maybe the perf is wildly better because you’re sacrificing durability?
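Rough sketch of what I mean (Linux-only; the path and block size are made up, and the file has to live on a filesystem/device that actually supports direct I/O). O_DIRECT bypasses the page cache, but you still have to issue the flush yourself:

```cpp
// Sketch, not the article's code: O_DIRECT skips the kernel page cache, but the
// drive's volatile write cache can still hold the data after write() returns.
#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>
#include <cstring>

int main() {
    // O_DIRECT is Linux-specific; the path here is purely for illustration.
    int fd = open("/mnt/data/direct-io-demo", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) return 1;

    // Direct I/O needs aligned buffers and lengths (often 512 B or 4 KiB).
    constexpr size_t kBlock = 4096;
    void* buf = nullptr;
    if (posix_memalign(&buf, kBlock, kBlock) != 0) return 1;
    std::memset(buf, 'x', kBlock);

    if (write(fd, buf, kBlock) != static_cast<ssize_t>(kBlock)) return 1;

    // This is the part direct I/O does NOT make optional: without the flush,
    // a power loss can still drop the write even though write() succeeded.
    if (fdatasync(fd) != 0) return 1;

    std::free(buf);
    return close(fd) == 0 ? 0 : 1;
}
```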
That’s a lot of work creating a whole system that stores data on a raw block device. It would be nice to see this compared to… a filesystem. XFS, ZFS and btrfs are pretty popular.
I don't quite understand the point. Why would anybody use S3 then?
> Despite serving from same-region datacenters 2 ms from the user, S3 would take 30-200 ms to respond to each request.
200 ms seems fairly reasonable to me once we factor in all of the other aspects of S3. A lot of machines would have to die at Amazon for your data to be at risk.
Similar systems include Facebook's Haystack and its open source equivalent, SeaweedFS.
Interesting project, but the lack of S3 protocol compatibility and the fact that it seems to YOLO your data mean it's not acceptable for many.
And it means it is acceptable for many others. There is a whole world outside of S3, you know.
It's a bit weird to present it as an alternative to S3 when it looks like a persistent cache or k/v store. A benchmark against Redis would have been nice, for example. The RocksDB benchmark is also questionable, since performance depends a lot on how you configure it, and the article's claim that RocksDB doesn't support range reads doesn't give me confidence in the results.
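For reference, range reads in RocksDB are just iterator seeks. Something like the following (DB path and key prefix are invented for the example) is roughly what I'd expect a fair benchmark to do:

```cpp
// Sketch of a RocksDB range read: seek to the start of a key range and scan
// forward while the prefix matches. Path and keys are made up for illustration.
#include <iostream>
#include <memory>
#include <rocksdb/db.h>

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;

    rocksdb::DB* raw = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/range-demo", &raw);
    if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }
    std::unique_ptr<rocksdb::DB> db(raw);

    // A few keys sharing a prefix, standing in for small-image blobs.
    db->Put(rocksdb::WriteOptions(), "img:0001", "a");
    db->Put(rocksdb::WriteOptions(), "img:0002", "b");
    db->Put(rocksdb::WriteOptions(), "other:01", "c");

    // The range read: Seek() to the first key >= "img:" and iterate while
    // the key still carries the prefix.
    std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(rocksdb::ReadOptions()));
    for (it->Seek("img:"); it->Valid() && it->key().starts_with("img:"); it->Next()) {
        std::cout << it->key().ToString() << " -> " << it->value().ToString() << "\n";
    }
    return it->status().ok() ? 0 : 1;
}
```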
Also, for the described issue of small images for a frontend, nobody would serve directly from S3 without a caching layer on top.
It's an interesting read for fun, but I'm not sure what it solves in the end.