Date:	Mon, 26 Oct 2009 14:20:59 +0100
From:	Corrado Zoccolo <>
To:	Jens Axboe <>
Cc:	Jeff Moyer <>,
Subject: Re: [PATCH/RFC 0/4] cfq: implement merging and breaking up of 

Hi Jens
On Mon, Oct 26, 2009 at 12:40 PM, Jens Axboe <> wrote:
> On Sat, Oct 24 2009, Corrado Zoccolo wrote:
>> You identified the problem in the idling logic, that reduces the
>> throughput in this particular scenario, in which various threads or
>> processes issue (in random order) the I/O requests with different I/O
>> contexts on behalf of a single entity.
>> In this case, any idling between those threads is detrimental.
>> Ideally, such cases should be already spotted, since think time should
>> be high for such processes, so I wonder if this indicates a problem in
>> the current think time logic.
> That isn't necessarily true, it may just as well be that there's very
> little think time (don't see the connection here). A test case to
> demonstrate this would be a number of processes/threads splitting a
> sequential read of a file between them.

Jeff said that the huge performance drop was not observable with noop
or any other work-conserving scheduler.
Since noop doesn't enforce any I/O ordering, but just ensures that any
I/O passes through ASAP, this means that the biggest problem is due to
idling, while the increased seekiness has only a small impact.

So your test case doesn't actually match the observations: each thread
will always have new requests to submit (so idling doesn't penalize it
too much here), while the seekiness introduced will be the most
important factor.
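For reference, the scenario you describe could be emulated with
something like the following sketch (illustrative only, not a real
benchmark; file size, chunk size and thread count are made-up
parameters): several threads split one sequential read of a file
between them, round-robin by chunk, so each thread's own access
pattern looks seeky even though the file as a whole is read in order.

```python
# Sketch: N threads splitting a sequential read of one file.
# Chunk assignment is round-robin; sizes/names are illustrative.
import os
import tempfile
import threading

CHUNK = 4096
NUM_THREADS = 4

# Create a small temporary file to read back.
data = os.urandom(CHUNK * 16)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

results = {}

def reader(tid):
    # Each thread reads every NUM_THREADS-th chunk: per-thread the
    # pattern is seeky, but collectively the file is read sequentially.
    with open(path, "rb") as fh:
        off = tid * CHUNK
        chunks = []
        while off < len(data):
            fh.seek(off)
            chunks.append(fh.read(CHUNK))
            off += NUM_THREADS * CHUNK
        results[tid] = b"".join(chunks)

threads = [threading.Thread(target=reader, args=(i,))
           for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.unlink(path)
```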

I think the real test case is something like (a single dd through NFS
via UDP):
* there is a single thread that submits a small number of requests
(e.g. 2) to a work queue, and waits for their completion before
submitting new ones
* there is a thread pool that executes those requests (1 thread runs 1
request) and signals back completion; threads in the pool are
selected randomly.
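The dispatcher/worker-pool pattern above can be sketched as follows
(hypothetical names and parameters; this only models the request flow,
not the actual nfsd code path): a single dispatcher keeps a fixed
number of requests in flight, each request is handed to a randomly
chosen worker, and the dispatcher waits for a completion before
submitting the next one.

```python
# Sketch of the dispatcher/worker-pool pattern described above.
# Names, pool size and request counts are illustrative assumptions.
import queue
import random
import threading

NUM_WORKERS = 8       # size of the thread pool (assumption)
IN_FLIGHT = 2         # requests the dispatcher keeps outstanding
TOTAL_REQUESTS = 20

done = queue.Queue()  # workers signal completion here

def worker(inbox):
    # Each worker runs one request at a time, then signals completion.
    while True:
        req = inbox.get()
        if req is None:
            return
        # ... the actual I/O would be issued here ...
        done.put(req)

# One private inbox per worker, so the dispatcher can pick one at random.
inboxes = [queue.Queue() for _ in range(NUM_WORKERS)]
threads = [threading.Thread(target=worker, args=(q,)) for q in inboxes]
for t in threads:
    t.start()

completed = []
submitted = 0
outstanding = 0
while len(completed) < TOTAL_REQUESTS:
    # Keep IN_FLIGHT requests outstanding; the executing worker
    # (hence the originating I/O context) is chosen at random.
    while outstanding < IN_FLIGHT and submitted < TOTAL_REQUESTS:
        random.choice(inboxes).put(submitted)
        submitted += 1
        outstanding += 1
    completed.append(done.get())  # wait for a completion before refilling
    outstanding -= 1

for q in inboxes:
    q.put(None)                   # shut the pool down
for t in threads:
    t.join()
```

Because consecutive requests land on randomly chosen workers, the
scheduler sees the I/O arrive from different contexts even though a
single entity is driving it.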

In this case, the average think time should be > the average access
time, as soon as the number of threads exceeds


> --
> Jens Axboe