Date:	Mon, 26 Oct 2009 14:28:16 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Corrado Zoccolo <czoccolo@...il.com>
Cc:	Jeff Moyer <jmoyer@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH/RFC 0/4] cfq: implement merging and breaking up of
	cfq_queues

On Mon, Oct 26 2009, Corrado Zoccolo wrote:
> Hi Jens
> On Mon, Oct 26, 2009 at 12:40 PM, Jens Axboe <jens.axboe@...cle.com> wrote:
> > On Sat, Oct 24 2009, Corrado Zoccolo wrote:
> >> You identified the problem in the idling logic, which reduces the
> >> throughput in this particular scenario, where various threads or
> >> processes issue (in random order) I/O requests with different I/O
> >> contexts on behalf of a single entity.
> >> In this case, any idling between those threads is detrimental.
> >> Ideally, such cases should already be spotted, since the think time
> >> should be high for such processes, so I wonder whether this indicates
> >> a problem in the current think time logic.
> >
> > That isn't necessarily true; it may just as well be that there's very
> > little think time (I don't see the connection here). A test case to
> > demonstrate this would be a number of processes/threads splitting a
> > sequential read of a file between them.
> 
> Jeff said that the huge performance drop was not observable with noop
> or any other work-conserving scheduler.
> Since noop doesn't enforce any I/O ordering, but just ensures that any
> I/O passes through ASAP, this means that the biggest problem is due to
> idling, while the increased seekiness has only a small impact.

Not true, noop still does merging. And even if it didn't, if you have
queuing on the device side, things may still work out. The key is that
you actually send those requests off to the device, which the idling
prevents for CFQ. The biggest problem is of course the idling: if we
didn't idle between the cooperating processes, there would not be an
issue. And that is exactly what Jeff has done, merge those queues.

The test app is of course timing sensitive to some degree, since if the
threads get too far out of sync then things will go down the drain.
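
For reference, a rough and untested sketch of that kind of test app is
below (thread count, chunk size, and the interleaving pattern are
arbitrary choices here, not Jeff's actual program): each pthread reads
every NR_THREADS-th chunk of the same file, and since pthreads don't
share an io_context (no CLONE_IO), each reader ends up in its own
cfq_queue even though the aggregate access pattern is sequential.

/*
 * Illustrative only: N threads split a sequential read of one file.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_THREADS	4
#define CHUNK_SIZE	(64 * 1024)

static const char *path;

static void *reader(void *arg)
{
	long id = (long) arg;
	char *buf = malloc(CHUNK_SIZE);
	int fd = open(path, O_RDONLY);
	off_t off = (off_t) id * CHUNK_SIZE;

	if (fd < 0 || !buf)
		return NULL;

	/* read every NR_THREADS-th chunk, interleaved with the others */
	while (pread(fd, buf, CHUNK_SIZE, off) > 0)
		off += (off_t) NR_THREADS * CHUNK_SIZE;

	close(fd);
	free(buf);
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t t[NR_THREADS];
	long i;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	path = argv[1];

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&t[i], NULL, reader, (void *) i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(t[i], NULL);

	return 0;
}

If the threads stay roughly in lockstep, the combined stream is close
to sequential; once they drift apart, each queue looks seeky on its
own, which is where the idling decision starts to matter.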

You could argue that decreasing the seekiness threshold would "fix"
that, but that would surely not work for other cases where an app is
mostly sequential but has to fetch metadata and such.

> So your test case doesn't actually match the observations: each thread
> will always have new requests to submit (so idling doesn't penalize
> too much here), while the seekiness introduced will be the most
> important factor.
> 
> I think the real test case is something like (single dd through nfs via udp):
> * there is a single thread that submits a small number of requests
> (e.g. 2) to a work queue, and waits for their completion before
> submitting new requests
> * there is a thread pool that executes those requests (1 thread runs 1
> request) and signals back completion. Threads in the pool are
> selected randomly.

Same thing, you just get rid of the timing constraint. A test case would
ensure that as well.
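
To make that concrete, here is a rough pthread sketch of the dispatch
pattern described above (not the actual dd/nfsd path; window size,
worker count, and the plain file I/O are stand-ins): one submitter
keeps a window of two requests in flight and waits for both before
refilling, while a pool of workers issues them from whichever thread
happens to wake up first, so consecutive chunks of one logical stream
are read from different io_contexts.

/*
 * Illustrative only: bounded-window submitter plus worker pool.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define WINDOW		2
#define NR_WORKERS	8
#define CHUNK_SIZE	(32 * 1024)
#define NR_CHUNKS	1024

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_cv = PTHREAD_COND_INITIALIZER;
static pthread_cond_t done_cv = PTHREAD_COND_INITIALIZER;
static off_t queue[WINDOW];
static int queued, completed, stopping;
static int fd;

static void *worker(void *arg)
{
	char buf[CHUNK_SIZE];

	(void) arg;
	for (;;) {
		off_t off;

		pthread_mutex_lock(&lock);
		while (!queued && !stopping)
			pthread_cond_wait(&work_cv, &lock);
		if (!queued && stopping) {
			pthread_mutex_unlock(&lock);
			return NULL;
		}
		off = queue[--queued];
		pthread_mutex_unlock(&lock);

		/* the I/O is issued from whichever pool thread got here */
		if (pread(fd, buf, CHUNK_SIZE, off) < 0)
			perror("pread");

		pthread_mutex_lock(&lock);
		completed++;
		pthread_cond_signal(&done_cv);
		pthread_mutex_unlock(&lock);
	}
}

int main(int argc, char **argv)
{
	pthread_t t[NR_WORKERS];
	off_t next = 0;
	int i, n;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	for (i = 0; i < NR_WORKERS; i++)
		pthread_create(&t[i], NULL, worker, NULL);

	for (n = 0; n < NR_CHUNKS; n += WINDOW) {
		pthread_mutex_lock(&lock);
		completed = 0;
		/* submit a small batch, then wait for all of it */
		for (i = 0; i < WINDOW; i++)
			queue[queued++] = (next++) * CHUNK_SIZE;
		pthread_cond_broadcast(&work_cv);
		while (completed < WINDOW)
			pthread_cond_wait(&done_cv, &lock);
		pthread_mutex_unlock(&lock);
	}

	pthread_mutex_lock(&lock);
	stopping = 1;
	pthread_cond_broadcast(&work_cv);
	pthread_mutex_unlock(&lock);
	for (i = 0; i < NR_WORKERS; i++)
		pthread_join(t[i], NULL);
	close(fd);
	return 0;
}

Run against a file on the test device, this should reproduce the
"single logical stream, many io_contexts" shape without involving NFS
at all.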

-- 
Jens Axboe

