Date:	Thu, 12 Nov 2009 13:16:48 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Corrado Zoccolo <czoccolo@...il.com>
Cc:	Linux-Kernel <linux-kernel@...r.kernel.org>,
	Jeff Moyer <jmoyer@...hat.com>, aaronc@...ato.unsw.edu.au
Subject: Re: [RFC, PATCH] cfq-iosched: remove redundant queuing detection
	code

On Tue, Nov 10 2009, Corrado Zoccolo wrote:
> On Tue, Nov 10, 2009 at 4:14 PM, Jens Axboe <jens.axboe@...cle.com> wrote:
> > On Tue, Nov 10 2009, Corrado Zoccolo wrote:
> >> The core block layer already has code to detect presence of command
> >> queuing devices. We convert cfq to use that instead of re-doing the
> >> computation.
> >
> > There is the major difference that the CFQ variant is dynamic and the
> > block layer one is not. This change came from Aaron some time ago IIRC,
> > see commit 45333d5. It's a bit of a chicken and egg problem.
> 
> The comment by Aaron:
>     CFQ's detection of queueing devices assumes a non-queuing device and
>     detects if the queue depth reaches a certain threshold.  Under some
>     workloads (e.g. synchronous reads), CFQ effectively forces a unit
>     queue depth, thus defeating the detection logic.  This leads to poor
>     performance on queuing hardware, since the idle window remains
>     enabled.
> 
> makes me think that the dynamic-off detection in cfq may really be
> buggy (BTW this could explain the bad results on SSD Jeff observed
> before my patch set).
> The problem is that once the hw_tag is 0, it is difficult for it to
> become 1 again, as explained by Aaron, since cfq will hardly send more
> than 1 request at a time. My patch set fixes this for SSDs (the seeky
> readers will still be sent without idling, and if they are enough, the
> logic will see a large enough depth to reconsider the initial
> decision).
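
For reference, that depth-threshold style of detection amounts to something
like the sketch below (simplified, with made-up names; not the actual
cfq-iosched.c code):

#define HW_TAG_DEPTH_MIN	4	/* depth that counts as "queuing" */
#define HW_TAG_SAMPLES		50	/* re-evaluate after this many dispatches */

struct depth_detect {
	int hw_tag;		/* 0 = treat the device as non-queuing */
	int samples;
	int depth_peak;		/* max requests seen in the driver */
};

static void update_hw_tag(struct depth_detect *d, int rqs_in_driver)
{
	if (rqs_in_driver > d->depth_peak)
		d->depth_peak = rqs_in_driver;

	if (++d->samples < HW_TAG_SAMPLES)
		return;

	/* If the peak depth crossed the threshold, assume queuing hardware. */
	d->hw_tag = (d->depth_peak >= HW_TAG_DEPTH_MIN);
	d->samples = 0;
	d->depth_peak = 0;
}

The trouble is in the last step: once hw_tag is 0, CFQ idles and rarely has
more than one request in the driver, so depth_peak never reaches the
threshold again and the 0 sticks.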
> 
> So the only sound way to do the detection is to start in an
> indeterminate state, in which CFQ behaves as if hw_tag = 1, and then,
> if over a long observation period we never see a large depth, switch
> to hw_tag = 0; otherwise we stick with hw_tag = 1, without reconsidering
> it.

That is probably the better way to do it; as I said earlier, it is indeed
a chicken and egg problem. Care to patch something like that up?
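
Something along these lines, perhaps (made-up names, just a sketch of the
tri-state idea rather than a tested patch against cfq-iosched.c):

enum hw_tag_state {
	HW_TAG_UNKNOWN,		/* no decision yet: behave as if hw_tag = 1 */
	HW_TAG_ON,		/* queuing device, decision is final */
	HW_TAG_OFF,		/* non-queuing device, decision is final */
};

#define HW_TAG_DEPTH_MIN	4	/* depth that counts as "queuing" */
#define HW_TAG_OBSERVE		500	/* the "long observation period" */

struct hw_tag_detect {
	enum hw_tag_state state;
	int samples;
};

static int hw_tag_enabled(const struct hw_tag_detect *d)
{
	/*
	 * Indeterminate behaves like hw_tag = 1, so CFQ never throttles
	 * itself to a unit queue depth while it is still observing.
	 */
	return d->state != HW_TAG_OFF;
}

static void hw_tag_observe(struct hw_tag_detect *d, int rqs_in_driver)
{
	if (d->state != HW_TAG_UNKNOWN)
		return;			/* decision already latched */

	if (rqs_in_driver >= HW_TAG_DEPTH_MIN) {
		d->state = HW_TAG_ON;
		return;
	}

	if (++d->samples >= HW_TAG_OBSERVE)
		d->state = HW_TAG_OFF;
}

The difference from the current logic is that the starting state never
forces a unit queue depth, and once a decision is latched it is not
revisited.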

> I think the correct logic could be pushed to the blk-core, by
> introducing also an indeterminate bit.

And I still don't think that is a good idea. The block layer case cares
more about the capability side ("is this a good SSD?") whereas the CFQ
case incorporates process behaviour as well. I'll gladly take patches to
improve the CFQ logic.

-- 
Jens Axboe
