Message-ID: <20080822090620.GY20055@kernel.dk>
Date:	Fri, 22 Aug 2008 11:06:21 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Aaron Carroll <aaronc@...ato.unsw.edu.au>
Cc:	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] cfq-iosched: fix queue depth detection

On Fri, Aug 22 2008, Aaron Carroll wrote:
> Hi Jens,
> 
> This patch fixes a bug in the hw_tag detection logic causing a huge
> performance hit under certain workloads on real queuing devices.  For
> example, an FIO load of 16k direct random reads on an 8-disk hardware
> RAID yields about 2 MiB/s on default CFQ, while noop achieves over
> 20 MiB/s.
> 
> While the solution is pretty ugly, it does have the advantage of adapting to
> queue depth changes.  Such a situation might occur if the queue depth is
> configured in userspace late in the boot process.

I don't think it's that ugly, and I prefer this logic to the existing
one in fact. Since it's a static property of the device, why did you
change it to toggle the flag back and forth instead of just setting it
once? Defaulting to tagging on is fine, otherwise we risk running into
the problem you describe where CFQ never attempts to queue > 1 request.
Then you'd want to see if the driver ever asks for more requests while
one is already in the driver; if it does, it's definitely TCQ. If not,
then it doesn't do queueing. So the interesting window is the one where
we have more requests pending yet the driver doesn't ask for them. I'd
prefer a patch that took that more into account, instead of just looking
at the past 50 samples and then toggling the hw_tag flag depending on
the behaviour in that time frame. You could easily always have a depth
of 1 there if it's a sync workload, even if the hardware can do tagged
queueing.
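
Something along these lines is what I have in mind -- a rough
user-space sketch only, with made-up names (depth_state,
update_hw_tag, HW_TAG_SAMPLES), not the actual cfq-iosched code:

#include <stdbool.h>

#define HW_TAG_SAMPLES	50	/* illustrative sample threshold */

/* Made-up state mirroring what cfq_data would track. */
struct depth_state {
	int rq_in_driver;	/* requests dispatched to the driver */
	int rq_queued;		/* requests pending in the scheduler */
	int window_samples;	/* samples in the interesting window */
	bool hw_tag;		/* device does tagged queueing */
	bool decided;		/* detection has concluded */
};

static void init_depth_state(struct depth_state *d)
{
	/* Default to tagging on until proven otherwise. */
	*d = (struct depth_state){ .hw_tag = true };
}

/*
 * Call on each dispatch.  Once decided, never toggle back, since
 * queueing is a static property of the device.
 */
static void update_hw_tag(struct depth_state *d)
{
	if (d->decided)
		return;

	/* Driver holds more than one request at once: definitely TCQ. */
	if (d->rq_in_driver > 1) {
		d->hw_tag = true;
		d->decided = true;
		return;
	}

	/*
	 * The interesting window: more requests are pending, yet the
	 * driver only asks for the one it holds.  Samples outside this
	 * window prove nothing -- a sync workload keeps the depth at 1
	 * even on hardware that can queue.
	 */
	if (d->rq_queued > 1 && d->rq_in_driver == 1 &&
	    ++d->window_samples >= HW_TAG_SAMPLES) {
		d->hw_tag = false;
		d->decided = true;
	}
}

Only the transition from the optimistic default to the observed
behaviour remains; nothing flips back and forth afterwards.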

-- 
Jens Axboe

