Message-ID: <20091228033554.GB15242@sli10-desk.sh.intel.com>
Date:	Mon, 28 Dec 2009 11:35:54 +0800
From:	Shaohua Li <shaohua.li@...el.com>
To:	Corrado Zoccolo <czoccolo@...il.com>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
	"Zhang, Yanmin" <yanmin.zhang@...el.com>
Subject: Re: [RFC]cfq-iosched: quantum check tweak

On Fri, Dec 25, 2009 at 05:44:40PM +0800, Corrado Zoccolo wrote:
> On Fri, Dec 25, 2009 at 10:10 AM, Shaohua Li <shaohua.li@...el.com> wrote:
> > Currently a queue can only dispatch up to 4 requests if there are other queues.
> > This isn't optimal; the device can handle more requests, for example, AHCI can
> > handle 31 requests. I understand the limit is there for fairness, but we could
> > do some tweaks:
> > 1. if the queue still has a lot of slice left, it sounds like we could ignore the limit
> ok. You can even scale the limit proportionally to the remaining slice
> (see below).
I can't see what scaling you mean below. cfq_slice_used_soon() means that the
dispatched requests can finish before the slice is used up, so other queues will not be
impacted. I thought/hoped that one cfq_slice_idle interval is enough time to finish the
dispatched requests.
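To make this concrete, here is a rough userspace model of the check being discussed.
This is not the actual patch: the constants, the avg_service_ms estimate, and the
helper names (may_dispatch, slice_used_soon) are illustrative assumptions only. The
idea is that a queue may exceed cfq_quantum as long as the requests already in flight
are expected to finish within the remaining slice, and within roughly one
cfq_slice_idle, so other queues are not delayed more than an idle window:

/*
 * Hypothetical userspace model of the quantum-check idea in this thread.
 * NOT the real cfq-iosched code; constants and helper names are
 * illustrative assumptions only.
 */
#include <stdbool.h>
#include <stdio.h>

#define CFQ_QUANTUM        4    /* default per-queue dispatch limit        */
#define CFQ_SLICE_IDLE_MS  8    /* assumed cfq_slice_idle, in milliseconds */

struct queue_state {
	int dispatched;         /* requests already sent to the device       */
	int avg_service_ms;     /* measured average service time per request */
	int slice_left_ms;      /* time remaining in this queue's slice      */
};

/*
 * Rough analogue of cfq_slice_used_soon(): would dispatching one more
 * request push the in-flight work past the remaining slice, or past one
 * cfq_slice_idle worth of time (so another queue would wait longer than
 * it would for an ordinary idle window)?
 */
static bool slice_used_soon(const struct queue_state *q)
{
	int est_ms = (q->dispatched + 1) * q->avg_service_ms;

	return est_ms > q->slice_left_ms || est_ms > CFQ_SLICE_IDLE_MS;
}

/* Tweak 1: allow exceeding the quantum while the slice budget permits it. */
static bool may_dispatch(const struct queue_state *q, bool other_queues)
{
	if (!other_queues)
		return true;            /* no fairness concern              */
	if (q->dispatched < CFQ_QUANTUM)
		return true;            /* within the normal limit          */
	return !slice_used_soon(q);     /* beyond the limit, if time allows */
}

int main(void)
{
	struct queue_state q = { .dispatched = 4, .avg_service_ms = 1,
				 .slice_left_ms = 40 };

	printf("may dispatch beyond quantum: %s\n",
	       may_dispatch(&q, true) ? "yes" : "no");
	return 0;
}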
 
> > 2. we could keep the check only when cfq_latency is on. Users who don't care
> > about latency should be happy to have the device pipeline kept full.
> I wouldn't overload low_latency with this meaning. You can obtain the
> same by setting the quantum to 32.
As this impacts fairness, I naturally thought we could use low_latency. I'll remove
the check in the next post.
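For reference, quantum is already a runtime tunable, so the same effect needs no code
change. A minimal sketch of setting it to 32 from userspace ("sda" is only a
placeholder device name; equivalent to echo 32 > /sys/block/sda/queue/iosched/quantum):

/*
 * Minimal sketch: raise the CFQ dispatch quantum to 32 via sysfs.
 * "sda" is a placeholder device name -- adjust for the device under test.
 */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/sda/queue/iosched/quantum";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "32\n");
	return fclose(f) == 0 ? 0 : 1;
}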

> > I have a test with random direct I/O from two threads, each keeping 32 requests
> > in flight at a time:
> > without the patch: 78 MB/s
> > with tweak 1: 138 MB/s
> > with both tweaks and low_latency disabled: 156 MB/s
> 
> Please also test with competing seq/random(depth1)/async workloads,
> and also measure the latencies the patch introduces.
depth1 should be OK: if the device can only take one request at a time, it should not
pull more requests from the I/O scheduler.
I'll do more checks. The time budget is hard to choose (I chose cfq_slice_idle here) to
balance throughput and latency. Do we have criteria to measure this, i.e. if the patch
passes some set of tests, then it's OK for latency?
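To make the latency question concrete, below is a minimal sketch of the kind of load
generator being discussed: random direct 4k reads with 32 requests kept in flight,
recording per-request completion latency. It is not the benchmark used for the numbers
above; the file name, sizes and round count are arbitrary assumptions, and it expects a
pre-created test file of at least 1 GiB. Build with -laio and run two instances to get
the two competing queues:

/*
 * Illustrative random direct-I/O load generator with per-request latency.
 * NOT the benchmark used for the numbers in this thread; the file name,
 * sizes and round count are arbitrary.  Build: gcc -O2 randio.c -laio
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define QUEUE_DEPTH 32          /* 32 requests in flight, as in the test */
#define BLOCK_SIZE  4096
#define FILE_SIZE   (1024LL * 1024 * 1024)
#define ROUNDS      1000

static long long now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	struct iocb cbs[QUEUE_DEPTH], *cbp[QUEUE_DEPTH];
	struct io_event events[QUEUE_DEPTH];
	long long submit_us[QUEUE_DEPTH];
	long long total_lat = 0, max_lat = 0, completed = 0;
	void *bufs[QUEUE_DEPTH];
	io_context_t ctx = 0;
	int fd, i;

	fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (io_setup(QUEUE_DEPTH, &ctx)) {
		perror("io_setup");
		return 1;
	}
	for (i = 0; i < QUEUE_DEPTH; i++)
		if (posix_memalign(&bufs[i], BLOCK_SIZE, BLOCK_SIZE))
			return 1;

	for (int round = 0; round < ROUNDS; round++) {
		/* Submit a full batch of 32 random 4k reads. */
		for (i = 0; i < QUEUE_DEPTH; i++) {
			long long off = (rand() % (FILE_SIZE / BLOCK_SIZE))
					* (long long)BLOCK_SIZE;

			io_prep_pread(&cbs[i], fd, bufs[i], BLOCK_SIZE, off);
			cbs[i].data = (void *)(long)i;
			cbp[i] = &cbs[i];
			submit_us[i] = now_us();
		}
		if (io_submit(ctx, QUEUE_DEPTH, cbp) != QUEUE_DEPTH) {
			perror("io_submit");
			return 1;
		}

		/* Reap completions as they arrive and record latencies. */
		int reaped = 0;
		while (reaped < QUEUE_DEPTH) {
			int got = io_getevents(ctx, 1, QUEUE_DEPTH - reaped,
					       events, NULL);
			long long t = now_us();

			if (got <= 0)
				return 1;
			for (i = 0; i < got; i++) {
				int idx = (int)(long)events[i].data;
				long long lat = t - submit_us[idx];

				total_lat += lat;
				if (lat > max_lat)
					max_lat = lat;
				completed++;
			}
			reaped += got;
		}
	}

	printf("requests: %lld  avg latency: %lld us  max latency: %lld us\n",
	       completed, completed ? total_lat / completed : 0, max_lat);
	io_destroy(ctx);
	close(fd);
	return 0;
}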

Thanks,
Shaohua
