Date:	Tue, 17 Nov 2009 12:44:41 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	"Alan D. Brunelle" <Alan.Brunelle@...com>
Cc:	Corrado Zoccolo <czoccolo@...il.com>, linux-kernel@...r.kernel.org,
	jens.axboe@...cle.com
Subject: Re: [RFC] Block IO Controller V2 - some results

On Tue, Nov 17, 2009 at 12:30:07PM -0500, Alan D. Brunelle wrote:
> On Tue, 2009-11-17 at 11:40 -0500, Vivek Goyal wrote:
> > On Tue, Nov 17, 2009 at 05:17:53PM +0100, Corrado Zoccolo wrote:
> > > Hi Vivek,
> > > the performance drop reported by Alan was my main concern about your
> > > approach. You should probably mention/document somewhere that when the
> > > number of groups is too large, there is a large decrease in random read
> > > performance.
> > > 
> > 
> > Hi Corrado,
> > 
> > I thought more about it. We idle on the sync-noidle group only in the case
> > of rotational media not supporting NCQ (hw_tag = 0). So for all the fast
> > hardware out there (SSDs and fast arrays), we should not be idling on the
> > sync-noidle group and hence should not do additional idling per group.
> > 
> > This is all subject to the fact that we have done a good job of
> > detecting the queue depth and have updated hw_tag accordingly.
> > 
> > On slower rotational hardware, where we will actually do idling on the
> > sync-noidle group per group, idling can in fact help you because it reduces
> > the number of seeks (as it does on my locally connected SATA disk).
> > 
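[Editorial note: a quick way to see what the kernel decided here is to read
the negotiated command queue depth from sysfs. This is a sketch; /dev/sda is
a placeholder for the device under test.]

```shell
# A queue depth > 1 means NCQ (or tagged queuing on an array) is active,
# so CFQ should set hw_tag and skip the per-group sync-noidle idling
# discussed above. /dev/sda is a placeholder device name.
cat /sys/block/sda/device/queue_depth
```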
> > > However, we can check a few things:
> > > * Is this kernel built with HZ < 1000? The smallest idle CFQ will do
> > > is given by 2/HZ, so running with a small HZ increases the impact
> > > of idling.
> > > 
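[Editorial note: the 2/HZ figure above works out as follows; a quick
arithmetic sketch, no kernel interface assumed.]

```shell
# CFQ's smallest idle slice is 2 jiffies, i.e. 2/HZ seconds.
# With HZ=1000 that is 2 ms; at HZ=250 it grows to 8 ms, quadrupling
# the per-group idle cost paid on every sync-noidle service round.
for HZ in 1000 250 100; do
    echo "HZ=$HZ -> min idle = $((2000 / HZ)) ms"
done
```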
> > > On Tue, Nov 17, 2009 at 3:14 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> > > > Regarding the reduced throughput in the random IO case, ideally we should not
> > > > idle on the sync-noidle group on this hardware, as it appears to be fast,
> > > > NCQ-supporting hardware. But I guess we might not be detecting the queue depth
> > > > properly, which leads to idling on the per-group sync-noidle workload and
> > > > forces the queue depth to be 1.
> > > 
> > > * This can be ruled out testing my NCQ detection fix patch
> > > (http://groups.google.com/group/linux.kernel/browse_thread/thread/3b62f0665f0912b6/34ec9456c7da1bb7?lnk=raot)
> > 
> > This will be a good patch to test here. Alan, can you also apply this
> > patch and see if we see any improvement?
> 
> Vivek: Do you want me to move this over to the V3 version & apply this
> patch, or stick w/ V2?

Alan,

Anything is good. V3 is not very different from V2. Maybe move to V3 with
the above patch applied and see if it helps.

At the end of the day, you will not see improvement with group_idle=1, as
each group gets exclusive access to the underlying array. But I am expecting
to see improvement with group_idle=0.
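[Editorial note: in later mainline CFQ the group_idle knob is exposed under
the elevator's sysfs directory; the exact location in the V2/V3 patchset may
differ. A sketch, with sdc as a placeholder device.]

```shell
# Disable inter-group idling for the test run, then confirm the setting.
# group_idle=0 trades per-group isolation for throughput on fast NCQ
# arrays; group_idle=1 preserves exclusive per-group access.
echo 0 > /sys/block/sdc/queue/iosched/group_idle
cat /sys/block/sdc/queue/iosched/group_idle
```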

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
