Message-ID: <20080509172510.GW16217@kernel.dk>
Date:	Fri, 9 May 2008 19:25:10 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	"Alan D. Brunelle" <Alan.Brunelle@...com>
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: More io-cpu-affinity results: queue_affinity + rq_affinity

On Mon, May 05 2008, Alan D. Brunelle wrote:
> Alan D. Brunelle wrote:
> > Continuing to evaluate the potential benefits of the various I/O & CPU
> > affinity options proposed in Jens' origin/io-cpu-affinity branch...
> > 
> > Executive summary (due to the rather long-winded nature of this post):
> > 
> > We again see that rq_affinity has positive potential, but we are not
> > (yet) able to see much benefit from adjusting queue_affinity.
> > 
> > ========================================================
> > 
> <snip>
> > 
> > =====================================================
> > 
> > As noted above, I'm going to do a series of runs to make sure this data
> > holds over a larger data set (in particular the case where I/O is far -
> > looking at QAF on & far to see if the 0.56% is truly representative).
> > Suggestions for other tests to try to show/determine queue_affinity
> > benefits are very welcome (see the knob-toggling sketch after this
> > quoted message).
> 
> 
> The averages (with min/max error bars) for the reads/second & p_system
> values, taken over 50 runs of the test, can be seen at:
> 
> http://free.linux.hp.com/~adb/jens/r_s_50.png
> 
> and
> 
> http://free.linux.hp.com/~adb/jens/p_system_50.png
> 
> respectively. They still show a potentially big win with rq_affinity set
> to 1, but not much difference at all with the queue_affinity settings (in
> fact, no real movement at all when rq_affinity=1).
> 
> I'd still be willing to try other test scenarios to show how
> queue_affinity can really help, but for now I'd suggest removing that
> functionality - getting rid of some code until we can prove its worth.
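
Alan's runs toggle the per-queue affinity knobs from Jens' io-cpu-affinity
branch. As a rough sketch of how such a knob is flipped, the hypothetical
helper below (not from the patch set) writes a value into a sysfs
attribute; the path /sys/block/<dev>/queue/rq_affinity matches the knob as
it later landed in mainline, while queue_affinity only ever lived in the
branch, so both names are assumptions about the tree under test.

/*
 * Minimal sketch: set a block-queue affinity knob via sysfs.
 * Assumes knobs like /sys/block/<dev>/queue/rq_affinity (as in
 * later mainline) and queue_affinity (io-cpu-affinity branch only).
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>

static int set_queue_knob(const char *dev, const char *knob, const char *val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, knob);
	f = fopen(path, "w");
	if (!f) {
		fprintf(stderr, "open %s: %s\n", path, strerror(errno));
		return -1;
	}
	fprintf(f, "%s\n", val);
	fclose(f);
	return 0;
}

int main(void)
{
	/* e.g. steer completions back to the submitting CPU */
	return set_queue_knob("sda", "rq_affinity", "1");
}

Run as root; the same effect is a one-line echo into the sysfs file.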

Thanks again for doing these numbers, Alan, much appreciated! I've had a
hard time finding a use case for moving queuers as well; it's quite
costly. Moving completions is much cheaper, and queuing can typically be
moved in other ways, so doing it at the IO level just seems like the
wrong thing to do. For keeping things affine on the queuing side I have
some other ideas that don't involve moving tasks around; perhaps that
will be a good complement.
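
The asymmetry described here (completions are cheap to bounce between
CPUs, queuers are not) can be modeled outside the kernel. The sketch
below is purely illustrative userspace code, not the branch's
implementation: a completer thread pinned to one CPU hands each
completion back to the submitting CPU through a condvar mailbox, which is
the rq_affinity=1 idea in miniature; in the kernel the hand-back is an
IPI-raised softirq rather than a condition variable.

/*
 * Userspace model of rq_affinity (illustrative only, not kernel code).
 * A submitter pinned to CPU 0 issues "requests"; a completer pinned to
 * CPU 1 hands each completion back to CPU 0 through a mailbox, the way
 * rq_affinity=1 steers block-I/O completions to the submitting CPU.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NREQ 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int pending = -1;	/* request id awaiting its completion run */
static int done;		/* completions consumed so far */

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/*
 * "Driver" side: finishes requests, but defers the completion callback
 * back to the submitting CPU instead of running it here.
 */
static void *completer(void *arg)
{
	pin_to_cpu(1);
	for (int i = 0; i < NREQ; i++) {
		pthread_mutex_lock(&lock);
		pending = i;		/* stand-in for the IPI/softirq */
		pthread_cond_signal(&cond);
		while (done <= i)
			pthread_cond_wait(&cond, &lock);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pin_to_cpu(0);			/* submitting CPU */
	pthread_create(&t, NULL, completer, NULL);
	for (int i = 0; i < NREQ; i++) {
		pthread_mutex_lock(&lock);
		while (pending < i)
			pthread_cond_wait(&cond, &lock);
		/* completion callback runs here, cache-warm on CPU 0 */
		printf("request %d completed on submitting CPU\n", i);
		done = i + 1;
		pthread_cond_signal(&cond);
		pthread_mutex_unlock(&lock);
	}
	pthread_join(t, NULL);
	return 0;
}

The point of the hand-back is that only a small message crosses CPUs,
while the request's data structures are touched where they are still
cache-hot; moving the queuer would drag the whole task context instead.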

-- 
Jens Axboe
