Date: Mon, 05 May 2008 08:46:10 -0400
From: "Alan D. Brunelle" <Alan.Brunelle@...com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Cc: Jens Axboe <jens.axboe@...cle.com>
Subject: Re: More io-cpu-affinity results: queue_affinity + rq_affinity
Alan D. Brunelle wrote:
> Continuing to evaluate the potential benefits of the various I/O & CPU
> affinity options proposed in Jens' origin/io-cpu-affinity branch...
>
> Executive summary (due to the rather long-winded nature of this post):
>
> We again see that rq_affinity has positive potential, but we are not
> (yet) able to see much benefit from adjusting queue_affinity.
>
> ========================================================
>
<snip>
>
> =====================================================
>
> As noted above, I'm going to do a series of runs to make sure this data
> holds over a larger data set (in particular the case where I/O is far -
> looking at QAF on & far to see whether the 0.56% is truly representative).
> Suggestions for other tests that might show/determine queue_affinity
> benefits are very welcome.
The averages (with min/max error bars) for the reads/second and p_system
values, taken over 50 runs of the test, can be seen at:
http://free.linux.hp.com/~adb/jens/r_s_50.png
and
http://free.linux.hp.com/~adb/jens/p_system_50.png
respectively. The data still shows a potentially big win with rq_affinity
set to 1, but not much difference at all with the queue_affinity settings
(in fact, no real movement at all when rq_affinity=1).
I'd still be willing to try other test scenarios that show how
queue_affinity can really help, but for now I'd suggest removing that
functionality - getting rid of some code until such time as we can prove
its worth.
Alan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/