Message-Id: <1202164135.3096.126.camel@localhost.localdomain>
Date: Mon, 04 Feb 2008 16:28:55 -0600
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Nick Piggin <npiggin@...e.de>,
"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
linux-kernel@...r.kernel.org, arjan@...ux.intel.com, mingo@...e.hu,
ak@...e.de, andrea@...e.de, clameter@....com,
akpm@...ux-foundation.org, andrew.vasquez@...gic.com,
willy@...ux.intel.com, Zach Brown <zach.brown@...cle.com>
Subject: Re: [rfc] direct IO submission and completion scalability issues
On Mon, 2008-02-04 at 05:33 -0500, Jens Axboe wrote:
> As Andi mentions, we can look into making that lockless. For the initial
> implementation I didn't really care, just wanted something to play with
> that would nicely allow me to control both the submit and complete side
> of the affinity issue.
Sorry, late to the party ... it went to my steeleye address, not my
current one.
Could you try re-running the tests with a low queue depth (say around 8)
and with the card's interrupt bound to a single CPU?
The reason for asking you to do this is that it should emulate almost
precisely what you're looking for: the submit path will be picked up in
the SCSI softirq where the queue gets run, so you should find that all
submits and returns happen on a single CPU, and everything gets cache hot
there.
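[For reference, the setup James describes can usually be arranged from
the shell. This is a sketch only: the IRQ number, driver name, and
device name below are placeholders that vary by system.]

```shell
# Find the IRQ line for the HBA ("qla2xxx" is an example driver name)
grep qla2xxx /proc/interrupts

# Bind that IRQ to CPU0 only. smp_affinity takes a hex CPU bitmask:
# CPU0 is mask 1 (1 << 0), CPU2 would be mask 4 (1 << 2), and so on.
cpu=0
mask=$(printf '%x' $((1 << cpu)))
echo "$mask" > /proc/irq/16/smp_affinity   # "16" is a placeholder IRQ

# Lower the per-device queue depth to ~8 ("sda" is a placeholder)
echo 8 > /sys/block/sda/device/queue_depth
```

With the mask set, both the completion interrupt and the softirq-driven
submit path run on the chosen CPU, which is the cache-hot behaviour the
test is meant to emulate.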
James
p.s. if everyone could also update my email address to the
hansenpartnership one, the people at steeleye who monitor my old email
account would be grateful.