Message-ID: <20090526151346.GB5816@parisc-linux.org>
Date: Tue, 26 May 2009 09:13:46 -0600
From: Matthew Wilcox <matthew@....cx>
To: James Bottomley <James.Bottomley@...senPartnership.com>
Cc: FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
jens.axboe@...cle.com, rdreier@...co.com, bharrosh@...asas.com,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
chris.mason@...cle.com, david@...morbit.com, hch@...radead.org,
akpm@...ux-foundation.org, jack@...e.cz,
yanmin_zhang@...ux.intel.com, linux-scsi@...r.kernel.org
Subject: Re: [PATCH 03/13] scsi: unify allocation of scsi command and sense buffer
On Tue, May 26, 2009 at 09:47:02AM -0500, James Bottomley wrote:
> > Yeah, we can inline the sense buffer but as we discussed in the past
> > several times, there are some good reasons that we should not do so, I
> > think.
>
> There are several other approaches:
>
> 1. Keep the sense buffer packed in the command but disallow DMA to
> it, which fixes all the alignment problems. Then we supply a
> set of rotating DMA buffers to drivers which need to do the DMA
> (which isn't the majority).
> 	2. Sense is a comparative rarity, so use a more compact pooling
> scheme and discard sense for reuse as soon as we know it's not
> used (as in at softirq time when there's no sense collected).
>
> I'd need a little more clarity on the actual size of the problem before
> making any choices.
I'm not sure if this is what you meant by option 2 or not, but one
proposal was to keep a number of sense buffers around per-host, and only
allocate extras when we run close to empty.
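
Concretely I'd picture something like the fragment below (again, every
name in it is made up): a per-host free list with a preallocated
reserve, a get path that falls back to the allocator when the list is
dry, and a top-up kicked off once the free count drops under a
watermark.

/* Sketch with invented names -- not existing code. */
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#define SENSE_POOL_LOW_WATER	4

struct sense_buf {
	struct list_head	list;
	unsigned char		data[96];	/* SCSI_SENSE_BUFFERSIZE */
};

struct host_sense_pool {
	spinlock_t		lock;
	unsigned int		nr_free;
	struct list_head	free_list;
};

static struct sense_buf *sense_pool_get(struct host_sense_pool *pool)
{
	struct sense_buf *sb = NULL;
	unsigned long flags;

	spin_lock_irqsave(&pool->lock, flags);
	if (!list_empty(&pool->free_list)) {
		sb = list_first_entry(&pool->free_list,
				      struct sense_buf, list);
		list_del(&sb->list);
		pool->nr_free--;
	}
	spin_unlock_irqrestore(&pool->lock, flags);

	/*
	 * Pool empty: allocate an extra on the spot.  A real version
	 * would also top the pool back up (say, from a workqueue) once
	 * nr_free drops under SENSE_POOL_LOW_WATER, rather than waiting
	 * until it's completely dry.
	 */
	if (!sb)
		sb = kmalloc(sizeof(*sb), GFP_ATOMIC);
	return sb;
}

static void sense_pool_put(struct host_sense_pool *pool, struct sense_buf *sb)
{
	unsigned long flags;

	spin_lock_irqsave(&pool->lock, flags);
	list_add(&sb->list, &pool->free_list);
	pool->nr_free++;
	spin_unlock_irqrestore(&pool->lock, flags);
}

Since the put path just returns buffers to the list, any extras
allocated under pressure stay around afterwards, so the pool grows to
match the host's sense rate.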
--
Matthew Wilcox                         Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."