Message-Id: <1195598343.6970.50.camel@pasglop>
Date: Wed, 21 Nov 2007 09:39:03 +1100
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: James Bottomley <James.Bottomley@...senPartnership.com>
Cc: Roland Dreier <rdreier@...co.com>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
David Miller <davem@...emloft.net>,
linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
rmk@....linux.org.uk
Subject: Re: SCSI breakage on non-cache coherent architectures
On Tue, 2007-11-20 at 15:10 -0600, James Bottomley wrote:
> We're talking about trying to fix this for 2.6.24; which is already at
> -rc3 ... Is an entire arch change for dma alignment really a merge
> candidate at this stage?
Well, as I said before... it's a matter of which seems less likely to
break something, right?
On one side, I'm doing surgery on code I barely know (the SCSI error
handling), and now it seems I also have to fix up a handful of drivers
that aren't the most obvious pieces of code around.
On the other side, Roland's proposal is basically just adding a macro that
can be empty for everybody but a handful of archs, and sticking it onto one
field in one structure...
The latter has about zero chance of actually breaking something or causing
a regression. I wouldn't say that about the former.
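
For concreteness, here's a rough sketch of the kind of annotation I mean;
the macro name, the config symbol and the cacheline size below are all made
up for illustration, this is not the actual patch. The point is just that
the annotation expands to nothing on cache-coherent architectures and forces
cacheline alignment of the one DMA-ed field on the archs that need it:

#include <stdio.h>
#include <stddef.h>

#define EXAMPLE_CACHE_BYTES 128			/* stand-in for L1_CACHE_BYTES */

#ifdef CONFIG_NOT_COHERENT_CACHE		/* illustrative config symbol */
#define __dma_buffer_align __attribute__((aligned(EXAMPLE_CACHE_BYTES)))
#else
#define __dma_buffer_align			/* expands to nothing elsewhere */
#endif

struct cmnd_example {
	int result;
	/* field that gets DMA-ed into; it must not share a cacheline with
	 * CPU-touched fields on non-coherent archs */
	unsigned char sense_buffer[96] __dma_buffer_align;
};

int main(void)
{
	printf("sense_buffer offset: %zu\n",
	       offsetof(struct cmnd_example, sense_buffer));
	return 0;
}
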
Now, I will see if I can fix up the NCR drivers to pass a pre-allocated
buffer (USB storage, I think, can pass NULL since it doesn't call prep in
atomic context). But that complicates matters: "restore" will then have to
know whether prep allocated the buffer or not, which means more fields to
add to the save struct. It's getting messy, unless we decide that -all-
callers are responsible for the buffer allocation (hrm... maybe the best
approach).
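
To make the bookkeeping problem concrete, here's a rough sketch (all the
names are invented for the example, this isn't the real scsi_eh interface):
if prep may either take a caller-supplied buffer or allocate one itself,
the save struct has to remember who owns the buffer so that restore knows
whether to free it:

#include <stdlib.h>
#include <stdbool.h>

struct eh_save_example {
	unsigned char *sense_buf;	/* DMA-safe buffer used during EH */
	bool prep_allocated;		/* did prep allocate it, or the caller? */
};

/* Caller may pass a pre-allocated buffer; NULL means "allocate one for me"
 * (only safe when prep isn't called from atomic context). */
int example_prep(struct eh_save_example *save, unsigned char *buf, size_t len)
{
	if (buf) {
		save->sense_buf = buf;
		save->prep_allocated = false;
	} else {
		save->sense_buf = malloc(len);
		if (!save->sense_buf)
			return -1;
		save->prep_allocated = true;
	}
	return 0;
}

void example_restore(struct eh_save_example *save)
{
	/* here the real code would copy the sense data back first */
	if (save->prep_allocated)
		free(save->sense_buf);
	save->sense_buf = NULL;
}

int main(void)
{
	unsigned char caller_buf[96];
	struct eh_save_example save;

	/* caller-owned buffer: restore must not free it */
	example_prep(&save, caller_buf, sizeof(caller_buf));
	example_restore(&save);

	/* prep-allocated buffer: restore is responsible for freeing it */
	example_prep(&save, NULL, 96);
	example_restore(&save);
	return 0;
}

Making -all- callers allocate would let the ownership flag and the NULL
case disappear entirely.
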
Ben.