Message-ID: <49F8B1A1.4010208@garzik.org>
Date: Wed, 29 Apr 2009 15:59:29 -0400
From: Jeff Garzik <jeff@...zik.org>
To: Roland Dreier <rdreier@...co.com>
CC: Ingo Molnar <mingo@...e.hu>, David Miller <davem@...emloft.net>,
Linus Torvalds <torvalds@...ux-foundation.org>, hpa@...or.com,
tglx@...utronix.de, h.mitake@...il.com, rpjday@...shcourse.ca,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86: Remove readq()/writeq() on 32-bit

Roland Dreier wrote:
> > This removal patch is completely pointless, because it moves us
> > backwards to the point where we had a bunch of drivers defining it.
>
> No, the current kernel still requires drivers to define it anyway,
> because there are tons of 32-bit architectures that are not x86.

Then let's fix that issue... by propagating the common definition to
other platforms that properly implement {read,write}[bwl] in terms of
the PCI bus.
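
For reference, the definition in question is just the obvious
composition of two 32-bit MMIO accesses.  Roughly -- a sketch of the
x86-32 fallback, not a verbatim copy of the tree:

static inline u64 readq(const volatile void __iomem *addr)
{
	u32 low, high;

	/* low half first, then high half -- note: not atomic */
	low  = readl(addr);
	high = readl(addr + 4);

	return low | ((u64) high << 32);
}

static inline void writeq(u64 val, volatile void __iomem *addr)
{
	writel((u32) val, addr);
	writel((u32) (val >> 32), addr + 4);
}

Nothing magic about it -- which is exactly why so many drivers have
grown a private copy.
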
> And more than that, centralizing the definition makes the API much more
> dangerous for driver authors.

I think that's really cranking the hyperbole level to 11.

The common definition is... the one found most commonly in the wild.
Weird drivers will do their own thing -- which is pretty much how
drivers handle any other kernel API, too.  Apply your logic to _any_
API in the kernel and you get the same result.

> > At least the networking drivers I messed with (until 11/2008) were
> > always fine with a non-atomic readq.
>
> The commit to niu I keep citing (e23a59e1, "niu: Fix readq
> implementation when architecture does not provide one.") shows that
> drivers need to take care. Now, the x86 implementation would happen to

That commit also shows that, had the driver been using a common
definition, problems would not have arisen.

> work for that hardware, but eg drivers/infiniband/hw/amso1100 defines
> readq with the opposite order -- whether that's required or just an

'required' seems unlikely, given that

a) their readq only exists under #ifndef readq, which means the
   driver-local readq is never even built on their most-tested,
   highest-volume platform -- and is therefore far less tested.

b) their readq still composes the result in LE order -- as it should:
   {read,write}[bwl] were defined in terms of PCI originally, and thus
   defined to be LE.  (See the sketch below.)

c) their __raw_writeq writes the lower 32 bits first, as one would
   expect.
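
For the curious, their fallback looks roughly like this (quoted from
memory, so treat the details as approximate):

#ifndef readq
static inline u64 readq(const void __iomem *addr)
{
	u64 ret = readl(addr + 4);	/* upper 32 bits first */
	ret <<= 32;
	ret |= readl(addr);		/* then lower 32 bits */

	return ret;
}
#endif

The value assembled is the same as with the common definition; only
the order of the two bus reads differs, and that only matters for
registers with read side effects.
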
> arbitrary choice, I don't know. And drivers/infiniband/hw/mthca has
> some uses of __raw_writeq() that only work if no other CPU accesses to
> the same page can happen between the two halves, so it adds a per-page
> spinlock for 32-bit architectures. etc.

Any use of __raw_xxx implies that You Know What You're Doing And Accept
The Consequences. __raw_xxx means _you_ handle endian conversions,
barriers, and other arch-specific details. I don't think that a driver
intentionally using the "raw" APIs is a good source of ideas and
generalizations.
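
To make that concrete: mthca's 32-bit doorbell helper looks roughly
like the following (paraphrased from memory, not the exact code).  The
driver does its own byte-swapping and takes its own lock around the
two 32-bit halves -- exactly the "you handle it" contract of the raw
API:

static inline void mthca_write64(u32 hi, u32 lo, void __iomem *dest,
				 spinlock_t *doorbell_lock)
{
	unsigned long flags;

	/* driver handles its own endian conversion... */
	hi = (__force u32) cpu_to_be32(hi);
	lo = (__force u32) cpu_to_be32(lo);

	/* ...and its own serialization of the two 32-bit halves */
	spin_lock_irqsave(doorbell_lock, flags);
	__raw_writel(hi, dest);
	__raw_writel(lo, dest + 4);
	spin_unlock_irqrestore(doorbell_lock, flags);
}
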
So, for your three examples,

  1) niu - common definition is OK
  2) amso1100 - common definition is OK; driver-local definition
     never used on common PCI platforms
  3) mthca - intentionally uses raw API, an API which ditches
     arch-specific barriers, endian conversions, and other
     guarantees.

Given that, I see zero justification for API removal. I see
justification for propagating this code to other PCI-capable platforms.
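
Concretely, "propagating" could be as simple as a shared fallback
header that 32-bit PCI platforms pull in, along these lines (purely a
hypothetical sketch for illustration -- the header name and exact form
are made up here):

/* hypothetical shared fallback -- illustration only */
#ifndef readq
static inline u64 readq(const volatile void __iomem *addr)
{
	/* same low-then-high composition as the common definition */
	u64 low = readl(addr);

	return low | ((u64) readl(addr + 4) << 32);
}
#define readq readq
#endif

Drivers that carry a private copy could then simply delete it.
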
Finally, consider that for all this time we have had driver-defined
writeq and readq, and "driver authors were forced to think about this
API" -- and the result was the obvious definition now in place!

Jeff