Message-Id: <200907220001.59388.arnd@arndb.de>
Date: Wed, 22 Jul 2009 00:01:59 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Christoph Hellwig <hch@...radead.org>
Cc: Jiri Slaby <jirislaby@...il.com>, Pekka Paalanen <pq@....fi>,
linux-kernel@...r.kernel.org
Subject: Re: Do cpu-endian MMIO accessors exist?
On Tuesday 21 July 2009, Christoph Hellwig wrote:
> Why would you want to do that? That just means a useless byteswap.
> We really should have a generic native-endian MMIO-access API as there
> is quite a bit of hardware with features like that, and currently we
> have a myriad of hacks using __raw_* and manual barriers, the
> ppc-specific accessors, and god knows what.
The byte swap on powerpc I/O instructions is practically free
on all the interesting CPUs, and on the others it is still
swamped by the overhead of the synchronization. If you care
about the latency of MMIO instructions, going to explicit
synchronization would help much more, saving hundreds of
cycles per I/O rather than one cycle for a saved byte swap.
The powerpc in_le32-style functions are a completely different
story: they are basically defined to operate only on on-chip
components, while ioread32 and readl do work on PCI devices.
No portable code should ever use the __raw_* functions and
architecture-specific barriers.
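For reference, the kind of hack in question typically looks
something like the sketch below (the foo names and the FOO_CTRL
offset are made up, not taken from any real driver): a register
written through __raw_writel plus a hand-rolled barrier, which
silently drops both the byte swap and the ordering guarantees
of the proper accessors.

#include <linux/io.h>	/* __raw_writel(), __iomem */

/* sketch of the non-portable pattern, all names made up */
static void foo_set_ctrl(void __iomem *regs, u32 val)
{
	__raw_writel(val, regs + FOO_CTRL);	/* no swap, no implied ordering */
	wmb();					/* hand-rolled barrier */
}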
It would of course be easy to just define an API extension
to ioread along the lines of
#ifdef __BIG_ENDIAN
#define ioread16_native ioread16be
#define ioread32_native ioread32be
#define iowrite16_native iowrite16be
#define iowrite32_native iowrite32be
#else
#define ioread16_native ioread16
#define ioread32_native ioread32
#define iowrite16_native iowrite16
#define iowrite32_native iowrite32
#endif
but I'm not yet convinced that there is a potential user that
should not just be fixed in a different way.
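Just to illustrate what such a user would look like if one did
turn up, a hypothetical driver for a native-endian device would
then boil down to something like this (made-up names again,
assuming the *_native extension sketched above):

static void foo_set_ctrl(void __iomem *regs, u32 val)
{
	iowrite32_native(val, regs + FOO_CTRL);
}

static u32 foo_get_status(void __iomem *regs)
{
	return ioread32_native(regs + FOO_STATUS);
}

with both the byte order and the ordering rules coming from the
regular ioread/iowrite definitions.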
Arnd <><