Message-ID: <CA+55aFzcqhHU_+uR33=q+49jK97e=EofFF7g7Z0K1RU_PGwMTw@mail.gmail.com>
Date: Wed, 22 Aug 2012 10:54:41 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Ben Hutchings <bhutchings@...arflare.com>
Cc: "H. Peter Anvin" <hpa@...or.com>,
David Laight <David.Laight@...lab.com>,
Benjamin LaHaise <bcrl@...ck.org>,
David Miller <davem@...emloft.net>, tglx@...utronix.de,
mingo@...hat.com, netdev@...r.kernel.org,
linux-net-drivers@...arflare.com, x86@...nel.org
Subject: Re: [PATCH 2/3] x86_64: Define 128-bit memory-mapped I/O operations
On Wed, Aug 22, 2012 at 10:27 AM, Ben Hutchings
<bhutchings@...arflare.com> wrote:
>
> If this is right, how can it be safe to use readq/writeq at all?
Pray.
Or don't care about ordering: use hardware that is well-designed and
doesn't have crap interfaces that are fragile.
If you care about ordering, you need to do them as two separate
accesses, and have a fence in between. Which, quite frankly, sounds
like the right model for you *anyway*, since then you could use
write-combining memory and you might even go faster, despite an
explicit fence and thus a minimum of 2 transactions.
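
Something like this, say (hypothetical doorbell layout, BAR assumed
mapped write-combining with ioremap_wc, untested):

	#include <linux/io.h>

	/* Push a 128-bit descriptor as two explicitly ordered 64-bit
	 * stores.  With a WC mapping, the wmb() is what guarantees
	 * that the first half goes out before the second. */
	static void push_desc(void __iomem *db, u64 lo, u64 hi)
	{
		__raw_writeq(lo, db);
		wmb();			/* order the two halves */
		__raw_writeq(hi, db + 8);
	}

The ordering requirement is then spelled out in the source, not left
to whatever a single big store happens to get chopped into.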
Seriously. If you care that deeply about the ordering of the bytes you
write out, MAKE THAT ORDERING VERY EXPLICIT IN THE SOURCE CODE. Don't
say "oh, with this hack, I win 100ns". You need to ask yourself: what
do you care about more? Going really fast on some machine that you can
test, or being safe?
With PCIe, it's *probably* fine to just say "we expect 64-bit accesses
to make it through unmolested".
The 128-bit case I really don't know about. It probably works too. But
while I'd call the 64-bit case almost certain (in the absence of truly
crap hardware), I have a hard time judging how certain the 128-bit case
is going to be.
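
For reference, the access we're talking about is basically one 16-byte
SSE store to the mapped BAR. Sketched with userspace intrinsics just to
show the shape of it (illustrative only: real kernel code means inline
asm plus FPU state handling, and "write128" is a made-up name):

	#include <emmintrin.h>

	/* One aligned 16-byte store to the device (addr must be
	 * 16-byte aligned for movdqa).  Whether it stays a single
	 * 16-byte transaction all the way to the device is exactly
	 * the part nobody can vouch for. */
	static inline void write128(volatile void *addr, __m128i val)
	{
		_mm_store_si128((__m128i *)addr, val);
	}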
Linus