Message-Id: <D680AFCF-11EC-48AD-BBC2-B92521DF442A@kernel.crashing.org>
Date:	Sun, 10 Sep 2006 22:01:20 +0200
From:	Segher Boessenkool <segher@...nel.crashing.org>
To:	Jesse Barnes <jbarnes@...tuousgeek.org>
Cc:	David Miller <davem@...emloft.net>, jeff@...zik.org,
	paulus@...ba.org, torvalds@...l.org, linux-kernel@...r.kernel.org,
	benh@...nel.crashing.org, akpm@...l.org
Subject: Re: Opinion on ordering of writel vs. stores to RAM

>>> As (I think) BenH mentioned in another email, the normal way Linux
>>> handles these interfaces is for the primary API (readX, writeX) to
>>> be strongly ordered, strongly coherent, etc.  And then there is a
>>> relaxed version without barriers and syncs, for the smart guys who
>>> know what they're doing
>>
>> Indeed, I think that is the way to handle this.
>
> Well why didn't you guys mention this when mmiowb() went in?
>
> I agree that having a relaxed PIO ordering version of writeX makes
> sense (jejb convinced me of this on irc the other day).  But what to
> name it?  We already have readX_relaxed, but that's for PIO vs. DMA
> ordering, not PIO vs. PIO.  To distinguish from that case maybe
> writeX_weak or writeX_nobarrier would make sense?

Why not just keep writel() etc. for *both* purposes?  The address cookie
it gets as input can distinguish between the required behaviours for
different kinds of I/O; it will have to be set up by the arch-specific
__ioremap() or similar.  And then there could be a pci_map_mmio_relaxed()
that, when a driver uses it, means "this driver requires only PCI
ordering rules for its MMIO, not x86 rules", so the driver promises to
do wmb()'s for DMA ordering and the mmiowb() [or whatever they become --
pci_mem_to_dma_barrier() and pci_cpu_to_cpu_barrier() or whatever] etc.
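To make the cookie idea concrete, here's a rough userspace sketch (all
names hypothetical -- MMIO_RELAXED, my_writel(), barrier_sync() are not
real kernel interfaces) of how a writel()-style accessor could dispatch
on a flag that the arch's __ioremap() encodes in the cookie:

```c
#include <stdint.h>

/* Hypothetical: the arch's __ioremap() sets the low bit of the cookie
 * for mappings that only need PCI ordering rules, not x86 rules. */
#define MMIO_RELAXED 0x1UL

static inline void barrier_sync(void)
{
	/* stand-in for the arch's ordering instruction, e.g. "sync" */
}

static inline void my_writel(uint32_t val, volatile void *cookie)
{
	uintptr_t addr = (uintptr_t)cookie;

	if (addr & MMIO_RELAXED) {
		/* relaxed mapping: the driver promised to do its own
		 * wmb()/mmiowb()-style barriers, so just store */
		*(volatile uint32_t *)(addr & ~MMIO_RELAXED) = val;
	} else {
		/* default mapping: strongly ordered, x86-like semantics */
		barrier_sync();
		*(volatile uint32_t *)addr = val;
		barrier_sync();
	}
}
```

The point is that unconverted drivers keep calling the same writel(),
and only the mapping call decides which path they get.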

Archs that don't see the point in doing (much) faster and more scalable
I/O, or just don't want to bother, can have this pci_map_mmio_relaxed()
just call their x86-ordering ioremap() stuff.

The impact on architectures is minimal; they all need to implement this
new mapping function, but can just default it if they don't (yet) want
to take advantage of it.  Archs that do care need to use the I/O cookie
as an actual cookie, not directly as a CPU address to write to, and then
use the proper ordering stuff for the kind of ordering the cookie says
is needed for this I/O range.
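A sketch of the two arch choices (again with made-up names -- neither
arch_map_mmio_relaxed() nor arch_map_mmio_default() exists anywhere):
an arch that cares encodes the ordering mode in the cookie, while an
arch that doesn't bother just hands back the same cookie its ordinary
strongly-ordered mapping would:

```c
#include <stdint.h>

#define MMIO_RELAXED 0x1UL	/* hypothetical cookie flag, as above */

/* Arch that exploits relaxed ordering: tag the cookie. */
static inline void *arch_map_mmio_relaxed(uintptr_t base)
{
	return (void *)(base | MMIO_RELAXED);
}

/* Arch that defaults: the new call is just an alias for the existing
 * x86-ordered mapping, so every driver keeps working unchanged. */
static inline void *arch_map_mmio_default(uintptr_t base)
{
	return (void *)base;
}
```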

The impact on device drivers is even more minimal; drivers don't have to
change, and if they don't, they get x86 I/O ordering semantics.  Drivers
that do want to be faster on other architectures, and have been converted
for it (and most interesting ones have), just need to change the ioremap()
of their MMIO space to the new API.
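For a converted driver, the change really is one line -- which mapping
call it uses -- while the writel() calls and the barriers it already
does stay as they are.  A compilable toy version (everything here is a
stand-in: fake_bar, pci_map_mmio_relaxed(), my_writel(), my_wmb() are
stubs, not kernel code):

```c
#include <stdint.h>

static uint32_t fake_bar[4];		/* pretend device BAR */
#define REG_CMD   0			/* register offset */
#define CMD_START 0x1u

/* Stubs so the sketch compiles outside the kernel. */
static void *pci_map_mmio_relaxed(void) { return fake_bar; }
static void my_writel(uint32_t v, volatile void *a)
{
	*(volatile uint32_t *)a = v;
}
static void my_wmb(void) { }		/* DMA-ordering barrier stub */

static void driver_probe(void)
{
	/* Before conversion this line would be the plain ioremap(). */
	void *regs = pci_map_mmio_relaxed();

	my_writel(CMD_START, (uint8_t *)regs + REG_CMD);
	my_wmb();	/* the driver already does its own barriers */
}
```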

I hope this way everyone can be happy -- no one is forced to change, only
one new API is introduced.


Segher

