Message-ID: <71740323.RRBhnhefTi@np-p-burton>
Date:   Tue, 26 Sep 2017 11:48:19 -0700
From:   Paul Burton <paul.burton@...tec.com>
To:     David Miller <davem@...emloft.net>
CC:     <matt.redfearn@...tec.com>, <netdev@...r.kernel.org>,
        <alexandre.torgue@...com>, <peppe.cavallaro@...com>,
        <linux-kernel@...r.kernel.org>, <linux-mips@...ux-mips.org>,
        <james.hogan@...tec.com>
Subject: Re: [PATCH] net: stmmac: Meet alignment requirements for DMA

Hi David,

On Tuesday, 26 September 2017 10:34:00 PDT David Miller wrote:
> From: Matt Redfearn <matt.redfearn@...tec.com>
> Date: Tue, 26 Sep 2017 14:57:39 +0100
> 
> > Since the MIPS architecture requires software cache management,
> > performing a dma_map_single(TO_DEVICE) will writeback and invalidate
> > the cachelines for the required region. To comply with (our
> > interpretation of) the DMA API documentation, the MIPS implementation
> > expects a cacheline aligned & sized buffer, so that it can writeback a
> > set of complete cachelines. Indeed a recent patch
> > (https://patchwork.linux-mips.org/patch/14624/) causes the MIPS dma
> > mapping code to emit warnings if the buffer it is requested to sync is
> > not cacheline aligned. This driver, as is, fails this test and causes
> > thousands of warnings to be emitted.
> 
> For a device write, if the device does what it is told and does not
> access bytes outside of the periphery of the DMA mapping, nothing bad
> can happen.
> 
> When the DMA buffer is mapped, the cpu side cachelines are flushed (even
> ones which are partially part of the requested mapping, this is _fine_).

This is not fine. Let's presume we writeback+invalidate the cache lines that 
are only partially covered by the DMA mapping at its beginning or end, and 
just invalidate all the lines that are wholly covered by the mapping. So far 
so good.
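
To illustrate roughly what that map-time pass does per line, here's a
user-space sketch - not the real MIPS code; the helpers are just printf
stubs & the 64 byte line size is an assumption:

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define LINE 64	/* assumed cache line size */

/* Stand-ins for the real cache primitives - they only trace the decision. */
static void wback_inv_line(uintptr_t a)
{
	printf("wback+inv 0x%lx (partially covered)\n", (unsigned long)a);
}

static void inv_line(uintptr_t a)
{
	printf("inv       0x%lx (wholly covered)\n", (unsigned long)a);
}

/* What mapping [start, start+size) FROM_DEVICE conceptually does per line. */
static void map_from_device(uintptr_t start, size_t size)
{
	uintptr_t end = start + size;
	uintptr_t a;

	for (a = start & ~(uintptr_t)(LINE - 1); a < end; a += LINE) {
		if (a < start || a + LINE > end)
			wback_inv_line(a);	/* line also holds other data */
		else
			inv_line(a);		/* line is all ours */
	}
}

int main(void)
{
	map_from_device(0x1010, 0x100);	/* deliberately misaligned */
	return 0;
}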

> The device writes into only the bytes it was given access to, which
> starts at the DMA address.

OK - still fine but *only* if we don't write to anything that happens to be 
part of the cache lines that are only partially covered by the DMA mapping & 
make them dirty. If we do then any writeback of those lines will clobber data 
the device wrote, and any invalidation of them will discard data the CPU 
wrote.

How would you have us ensure that such writes don't occur?
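
To make that concrete, here's a purely illustrative layout - hypothetical
field names, nothing to do with stmmac's actual structures - in which the
DMA buffer shares its first & last cache lines with fields the CPU keeps
writing:

#include <stdio.h>
#include <stddef.h>

#define LINE 64		/* assumed cache line size */
#define RX_BUF_LEN 1500

/* Hypothetical driver state - the field names are made up. */
struct rx_state {
	unsigned long	irq_count;		/* CPU bumps this in the IRQ handler...      */
	char		rx_buf[RX_BUF_LEN];	/* ...while the device DMAs into this buffer */
	unsigned long	rx_errors;		/* ...and updates this one too               */
};

int main(void)
{
	printf("rx_buf starts in line %zu, irq_count is in line %zu\n",
	       offsetof(struct rx_state, rx_buf) / LINE,
	       offsetof(struct rx_state, irq_count) / LINE);
	printf("rx_buf ends in line %zu, rx_errors is in line %zu\n",
	       (offsetof(struct rx_state, rx_buf) + RX_BUF_LEN - 1) / LINE,
	       offsetof(struct rx_state, rx_errors) / LINE);
	return 0;
}

Any store to irq_count or rx_errors whilst the transfer is in flight dirties
a line the device is also writing to, and at that point both writeback and
invalidation lose someone's data.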

This really does not feel to me like something to leave up to drivers without 
any means of checking or enforcing correctness. The requirement to align DMA 
mappings at cache-line boundaries, as outlined in Documentation/DMA-API.txt, 
allows this to be enforced. If you want to ignore this requirement then there 
is no clear way to verify that we aren't writing to cache lines involved in a 
DMA transfer whilst the transfer is occurring, and no sane way to handle those 
cache lines if we do.
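
For what it's worth, meeting that requirement in a driver isn't onerous. A
minimal sketch - illustrative variable names, and it assumes the allocator
hands back buffers aligned to at least a cache line on non-coherent platforms
(ARCH_DMA_MINALIGN) - would be something like:

	/* Sketch only: give the device a region that starts on a line
	 * boundary & spans a whole number of lines, so no line in the
	 * mapping is shared with unrelated data. */
	buf_len = ALIGN(frame_len, L1_CACHE_BYTES);
	buf = kmalloc(buf_len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	dma = dma_map_single(dev, buf, buf_len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -ENOMEM;
	}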

> The unmap and/or sync operation after the DMA transfer needs to do
> nothing on the cpu side since the map operation flushed the cpu side
> caches already.
> 
> When the cpu reads, it will pull the cacheline from main memory and
> see what the device wrote.

This is not true, because:

1) CPUs may speculatively read data. In between the invalidations that we did
   earlier & the point at which the transfer completes, the CPU may have
   speculatively read any arbitrary part of the memory mapped for DMA.

2) If we wrote to any lines that are only partially covered by the DMA mapping
   then of course they're valid & dirty, and an access won't fetch from main
   memory.

We therefore need to perform cache invalidation again at the end of the 
transfer - on MIPS you can grep for cpu_needs_post_dma_flush to find this. 
From a glance ARM has similar post-DMA invalidation in 
__dma_page_dev_to_cpu(), ARM64 in __dma_unmap_area() etc etc.
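
Spelled out for a FROM_DEVICE buffer on a non-coherent CPU, the ordering is
roughly as follows (a sketch of the sequence, not any particular arch's
implementation):

	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	/* arch code invalidates (or writes back & invalidates) the lines
	 * covering buf */

	/* ...the device writes buf; the CPU is free to speculatively pull
	 * any of those lines back into its cache during this window... */

	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);
	/* arch code must invalidate again (cpu_needs_post_dma_flush on MIPS,
	 * __dma_page_dev_to_cpu() on ARM, ...) so that subsequent reads see
	 * what the device wrote rather than a speculatively fetched stale
	 * copy */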

At this point what would you have us do with cache lines that are only 
partially covered by the DMA mapping? As above - if we writeback+invalidate we 
risk clobbering data written by the device. If we just invalidate we risk 
discarding data written by the CPU. And this is even ignoring the risk that a 
writeback of one of these lines happens due to eviction during the DMA 
transfer, which we have no control over at all.

> It's not like the device can put "garbage" into the bytes earlier in
> the cacheline it was given partial access to.

Right - but the CPU can "put garbage" into the bytes the device was given 
access to, simply by fetching stale bytes from main memory before the device 
overwrites them.

> I really don't see what the problem is and why MIPS needs special
> handling.  You will have to give specific examples, step by step,
> where things can go wrong before I will be able to consider your
> changes.

MIPS doesn't need special handling - it just needs the driver to obey a 
documented restriction when using the DMA API. One could argue that it ought 
to be you who justifies why you feel networking drivers can ignore the 
documentation, and that if you disagree with the documented API you should aim 
to change it rather than ignore it.

Thanks,
    Paul