Message-ID: <0c897b24b234f8d42ef597dbe31fa8293519a4b2.camel@vmware.com>
Date:   Wed, 10 Apr 2019 15:01:14 +0000
From:   Thomas Hellstrom <thellstrom@...are.com>
To:     "hch@....de" <hch@....de>
CC:     "torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Deepak Singh Rawat <drawat@...are.com>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
Subject: Re: revert dma direct internals abuse

On Wed, 2019-04-10 at 08:43 +0200, hch@....de wrote:
> On Tue, Apr 09, 2019 at 05:24:48PM +0000, Thomas Hellstrom wrote:
> > > Note that this only affects external, untrusted devices.  But
> > > that may include eGPU,
> > 
> > What about discrete graphics cards, like Radeon and Nvidia? Who
> > gets to determine what's trusted?
> 
> Based on firmware tables.  Discrete graphics would not qualify unless
> they are attached through Thunderbolt bridges or external PCIe ports.
> 
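For reference, a minimal sketch of how that firmware-derived trust
information surfaces to drivers (the field is real in 5.x kernels, the
helper name is hypothetical): the PCI core sets pci_dev->untrusted for
devices behind ports that firmware marks as external-facing (e.g. the
ACPI "ExternalFacingPort" property), so a Thunderbolt-attached eGPU is
flagged while an internal discrete GPU is not.

    #include <linux/pci.h>

    /* Hypothetical helper: true for external, potentially untrusted
     * devices such as eGPUs behind Thunderbolt bridges. */
    static bool my_gpu_is_external(struct pci_dev *pdev)
    {
            return pdev->untrusted;
    }
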
> > GPU libraries have traditionally taken care of the CPU mapping
> > caching modes since the first AGP drivers. GPU MMU PTEs commonly
> > support various caching options, and pages change caching mode
> > dynamically. So even if the DMA layer needs to do the remapping,
> > couldn't we do that on demand when needed with a simple interface?
> 
> The problem is that there is no "simple" interface as the details
> depend on the architecture.  We have the following base variants
> to create coherent memory:
> 
>   1) do nothing - this works on x86-like platforms where I/O is
>      always coherent
>   2) use a special kernel segment, after flushing the caches for the
>      normal segment, done on platforms like mips that have this
>      special segment
>   3) remap the existing kernel direct mapping, after flushing the
>      caches, done by openrisc and in some cases arm32
>   4) create a new mapping in vmap or ioremap space after flushing the
>      caches - done by most architectures with an MMU and non-coherent
>      devices
>   5) use a special pool of uncached memory set aside by the hardware
>      or firmware - done by most architectures without an MMU but with
>      non-coherent devices
> 

Understood. Unfortunately, IMO this severely limits the use of the
dma_alloc_coherent() method.
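For reference, a minimal sketch of the API in question, assuming a
hypothetical driver with a valid struct device and buffer size:
dma_alloc_coherent() hands back a CPU address and a device (DMA)
address for the same buffer, and the DMA layer applies whichever of
the five variants above the platform requires, which is part of why
a driver cannot simply change the caching mode of such a buffer
afterwards.

    #include <linux/dma-mapping.h>

    /* Hypothetical example: the device reads/writes via dma_handle,
     * the CPU via cpu_addr; the DMA layer owns the caching attributes
     * of this mapping, so the driver must not change them. */
    static int my_coherent_buf_demo(struct device *dev, size_t size)
    {
            dma_addr_t dma_handle;
            void *cpu_addr;

            cpu_addr = dma_alloc_coherent(dev, size, &dma_handle,
                                          GFP_KERNEL);
            if (!cpu_addr)
                    return -ENOMEM;

            /* ... program the device with dma_handle, use cpu_addr ... */

            dma_free_coherent(dev, size, cpu_addr, dma_handle);
            return 0;
    }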

> So that is just five major variants, with a lot of details on how
> it is done in practice.  On top of that, many of the operations
> are fairly expensive and need to be pre-loaded.
> 
> > > That being said: your driver already uses the dma coherent API
> > > under various circumstances, so you already have the above
> > > issues.
> > 
> > Yes, but they are hidden behind driver options. We can't have
> > someone upgrade their kernel and suddenly find that things no
> > longer work. That said, I think the SWIOTLB case is rare enough
> > for the solution below to be acceptable, although the TTM check
> > for the coherent page pool being available still needs to remain.
> 
> So can you please respin a version acceptable to you and submit it
> for 5.1 ASAP?  Otherwise I'll need to move ahead with the simple
> revert.

I will.
I need to do some testing to investigate how best to choose between
the options, but I will have something ready for 5.1.
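Purely as an illustration of the kind of selection logic being tested
(hypothetical names and policy, not the eventual vmwgfx patch): keep
the streaming/populate path as the default, and fall back to coherent
allocations only when bounce buffering would otherwise kick in.

    #include <linux/swiotlb.h>

    /* Hypothetical selection logic.  The mode names loosely mirror
     * vmwgfx's enum vmw_dma_map_mode; force_coherent stands in for
     * the existing vmw_force_coherent module option. */
    enum my_map_mode { MY_MAP_POPULATE, MY_ALLOC_COHERENT };

    static enum my_map_mode my_select_map_mode(bool force_coherent)
    {
            /* Honor the explicit driver option first. */
            if (force_coherent)
                    return MY_ALLOC_COHERENT;
            /* swiotlb_nr_tbl() != 0 means the SWIOTLB bounce buffer
             * is initialized (5.x-era API) and streaming mappings may
             * get bounced, so fall back to the coherent pool. */
            if (swiotlb_nr_tbl())
                    return MY_ALLOC_COHERENT;
            /* Otherwise keep plain streaming/populate mappings. */
            return MY_MAP_POPULATE;
    }
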

/Thomas
