Date:   Mon, 6 Apr 2020 14:00:51 -0700
From:   Nicolin Chen <nicoleotsuka@...il.com>
To:     Robin Murphy <robin.murphy@....com>
Cc:     m.szyprowski@...sung.com, hch@....de, linux-kernel@...r.kernel.org,
        iommu@...ts.linux-foundation.org
Subject: Re: [RFC/RFT][PATCH] dma-mapping: set default segment_boundary_mask
 to ULONG_MAX

On Mon, Apr 06, 2020 at 02:48:13PM +0100, Robin Murphy wrote:
> On 2020-04-05 1:51 am, Nicolin Chen wrote:
> > The default segment_boundary_mask was set to DMA_BIT_MASK(32)
> > a decade ago by referencing the SCSI/block subsystem, as a
> > 32-bit mask was good enough for most devices.
> > 
> > Now more and more drivers set dma_masks above DMA_BIT_MASK(32)
> > while only a handful of them call dma_set_seg_boundary(). This
> > means that most drivers have a 4GB segmentation boundary because
> > the DMA API returns a 32-bit default value, though they might
> > not really have such a limit.
> > 
> > The default segment_boundary_mask should mean "no limit", since
> > the device doesn't explicitly set the mask. But a 32-bit mask
> > certainly limits devices capable of addressing beyond 32 bits.
> > 
> > And this 32-bit boundary mask might result in a situation where,
> > when dma-iommu maps a DMA buffer (size > 4GB), iommu_map_sg()
> > cuts the IOVA region into discontiguous pieces and creates a
> > faulty IOVA mapping that overlaps some physical memory outside
> > the scatter list, which might lead to a random kernel panic
> > after DMA overwrites that faulty IOVA space.
> 
> Once again, get rid of this paragraph - it doesn't have much to do with the
> *default* value since it describes a behaviour general to any boundary mask.
> Plus it effectively says "if a driver uses a DMA-mapped scatterlist
> incorrectly, this change can help paper over the bug", which is rather the
> opposite of a good justification.

Np. Will drop it and resend.
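
For reference, the change itself is tiny -- roughly the following in
dma_get_seg_boundary() (a sketch against include/linux/dma-mapping.h;
the exact hunk in the resend may differ):

/* include/linux/dma-mapping.h -- sketch, not the final patch */
static inline unsigned long dma_get_seg_boundary(struct device *dev)
{
	/* Drivers with a real limit still set it via dma_set_seg_boundary() */
	if (dev->dma_parms && dev->dma_parms->segment_boundary_mask)
		return dev->dma_parms->segment_boundary_mask;

	/* Was: return DMA_BIT_MASK(32); treat "not set" as "no limit" */
	return ULONG_MAX;
}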

> (for example most SATA devices end up with a 64KB boundary mask, such that
> padding the IOVAs to provide the appropriate alignment happens very
> frequently, and they've been working just fine for years now)

Okay.
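
(For anyone following along: the padding Robin mentions happens while
dma-iommu sizes the single IOVA allocation for a scatterlist. A
simplified sketch of that logic, loosely based on iommu_dma_map_sg()
in drivers/iommu/dma-iommu.c, with names and details trimmed:)

static size_t sg_iova_len_sketch(struct device *dev,
				 struct iova_domain *iovad,
				 struct scatterlist *sg, int nents)
{
	unsigned long mask = dma_get_seg_boundary(dev);
	struct scatterlist *s, *prev = NULL;
	size_t iova_len = 0;
	int i;

	for_each_sg(sg, s, nents, i) {
		size_t s_length = iova_align(iovad, s->length);
		/* Pad so that no segment straddles a (mask + 1) boundary */
		size_t pad_len = (mask - iova_len + 1) & mask;

		if (pad_len && pad_len < s_length - 1) {
			prev->length += pad_len;
			iova_len += pad_len;
		}
		iova_len += s_length;
		prev = s;
	}
	return iova_len;
}

With a 64K boundary mask that padding triggers very frequently, as you
say; with ULONG_MAX it effectively never does.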

Thanks!
