Message-ID: <20200316214248.GB18970@Asurada-Nvidia.nvidia.com>
Date:   Mon, 16 Mar 2020 14:42:49 -0700
From:   Nicolin Chen <nicoleotsuka@...il.com>
To:     Robin Murphy <robin.murphy@....com>
Cc:     Christoph Hellwig <hch@....de>, m.szyprowski@...sung.com,
        linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org
Subject: Re: [RFC][PATCH] dma-mapping: align default segment_boundary_mask
 with dma_mask

On Mon, Mar 16, 2020 at 01:16:16PM +0000, Robin Murphy wrote:
> On 2020-03-16 12:46 pm, Christoph Hellwig wrote:
> > On Mon, Mar 16, 2020 at 12:12:08PM +0000, Robin Murphy wrote:
> > > On 2020-03-14 12:00 am, Nicolin Chen wrote:
> > > > More and more drivers set dma_masks above DMA_BIT_MASK(32) while
> > > > only a handful of drivers call dma_set_seg_boundary(). This means
> > > > that most drivers have a 4GB segmentation boundary because DMA API
> > > > returns DMA_BIT_MASK(32) as a default value, though they might be
> > > > able to handle things above 32-bit.
> > > 
> > > Don't assume the boundary mask and the DMA mask are related. There do exist
> > > devices which can DMA to a 64-bit address space in general, but due to
> > > descriptor formats/hardware design/whatever still require any single
> > > transfer not to cross some smaller boundary. XHCI is 64-bit yet requires
> > > most things not to cross a 64KB boundary. EHCI's 64-bit mode is an example
> > > of the 4GB boundary (not the best example, admittedly, but it undeniably
> > > exists).
> > 
> > Yes, which is what the boundary is for.  But why would we default to
> > something restrictive by default even if the driver didn't ask for it?
> 
> I've always assumed it was for the same reason as the 64KB segment length,
> i.e. it was sufficiently common as an actual restriction, but still "good
> enough" for everyone else. I remember digging up all the history to
> understand what these were about back when I implemented the map_sg stuff,
> and from that I'd imagine the actual values are somewhat biased towards SCSI
> HBAs, since they originated in the block and SCSI layers.

Yea, I did the same:

commit d22a6966b8029913fac37d078ab2403898d94c63
Author: FUJITA Tomonori <tomof@....org>
Date:   Mon Feb 4 22:28:13 2008 -0800

    iommu sg merging: add accessors for segment_boundary_mask in device_dma_parameters()

    This adds new accessors for segment_boundary_mask in device_dma_parameters
    structure in the same way I did for max_segment_size.  So we can easily change
    where to place struct device_dma_parameters in the future.

    dma_get_segment_boundary returns 0xffffffff if dma_parms in struct device
    isn't set up properly.  0xffffffff is the default value used in the block
    layer and the scsi mid layer.

    Signed-off-by: FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
    Cc: James Bottomley <James.Bottomley@...eleye.com>
    Cc: Jens Axboe <jens.axboe@...cle.com>
    Cc: Greg KH <greg@...ah.com>
    Cc: Jeff Garzik <jeff@...zik.org>
    Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
