Message-ID: <4802e9fd-733f-3246-92f3-05f590e05d37@arm.com>
Date:   Mon, 28 Feb 2022 10:32:54 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Christoph Hellwig <hch@....de>, iommu@...ts.linux-foundation.org
Cc:     joro@...tes.org, will@...nel.org, x86@...nel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] dma-mapping: remove CONFIG_DMA_REMAP

On 2022-02-27 14:35, Christoph Hellwig wrote:
> CONFIG_DMA_REMAP is used to build a few helpers around the core
> vmalloc code, to use those helpers when dma-direct ends up with a
> highmem page, and to let dma coherent allocations in the dma-iommu
> layer fall back to non-contiguous page allocations.
> 
> Right now it needs to be explicitly selected by architectures, and
> only those that require remapping to deal with devices that are not
> DMA coherent do so.  Make it unconditional for builds with CONFIG_MMU:
> it adds very little extra code, but makes it much more likely that
> large DMA allocations succeed on x86.
> 
> This fixes hot-plugging an NVMe Thunderbolt SSD for me, which tries
> to allocate a 1MB buffer that is otherwise hard to obtain due to
> memory fragmentation on a heavily used laptop.
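
As an aside, for anyone following along: what those helpers boil down
to is taking an array of pages - possibly highmem, possibly
discontiguous - and giving them a contiguous kernel virtual mapping.
A purely illustrative sketch (my own wording, not the in-tree code,
which IIRC lives around dma_common_pages_remap() in kernel/dma/remap.c):

#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Illustrative only: map an array of (possibly highmem) pages into a
 * contiguous kernel virtual range.  vmap() has to build page-table
 * entries for the new range, which is why this machinery needs an MMU
 * and can simply key off CONFIG_MMU after this patch. */
static void *sketch_dma_remap_pages(struct page **pages,
				    unsigned int nr_pages, pgprot_t prot)
{
	return vmap(pages, nr_pages, VM_MAP, prot);
}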

Simplifying the maze is most welcome; however, one thing stands out...

[...]
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 50f48e9e45987..fe1682fecdd57 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -269,10 +269,10 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>   		/*
>   		 * Depending on the cma= arguments and per-arch setup,
>   		 * dma_alloc_contiguous could return highmem pages.
> -		 * Without remapping there is no way to return them here, so
> -		 * log an error and fail.
> +		 * Without MMU-based remapping there is no way to return them
> +		 * here, so log an error and fail.
>   		 */
> -		if (!IS_ENABLED(CONFIG_DMA_REMAP)) {
> +		if (!IS_ENABLED(CONFIG_MMU)) {
>   			dev_info(dev, "Rejecting highmem page from CMA.\n");
>   			goto out_free_pages;
>   		}

Is it even possible to hit this case now? From a quick look, all the 
architectures defining HIGHMEM either have an explicit dependency on MMU 
or don't allow deselecting it anyway (plus I don't see how HIGHMEM && 
!MMU could work in general), so I'm pretty sure this whole chunk should 
go away now.

With that (or, if there *is* some subtle wacky case where PageHighMem() 
can actually return true for !MMU, with a comment to remind us in 
future - see the illustrative wording below),

Reviewed-by: Robin Murphy <robin.murphy@....com>
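
For concreteness, the sort of reminder comment I have in mind (wording
purely illustrative):

		/*
		 * Every architecture that can enable HIGHMEM also depends
		 * on MMU, so this should be unreachable; keep it only as a
		 * safety net in case that ever changes.
		 */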

Cheers,
Robin.
