Message-Id: <984c3737-3934-49bc-908e-8d67facd68a6@app.fastmail.com>
Date: Thu, 08 Jan 2026 10:55:31 +0100
From: "Arnd Bergmann" <arnd@...nel.org>
To: "Marek Szyprowski" <m.szyprowski@...sung.com>,
 "Aneesh Kumar K.V (Arm)" <aneesh.kumar@...nel.org>, iommu@...ts.linux.dev,
 linux-kernel@...r.kernel.org
Cc: "Robin Murphy" <robin.murphy@....com>,
 "Linus Walleij" <linusw@...nel.org>, "Matthew Wilcox" <willy@...radead.org>,
 "Suzuki K Poulose" <suzuki.poulose@....com>
Subject: Re: [PATCH] dma-direct: Skip cache prep for HighMem coherent allocations

On Thu, Jan 8, 2026, at 09:38, Marek Szyprowski wrote:
> On 02.01.2026 16:51, Aneesh Kumar K.V (Arm) wrote:
>> dma_direct_alloc() calls arch_dma_prep_coherent() to clean any dirty
>> cache lines from the kernel linear alias before creating a coherent
>> remapping.
>>
>> HighMem pages have no kernel alias mapping, so there are no alias cache
>> lines to clean. Skip arch_dma_prep_coherent() for HighMem allocations.
>>
>> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>
>
> Indeed, calling the cache prep for HighMem pages is unnecessary
> overhead, but on the other hand highmem support is being phased out,
> according to
> https://lwn.net/ml/all/20251219161559.556737-1-arnd@kernel.org/
> Does it make sense to apply this, assuming that highmem will be
> removed soon?
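
For reference, the check described in the quoted patch amounts to
roughly the following sketch. This is not the actual patch; the
dma_direct_prep_coherent() wrapper is hypothetical and only used for
illustration here, while PageHighMem() and arch_dma_prep_coherent()
are the existing kernel interfaces:

#include <linux/dma-map-ops.h>	/* arch_dma_prep_coherent() */
#include <linux/page-flags.h>	/* PageHighMem() */

/* Hypothetical wrapper around the existing arch hook, sketch only. */
static void dma_direct_prep_coherent(struct page *page, size_t size)
{
	/*
	 * HighMem pages have no kernel linear alias, so there are no
	 * dirty alias cache lines to clean before the coherent remap.
	 */
	if (PageHighMem(page))
		return;

	arch_dma_prep_coherent(page, size);
}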

I think "soon" is overstating what the plan is. With my proposed
series, the majority of current highmem users are changed to no
longer use it by default, but there are still three ways in which
users will get highmem for a number of years:

- anything with more than 2GB of RAM inevitably uses highmem for
  the top portion of physical memory. Such systems are much less
  common than 2GB ones, but they are not going away soon.
- systems with sparse physical memory where the first and last page
  are more than 2GB apart currently still rely on highmem even
  if the total RAM is 2GB or less. This happens e.g. on Tegra114
  or RZ-G1H, IIRC. I have plans to address this in the future.
- users whose applications need a lot of virtual address space
  still have the option to go back to the old configuration with
  CONFIG_VMSPLIT_3G by selecting CONFIG_EXPERT, as in the fragment
  below.
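
For anyone going with that last option, the relevant fragment of e.g.
a 32-bit Arm config would presumably look something like this under
the proposed series (an assumed example, not taken from the actual
patches):

    CONFIG_EXPERT=y
    CONFIG_VMSPLIT_3G=y
    CONFIG_HIGHMEM=y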

On the other hand, my proposed change does mean that we have more
freedom to optimize for the non-highmem case. I think we can
require that the CMA area be in lowmem, and we can make the use
of highmem computationally more expensive if it helps simplify code.

      Arnd
