Date:   Thu, 7 Oct 2021 09:42:21 +1100
From:   Stephen Rothwell <sfr@...b.auug.org.au>
To:     Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>
Cc:     Anshuman Khandual <anshuman.khandual@....com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Next Mailing List <linux-next@...r.kernel.org>,
        Mike Kravetz <mike.kravetz@...cle.com>
Subject: linux-next: manual merge of the arm64 tree with the arm64-fixes
 tree

Hi all,

Today's linux-next merge of the arm64 tree got a conflict in:

  arch/arm64/mm/hugetlbpage.c

between commit:

  0350419b14b9 ("arm64/hugetlb: fix CMA gigantic page order for non-4K PAGE_SIZE")

from the arm64-fixes tree and commit:

  f8b46c4b51ab ("arm64/mm: Add pud_sect_supported()")

from the arm64 tree.
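
For context, both commits touch the same order calculation in
arm64_hugetlb_cma_reserve(): the arm64-fixes commit corrects the gigantic
page order used on non-4K PAGE_SIZE kernels, while the arm64 commit
replaces the CONFIG_ARM64_4K_PAGES #ifdef with a runtime
pud_sect_supported() check but still carries the pre-fix order.  A minimal
sketch of the two conflicting versions, reconstructed from the combined
diff below rather than copied verbatim from either tree:

  /* arm64-fixes tree (0350419b14b9): compile-time check, corrected order */
  #ifdef CONFIG_ARM64_4K_PAGES
  	order = PUD_SHIFT - PAGE_SHIFT;
  #else
  	order = CONT_PMD_SHIFT - PAGE_SHIFT;
  #endif

  /* arm64 tree (f8b46c4b51ab): runtime check, pre-fix order */
  if (pud_sect_supported())
  	order = PUD_SHIFT - PAGE_SHIFT;
  else
  	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;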

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc arch/arm64/mm/hugetlbpage.c
index a8158c948966,029cf5e42c4c..000000000000
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@@ -40,11 -40,10 +40,10 @@@ void __init arm64_hugetlb_cma_reserve(v
  {
  	int order;
  
- #ifdef CONFIG_ARM64_4K_PAGES
- 	order = PUD_SHIFT - PAGE_SHIFT;
- #else
- 	order = CONT_PMD_SHIFT - PAGE_SHIFT;
- #endif
+ 	if (pud_sect_supported())
+ 		order = PUD_SHIFT - PAGE_SHIFT;
+ 	else
 -		order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
++		order = CONT_PMD_SHIFT - PAGE_SHIFT;
  	/*
  	 * HugeTLB CMA reservation is required for gigantic
  	 * huge pages which could not be allocated via the

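For anyone resolving the same conflict locally, the merged hunk reads as
below once the diff markers are stripped, shown together with the helper
that the arm64 tree introduces.  The body of pud_sect_supported() is an
assumption based on the commit subject (PUD-sized section mappings are only
available with 4K base pages on arm64); the authoritative definition is the
one added to arch/arm64/include/asm/pgtable.h by f8b46c4b51ab:

  /* Assumed shape of the helper added by f8b46c4b51ab. */
  static inline bool pud_sect_supported(void)
  {
  	return PAGE_SIZE == SZ_4K;
  }

  void __init arm64_hugetlb_cma_reserve(void)
  {
  	int order;

  	/*
  	 * Resolution: keep the runtime check from the arm64 tree and the
  	 * corrected non-4K order from the arm64-fixes tree.
  	 */
  	if (pud_sect_supported())
  		order = PUD_SHIFT - PAGE_SHIFT;
  	else
  		order = CONT_PMD_SHIFT - PAGE_SHIFT;
  	/* remainder of the function is unchanged */
  }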