Date:   Mon, 12 Oct 2020 09:51:58 +1100
From:   Stephen Rothwell <sfr@...b.auug.org.au>
To:     Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>, Arnd Bergmann <arnd@...db.de>
Cc:     Jean-Philippe Brucker <jean-philippe@...aro.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Next Mailing List <linux-next@...r.kernel.org>,
        Nicholas Piggin <npiggin@...il.com>
Subject: linux-next: manual merge of the arm64 tree with the asm-generic tree

Hi all,

Today's linux-next merge of the arm64 tree got a conflict in:

  arch/arm64/include/asm/mmu_context.h

between commit:

  f911c2a7c096 ("arm64: use asm-generic/mmu_context.h for no-op implementations")

from the asm-generic tree and commit:

  48118151d8cc ("arm64: mm: Pin down ASIDs for sharing mm with devices")

from the arm64 tree.
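
Some background on why these two commits collide: the asm-generic commit drops
arm64's open-coded no-op hooks in favour of the generic fallbacks in
asm-generic/mmu_context.h, while the arm64 commit needs a real
init_new_context() to initialise the new pinned-ASID refcount. As a rough
sketch (paraphrased from my reading of the generic header, not a verbatim
copy), the fallbacks are guarded with #ifndef, so an architecture that
supplies its own version is expected to mark it with a same-name define (the
usual "#define foo foo" trick) so the generic copy is skipped:

/*
 * Paraphrased sketch of the asm-generic fallback pattern; assumes the usual
 * kernel types (struct task_struct, struct mm_struct) are already available.
 */
#ifndef init_new_context
/* Generic no-op, used only when the architecture has not defined its own. */
static inline int init_new_context(struct task_struct *tsk,
				   struct mm_struct *mm)
{
	return 0;
}
#endif

#ifndef destroy_context
/* Generic no-op teardown, likewise overridable per architecture. */
static inline void destroy_context(struct mm_struct *mm)
{
}
#endif

That is why the resolution below keeps the arm64-specific init_new_context(),
drops the old destroy_context() macro, and adds the #include of
asm-generic/mmu_context.h at the end of the header to pick up the remaining
no-ops.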

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc arch/arm64/include/asm/mmu_context.h
index fe2862aa1dad,0672236e1aea..000000000000
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@@ -174,9 -174,16 +174,15 @@@ static inline void cpu_replace_ttbr1(pg
   * Setting a reserved TTBR0 or EPD0 would work, but it all gets ugly when you
   * take CPU migration into account.
   */
 -#define destroy_context(mm)		do { } while(0)
  void check_and_switch_context(struct mm_struct *mm);
  
- #define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
+ static inline int
+ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+ {
+ 	atomic64_set(&mm->context.id, 0);
+ 	refcount_set(&mm->context.pinned, 0);
+ 	return 0;
+ }
  
  #ifdef CONFIG_ARM64_SW_TTBR0_PAN
  static inline void update_saved_ttbr0(struct task_struct *tsk,
@@@ -245,8 -251,12 +251,11 @@@ switch_mm(struct mm_struct *prev, struc
  void verify_cpu_asid_bits(void);
  void post_ttbr_update_workaround(void);
  
+ unsigned long arm64_mm_context_get(struct mm_struct *mm);
+ void arm64_mm_context_put(struct mm_struct *mm);
+ 
 +#include <asm-generic/mmu_context.h>
 +
  #endif /* !__ASSEMBLY__ */
  
  #endif /* !__ASM_MMU_CONTEXT_H */
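
The two declarations added above, arm64_mm_context_get() and
arm64_mm_context_put(), come from the pinned-ASID commit; as I understand it,
they let a driver that shares a process address space with a device keep the
ASID stable across rollover. A rough, hypothetical caller sketch (everything
except the two arm64 functions and struct mm_struct is invented for
illustration):

/*
 * Hypothetical illustration only: "struct my_dev" and both functions are
 * made up for this sketch and are not part of the patch.
 */
struct my_dev {
	unsigned long asid;
};

static int my_dev_bind_mm(struct my_dev *dev, struct mm_struct *mm)
{
	unsigned long asid;

	/*
	 * Pin the ASID so a rollover cannot hand it to another mm while
	 * the device is still using it.
	 */
	asid = arm64_mm_context_get(mm);
	if (!asid)
		return -ENOSPC;	/* assumption: 0 signals that pinning failed */

	dev->asid = asid;	/* e.g. program the device context with this ASID */
	return 0;
}

static void my_dev_unbind_mm(struct my_dev *dev, struct mm_struct *mm)
{
	/* Release the pin once the device no longer walks this mm. */
	arm64_mm_context_put(mm);
}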

