Date:	Mon, 23 Dec 2013 14:58:50 +1100
From:	Stephen Rothwell <sfr@...b.auug.org.au>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-next@...r.kernel.org, linux-kernel@...r.kernel.org,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: linux-next: manual merge of the akpm-current tree with Linus' tree

Hi Andrew,

Today's linux-next merge of the akpm-current tree got conflicts in
include/linux/mm.h and mm/memory.c between commit 597d795a2a78 ("mm: do
not allocate page->ptl dynamically, if spinlock_t fits to long") from
Linus' tree and commit 489bd4be2d70 ("mm: create a separate slab for
page->ptl allocation") from the akpm-current tree.
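
For context on that side of the conflict: the commit from Linus' tree
replaces the BLOATED_SPINLOCKS test with ALLOC_SPLIT_PTLOCKS, which
decides at build time whether spinlock_t is small enough to be embedded
in struct page directly.  Roughly, as a sketch of the idea only (the
real definition lives in include/linux/mm_types.h, and SPINLOCK_SIZE is
assumed here to be a generated sizeof(spinlock_t) constant):

/* allocate page->ptl out of line only if spinlock_t doesn't fit in a long */
#define ALLOC_SPLIT_PTLOCKS	(SPINLOCK_SIZE > BITS_PER_LONG / 8)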

I fixed it up (see below) and can carry the fix as necessary (no action
is required).

-- 
Cheers,
Stephen Rothwell                    sfr@...b.auug.org.au

diff --cc include/linux/mm.h
index bf362d053ce1,1f232229a451..000000000000
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@@ -1317,7 -1355,8 +1355,8 @@@ static inline pmd_t *pmd_alloc(struct m
  #endif /* CONFIG_MMU && !__ARCH_HAS_4LEVEL_HACK */
  
  #if USE_SPLIT_PTE_PTLOCKS
 -#if BLOATED_SPINLOCKS
 +#if ALLOC_SPLIT_PTLOCKS
+ void __init ptlock_cache_init(void);
  extern bool ptlock_alloc(struct page *page);
  extern void ptlock_free(struct page *page);
  
@@@ -1325,7 -1364,8 +1364,8 @@@ static inline spinlock_t *ptlock_ptr(st
  {
  	return page->ptl;
  }
 -#else /* BLOATED_SPINLOCKS */
 +#else /* ALLOC_SPLIT_PTLOCKS */
+ static inline void ptlock_cache_init(void) {}
  static inline bool ptlock_alloc(struct page *page)
  {
  	return true;
diff --cc mm/memory.c
index 6768ce9e57d2,cf6098c10084..000000000000
--- a/mm/memory.c
+++ b/mm/memory.c
@@@ -4271,7 -4271,14 +4271,14 @@@ void copy_user_huge_page(struct page *d
  }
  #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
  
 -#if USE_SPLIT_PTE_PTLOCKS && BLOATED_SPINLOCKS
 +#if USE_SPLIT_PTE_PTLOCKS && ALLOC_SPLIT_PTLOCKS
+ static struct kmem_cache *page_ptl_cachep;
+ void __init ptlock_cache_init(void)
+ {
+ 	page_ptl_cachep = kmem_cache_create("page->ptl", sizeof(spinlock_t), 0,
+ 			SLAB_PANIC, NULL);
+ }
+ 
  bool ptlock_alloc(struct page *page)
  {
  	spinlock_t *ptl;

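The last hunk above stops at the top of ptlock_alloc().  With the
separate slab in place, the allocation path presumably ends up drawing
from page_ptl_cachep along these lines (a sketch of the combined result,
not the exact tree contents):

bool ptlock_alloc(struct page *page)
{
	spinlock_t *ptl;

	/* assumed: allocate the spinlock from the dedicated slab cache */
	ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
	if (!ptl)
		return false;
	page->ptl = ptl;
	return true;
}
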