Date:   Fri, 11 Dec 2020 19:56:54 +1100
From:   Stephen Rothwell <sfr@...b.auug.org.au>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     Jason Gunthorpe <jgg@...dia.com>, Jason Gunthorpe <jgg@...pe.ca>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux Next Mailing List <linux-next@...r.kernel.org>
Subject: linux-next: manual merge of the akpm-current tree with the tip tree

Hi all,

Today's linux-next merge of the akpm-current tree got a conflict in:

  mm/gup.c

between commit:

  2a4a06da8a4b ("mm/gup: Provide gup_get_pte() more generic")

from the tip tree and commit:

  1eb2fe862a51 ("mm/gup: combine put_compound_head() and unpin_user_page()")

from the akpm-current tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc mm/gup.c
index 44b0c6b89602,b3d852b4a60c..000000000000
--- a/mm/gup.c
+++ b/mm/gup.c
@@@ -2062,29 -1977,62 +1977,6 @@@ EXPORT_SYMBOL(get_user_pages_unlocked)
   * This code is based heavily on the PowerPC implementation by Nick Piggin.
   */
  #ifdef CONFIG_HAVE_FAST_GUP
 -#ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
--
- static void put_compound_head(struct page *page, int refs, unsigned int flags)
 -/*
 - * WARNING: only to be used in the get_user_pages_fast() implementation.
 - *
 - * With get_user_pages_fast(), we walk down the pagetables without taking any
 - * locks.  For this we would like to load the pointers atomically, but sometimes
 - * that is not possible (e.g. without expensive cmpxchg8b on x86_32 PAE).  What
 - * we do have is the guarantee that a PTE will only either go from not present
 - * to present, or present to not present or both -- it will not switch to a
 - * completely different present page without a TLB flush in between; something
 - * that we are blocking by holding interrupts off.
 - *
 - * Setting ptes from not present to present goes:
 - *
 - *   ptep->pte_high = h;
 - *   smp_wmb();
 - *   ptep->pte_low = l;
 - *
 - * And present to not present goes:
 - *
 - *   ptep->pte_low = 0;
 - *   smp_wmb();
 - *   ptep->pte_high = 0;
 - *
 - * We must ensure here that the load of pte_low sees 'l' IFF pte_high sees 'h'.
 - * We load pte_high *after* loading pte_low, which ensures we don't see an older
 - * value of pte_high.  *Then* we recheck pte_low, which ensures that we haven't
 - * picked up a changed pte high. We might have gotten rubbish values from
 - * pte_low and pte_high, but we are guaranteed that pte_low will not have the
 - * present bit set *unless* it is 'l'. Because get_user_pages_fast() only
 - * operates on present ptes we're safe.
 - */
 -static inline pte_t gup_get_pte(pte_t *ptep)
--{
- 	if (flags & FOLL_PIN) {
- 		mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_RELEASED,
- 				    refs);
 -	pte_t pte;
--
- 		if (hpage_pincount_available(page))
- 			hpage_pincount_sub(page, refs);
- 		else
- 			refs *= GUP_PIN_COUNTING_BIAS;
- 	}
 -	do {
 -		pte.pte_low = ptep->pte_low;
 -		smp_rmb();
 -		pte.pte_high = ptep->pte_high;
 -		smp_rmb();
 -	} while (unlikely(pte.pte_low != ptep->pte_low));
--
- 	VM_BUG_ON_PAGE(page_ref_count(page) < refs, page);
- 	/*
- 	 * Calling put_page() for each ref is unnecessarily slow. Only the last
- 	 * ref needs a put_page().
- 	 */
- 	if (refs > 1)
- 		page_ref_sub(page, refs - 1);
- 	put_page(page);
 -	return pte;
 -}
 -#else /* CONFIG_GUP_GET_PTE_LOW_HIGH */
 -/*
 - * We require that the PTE can be read atomically.
 - */
 -static inline pte_t gup_get_pte(pte_t *ptep)
 -{
 -	return ptep_get(ptep);
--}
 -#endif /* CONFIG_GUP_GET_PTE_LOW_HIGH */
--
  static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
  					    unsigned int flags,
  					    struct page **pages)
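
[Editor's note: the comment being moved above describes the classic split-load
retry used where a 64-bit PTE cannot be loaded atomically (e.g. x86_32 PAE).
A minimal standalone sketch of that reader-side protocol follows; the names
and the use of C11 fences in place of smp_rmb() are illustrative only, not
kernel code.]

#include <stdint.h>
#include <stdatomic.h>

/* Illustrative stand-in for a 64-bit PTE stored as two 32-bit halves. */
struct split_pte {
	uint32_t pte_low;
	uint32_t pte_high;
};

/*
 * Reader side of the protocol described in the comment above: load the
 * low half, fence, load the high half, fence, then retry if the low
 * half changed underneath us.  Per the original comment, pte_low will
 * not have the present bit set unless it belongs to the value written
 * together with the observed pte_high, so rechecking pte_low alone is
 * sufficient for code that only acts on present PTEs.
 */
static uint64_t sketch_gup_get_pte(const volatile struct split_pte *ptep)
{
	uint32_t lo, hi;

	do {
		lo = ptep->pte_low;
		atomic_thread_fence(memory_order_acquire);	/* ~ smp_rmb() */
		hi = ptep->pte_high;
		atomic_thread_fence(memory_order_acquire);	/* ~ smp_rmb() */
	} while (lo != ptep->pte_low);

	return ((uint64_t)hi << 32) | lo;
}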

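[Editor's note: the put_compound_head() tail visible in the hunk avoids
calling put_page() once per reference by folding all but the last drop into
a single subtraction.  A generic sketch of that batching idea, using
hypothetical obj_* names and C11 atomics in place of the kernel's
page_ref_sub()/put_page() helpers:]

#include <stdatomic.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;	/* stand-in for the struct page refcount */
};

/* Drop one reference; the last one frees the object. */
static void obj_put(struct obj *o)
{
	if (atomic_fetch_sub(&o->refcount, 1) == 1)
		free(o);
}

/*
 * Drop 'refs' references.  Calling obj_put() refs times would be
 * needlessly slow: only the final put can free the object, so fold
 * the first refs - 1 drops into one atomic subtraction, mirroring
 * the page_ref_sub()/put_page() pair in the hunk above.
 */
static void obj_put_refs(struct obj *o, int refs)
{
	if (refs > 1)
		atomic_fetch_sub(&o->refcount, refs - 1);
	obj_put(o);
}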
