Message-ID: <1447112591.21443.35.camel@hpe.com>
Date: Mon, 09 Nov 2015 16:43:11 -0700
From: Toshi Kani <toshi.kani@....com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
hpa@...or.com, tglx@...utronix.de, mingo@...hat.com,
akpm@...ux-foundation.org
Cc: bp@...en8.de, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
x86@...nel.org, jgross@...e.com, konrad.wilk@...cle.com,
elliott@....com, boris.ostrovsky@...cle.com
Subject: Re: [PATCH] x86/mm: fix regression with huge pages on PAE
On Tue, 2015-11-10 at 01:18 +0200, Kirill A. Shutemov wrote:
> The recent PAT patchset has caused an issue on 32-bit PAE machines:
:
> The problem is in pmd_pfn_mask() and pmd_flags_mask(). These helpers use
> PMD_PAGE_MASK to calculate the resulting mask. PMD_PAGE_MASK is 'unsigned
> long', not 'unsigned long long' like phys_addr_t, so the upper bits of the
> resulting mask get truncated.
>
> The patch reworks the code to use PMD_SHIFT as the base of the mask
> calculation instead of PMD_PAGE_MASK.
>
> pud_pfn_mask() and pud_flags_mask() aren't problematic since we don't
> have a PUD page table level on 32-bit systems, but they are reworked too
> to be consistent with their PMD counterparts.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Reported-and-Tested-by: Boris Ostrovsky <boris.ostrovsky@...cle.com>
> Fixes: f70abb0fc3da ("x86/asm: Fix pud/pmd interfaces to handle large PAT bit")
> Cc: Toshi Kani <toshi.kani@....com>
> ---
> arch/x86/include/asm/pgtable_types.h | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index dd5b0aa9dd2f..c1e797266ce9 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -279,17 +279,14 @@ static inline pmdval_t native_pmd_val(pmd_t pmd)
>  static inline pudval_t pud_pfn_mask(pud_t pud)
>  {
>  	if (native_pud_val(pud) & _PAGE_PSE)
> -		return PUD_PAGE_MASK & PHYSICAL_PAGE_MASK;
> +		return ~((1ULL << PUD_SHIFT) - 1) & PHYSICAL_PAGE_MASK;
Thanks for the fix! Should we also fix the PMD/PUD PAGE_SIZE/PAGE_MASK macros
themselves, so that we do not hit the same issue again the next time they are
used?
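Just to spell out the failure mode for anyone following along, here is a
standalone sketch (not the real headers: PMD_SHIFT and the example address
are only assumptions for a 32-bit PAE configuration, and uint32_t stands in
for the 32-bit 'unsigned long'):

#include <stdio.h>
#include <stdint.h>

#define PMD_SHIFT 21    /* 2M huge pages under PAE */

int main(void)
{
        /* models PMD_PAGE_MASK built from a 32-bit unsigned long */
        uint32_t pmd_page_mask = ~(((uint32_t)1 << PMD_SHIFT) - 1);
        uint64_t phys = 0x180000000ULL;  /* physical address above 4GB */

        /* the 32-bit mask zero-extends, so bits 32..63 of phys are lost */
        printf("truncated: %#llx\n",
               (unsigned long long)(phys & pmd_page_mask));

        /* building the mask from PMD_SHIFT in 64 bits keeps the upper
           bits, which is what the patch does in pmd_pfn_mask() */
        printf("kept:      %#llx\n",
               (unsigned long long)(phys & ~((1ULL << PMD_SHIFT) - 1)));

        return 0;
}

With the mask widened to 64 bits the upper address bits survive, so the
macros themselves could be widened along the same lines: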
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -17,10 +17,10 @@
    (ie, 32-bit PAE). */
 #define PHYSICAL_PAGE_MASK	(((signed long)PAGE_MASK) & __PHYSICAL_MASK)
-#define PMD_PAGE_SIZE		(_AC(1, UL) << PMD_SHIFT)
+#define PMD_PAGE_SIZE		(_AC(1, ULL) << PMD_SHIFT)
 #define PMD_PAGE_MASK		(~(PMD_PAGE_SIZE-1))
-#define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
+#define PUD_PAGE_SIZE		(_AC(1, ULL) << PUD_SHIFT)
 #define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
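For what it's worth, _AC() (going from memory of include/uapi/linux/const.h)
pastes the suffix only on the C side and drops it for assembly, so the
UL -> ULL change above widens the constants for C users without disturbing
any .S code:

/* sketch modeled on include/uapi/linux/const.h, not copied verbatim */
#ifdef __ASSEMBLY__
# define _AC(X, Y)  X             /* assembler has no integer suffixes */
#else
# define __AC(X, Y) (X##Y)
# define _AC(X, Y)  __AC(X, Y)    /* C: paste the suffix, e.g. 1ULL */
#endif

/* with the hunk above, on 32-bit PAE (PMD_SHIFT == 21):
 *   PMD_PAGE_SIZE = 1ULL << 21  -> 64-bit 0x200000
 *   PMD_PAGE_MASK = ~(SIZE - 1) -> 64-bit 0xffffffffffe00000
 */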
Thanks,
-Toshi