Message-ID: <CA+G9fYvjKGF3HZXyd=JQHzRG=r=bmD0hYQn02VL4Y=5y57OgaA@mail.gmail.com>
Date:   Tue, 25 Aug 2020 13:03:53 +0530
From:   Naresh Kamboju <naresh.kamboju@...aro.org>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     linux-mm <linux-mm@...ck.org>,
        Linux-Next Mailing List <linux-next@...r.kernel.org>,
        open list <linux-kernel@...r.kernel.org>,
        lkft-triage@...ts.linaro.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        LTP List <ltp@...ts.linux.it>, Arnd Bergmann <arnd@...db.de>,
        Russell King - ARM Linux <linux@...linux.org.uk>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Stephen Rothwell <sfr@...b.auug.org.au>,
        Catalin Marinas <catalin.marinas@....com>,
        Christoph Hellwig <hch@....de>,
        Andy Lutomirski <luto@...nel.org>,
        Peter Xu <peterx@...hat.com>, opendmb@...il.com,
        Linus Walleij <linus.walleij@...aro.org>,
        afzal.mohd.ma@...il.com, Will Deacon <will@...nel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: BUG: Bad page state in process true pfn:a8fed on arm

On Mon, 24 Aug 2020 at 16:36, Matthew Wilcox <willy@...radead.org> wrote:
>
> On Mon, Aug 24, 2020 at 03:14:55PM +0530, Naresh Kamboju wrote:
> > [   67.545247] BUG: Bad page state in process true  pfn:a8fed
> > [   67.550767] page:9640c0ab refcount:0 mapcount:-1024
>
> Somebody freed a page table without calling __ClearPageTable() on it.
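
That matches what we see in the splat: mapcount:-1024 is presumably the
page_type word with PG_table still set when the page reaches the free
path. A minimal sketch of the lifecycle the check expects (illustration
only, not the arm code), using the pgtable_pmd_page_ctor()/
pgtable_pmd_page_dtor() helpers as changed by the suspected patch quoted
at the bottom of this mail:

static void pmd_table_lifecycle_sketch(void)
{
	struct page *page = alloc_page(GFP_PGTABLE_USER);

	if (!page)
		return;
	if (!pgtable_pmd_page_ctor(page)) {	/* __SetPageTable() + NR_PAGETABLE++ */
		__free_page(page);
		return;
	}
	/* ... page is installed and used as a PMD table ... */
	pgtable_pmd_page_dtor(page);		/* __ClearPageTable() + NR_PAGETABLE-- */
	__free_page(page);			/* fine: PG_table already cleared */
}

If any arm-specific free path returns a PMD table page to the allocator
without going through pgtable_pmd_page_dtor(), the "Bad page state"
report above is what we would expect to see.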

After running git bisect on this problem, the first suspected commit
on the arm architecture is
424efe723f7717430bec7c93b4d28bba73e31cf6
("mm: account PMD tables like PTE tables").

Reported-by: Naresh Kamboju <naresh.kamboju@...aro.org>
Reported-by: Anders Roxell <anders.roxell@...aro.org>

Additional information:
We have tested linux-next with this patch reverted and confirmed
that the reported BUG is no longer reproduced.

These configs are enabled on the device under test:

CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_MADVISE=y
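
For context, the generic helpers that are expected to pair the ctor and
dtor look roughly like this (simplified from the asm-generic/pgalloc.h
code of this era and quoted from memory, so details may differ; arm also
has its own page-table alloc/free paths, and presumably one of them
allocates the table through pgtable_pmd_page_ctor() but frees it without
the matching pgtable_pmd_page_dtor()):

static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
	gfp_t gfp = GFP_PGTABLE_USER;
	struct page *page;

	if (mm == &init_mm)
		gfp = GFP_PGTABLE_KERNEL;
	page = alloc_pages(gfp, 0);
	if (!page)
		return NULL;
	if (!pgtable_pmd_page_ctor(page)) {	/* now also sets PG_table */
		__free_pages(page, 0);
		return NULL;
	}
	return (pmd_t *)page_address(page);
}

static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
	pgtable_pmd_page_dtor(virt_to_page(pmd));	/* now also clears PG_table */
	free_page((unsigned long)pmd);
}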


-- Suspected patch --
commit 424efe723f7717430bec7c93b4d28bba73e31cf6
Author: Matthew Wilcox <willy@...radead.org>
Date:   Thu Aug 20 10:01:30 2020 +1000

    mm: account PMD tables like PTE tables

    We account the PTE level of the page tables to the process in order to
    make smarter OOM decisions and help diagnose why memory is fragmented.
    For these same reasons, we should account pages allocated for PMDs.  With
    larger process address spaces and ASLR, the number of PMDs in use is
    higher than it used to be so the inaccuracy is starting to matter.

    Link: http://lkml.kernel.org/r/20200627184642.GF25039@casper.infradead.org
    Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
    Reviewed-by: Mike Rapoport <rppt@...ux.ibm.com>
    Cc: Abdul Haleem <abdhalee@...ux.vnet.ibm.com>
    Cc: Andy Lutomirski <luto@...nel.org>
    Cc: Arnd Bergmann <arnd@...db.de>
    Cc: Christophe Leroy <christophe.leroy@...roup.eu>
    Cc: Joerg Roedel <joro@...tes.org>
    Cc: Max Filippov <jcmvbkbc@...il.com>
    Cc: Peter Zijlstra <peterz@...radead.org>
    Cc: Satheesh Rajendran <sathnaga@...ux.vnet.ibm.com>
    Cc: Stafford Horne <shorne@...il.com>
    Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
    Signed-off-by: Stephen Rothwell <sfr@...b.auug.org.au>

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b0a15ee77b8a..a4e5b806347c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2239,7 +2239,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
  return ptlock_ptr(pmd_to_page(pmd));
 }

-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline bool pmd_ptlock_init(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
  page->pmd_huge_pte = NULL;
@@ -2247,7 +2247,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
  return ptlock_init(page);
 }

-static inline void pgtable_pmd_page_dtor(struct page *page)
+static inline void pmd_ptlock_free(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
  VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
@@ -2264,8 +2264,8 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
  return &mm->page_table_lock;
 }

-static inline bool pgtable_pmd_page_ctor(struct page *page) { return true; }
-static inline void pgtable_pmd_page_dtor(struct page *page) {}
+static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline void pmd_ptlock_free(struct page *page) {}

 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)

@@ -2278,6 +2278,22 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
  return ptl;
 }

+static inline bool pgtable_pmd_page_ctor(struct page *page)
+{
+	if (!pmd_ptlock_init(page))
+		return false;
+	__SetPageTable(page);
+	inc_zone_page_state(page, NR_PAGETABLE);
+	return true;
+}
+
+static inline void pgtable_pmd_page_dtor(struct page *page)
+{
+	pmd_ptlock_free(page);
+	__ClearPageTable(page);
+	dec_zone_page_state(page, NR_PAGETABLE);
+}
+
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to.  The VM should not be




- Naresh
