Message-ID: <CAHbLzkrcNAV2=mJmLnd+98-Qj6MEKq_Z2Ci=txYVpC8WyWq1Gg@mail.gmail.com>
Date:   Tue, 11 Jan 2022 15:14:12 -0800
From:   Yang Shi <shy828301@...il.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     syzbot <syzbot+1f52b3a18d5633fa7f82@...kaller.appspotmail.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Alistair Popple <apopple@...dia.com>,
        chinwen.chang@...iatek.com, fgheet255t@...il.com,
        Jann Horn <jannh@...gle.com>,
        Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>, Peter Xu <peterx@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        syzkaller-bugs@...glegroups.com, tonymarislogistics@...dex.com,
        Vlastimil Babka <vbabka@...e.cz>, Zi Yan <ziy@...dia.com>
Subject: Re: [syzbot] kernel BUG in __page_mapcount

On Wed, Jan 5, 2022 at 11:05 AM Yang Shi <shy828301@...il.com> wrote:
>
> On Tue, Dec 21, 2021 at 10:40 AM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Tue, Dec 21, 2021 at 10:24:27AM -0800, Yang Shi wrote:
> > > It seems the THP is split during smaps walk. The reproducer does call
> > > MADV_FREE on partial THP which may split the huge page.
> > >
> > > The below fix (untested) should be able to fix it.
> >
> > Did you read the rest of the thread on this?  If the page is being
>
> I just revisited this. Now I see what you mean about "the rest of the
> thread". My gmail client doesn't put them in the same thread, sigh...
>
> Yeah, try_get_compound_head() seems like the right way.
>
> Or we could simply treat migration entries as mapcount == 1, as Kirill
> suggested; or just skip migration entries, since they are transient;
> or show migration entries separately.
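
(For context: page_mapcount() on a compound page goes through
__page_mapcount(), which looks roughly like the below as of v5.16
(sketch of mm/util.c from memory, trimmed):

int __page_mapcount(struct page *page)
{
	int ret;

	ret = atomic_read(&page->_mapcount) + 1;
	/* For file THP, page->_mapcount already holds the total mapcount */
	if (!PageAnon(page) && !PageHuge(page))
		return ret;
	page = compound_head(page);
	ret += atomic_read(compound_mapcount_ptr(page)) + 1;
	if (PageDoubleMap(page))
		ret--;
	return ret;
}

A concurrent THP split can race with the compound_head()/PageDoubleMap()
sequence, which is how the smaps walk ends up tripping the page-flags BUG.)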

I think Kirill's suggestion makes sense. A migration entry's mapcount
is 0, so "pss /= mapcount" is never executed; the entry is already
effectively treated as mapcount == 1, and making that explicit doesn't
change the behavior. Unlike a swap entry, we can't actually tell how
many references there are to a migration entry.
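
For reference, the per-page loop in smaps_account() currently does
roughly this (simplified sketch):

	for (i = 0; i < nr; i++, page++) {
		int mapcount = page_mapcount(page);
		unsigned long pss = PAGE_SIZE << PSS_SHIFT;

		if (mapcount >= 2)
			pss /= mapcount;
		smaps_page_accumulate(mss, page, PAGE_SIZE, pss, dirty,
				      locked, mapcount < 2);
	}

so mapcount == 0 and mapcount == 1 take the same path and the page is
charged full PSS either way.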

But we should handle device private entries differently, since their
mapcount is incremented when they are shared between processes. A
regular migration entry can easily be identified with
is_migration_entry(). Using try_get_compound_head() seems like
overkill IMHO.
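
(For reference, is_migration_entry() is just a swp_type() check;
roughly, from include/linux/swapops.h:

	static inline int is_migration_entry(swp_entry_t entry)
	{
		return unlikely(swp_type(entry) == SWP_MIGRATION_READ ||
				swp_type(entry) == SWP_MIGRATION_WRITE);
	}

so it is cheap and takes no reference on the page.)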

I came up with the below patch (it builds, and running the reproducer
hasn't triggered the bug for me so far). If it looks fine, I will
submit it as a formal patch with more comments.

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index ad667dbc96f5..6a48bbb51efa 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -429,7 +429,8 @@ static void smaps_page_accumulate(struct mem_size_stats *mss,
 }

 static void smaps_account(struct mem_size_stats *mss, struct page *page,
-               bool compound, bool young, bool dirty, bool locked)
+               bool compound, bool young, bool dirty, bool locked,
+               bool migration)
 {
        int i, nr = compound ? compound_nr(page) : 1;
        unsigned long size = nr * PAGE_SIZE;
@@ -457,7 +457,7 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
         * If any subpage of the compound page mapped with PTE it would elevate
         * page_count().
         */
-       if (page_count(page) == 1) {
+       if ((page_count(page) == 1) || migration) {
                smaps_page_accumulate(mss, page, size, size << PSS_SHIFT, dirty,
                        locked, true);
                return;
@@ -506,6 +507,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
        struct vm_area_struct *vma = walk->vma;
        bool locked = !!(vma->vm_flags & VM_LOCKED);
        struct page *page = NULL;
+       bool migration = false;

        if (pte_present(*pte)) {
                page = vm_normal_page(vma, addr, *pte);
@@ -525,8 +527,11 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
                        } else {
                                mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT;
                        }
-               } else if (is_pfn_swap_entry(swpent))
+               } else if (is_pfn_swap_entry(swpent)) {
+                       if (is_migration_entry(swpent))
+                               migration = true;
                        page = pfn_swap_entry_to_page(swpent);
+               }
        } else {
                smaps_pte_hole_lookup(addr, walk);
                return;
@@ -535,7 +540,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
        if (!page)
                return;

-       smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte), locked);
+       smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte),
+                     locked, migration);
 }

 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -546,6 +552,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
        struct vm_area_struct *vma = walk->vma;
        bool locked = !!(vma->vm_flags & VM_LOCKED);
        struct page *page = NULL;
+       bool migration = false;

        if (pmd_present(*pmd)) {
                /* FOLL_DUMP will return -EFAULT on huge zero page */
@@ -553,8 +560,10 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
        } else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
                swp_entry_t entry = pmd_to_swp_entry(*pmd);

-               if (is_migration_entry(entry))
+               if (is_migration_entry(entry)) {
+                       migration = true;
                        page = pfn_swap_entry_to_page(entry);
+               }
        }
        if (IS_ERR_OR_NULL(page))
                return;
@@ -566,7 +575,9 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
                /* pass */;
        else
                mss->file_thp += HPAGE_PMD_SIZE;
-       smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), locked);
+
+       smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd),
+                     locked, migration);
 }
 #else
 static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
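
For anyone who wants a quick way to exercise the same path, something
along these lines should do it (a hypothetical smoke test I sketched,
not the syzkaller reproducer): fault in an anonymous THP, MADV_FREE
part of it so it can be split, then read smaps.

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4UL << 20;	/* 4 MiB: room for at least one PMD-sized THP */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char buf[4096];
	ssize_t n;
	int fd;

	if (p == MAP_FAILED)
		return 1;

	madvise(p, len, MADV_HUGEPAGE);		/* ask for THP backing */
	memset(p, 1, len);			/* fault everything in */
	madvise(p, 4096, MADV_FREE);		/* MADV_FREE on a partial THP */

	fd = open("/proc/self/smaps", O_RDONLY);
	if (fd < 0)
		return 1;
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		write(STDOUT_FILENO, buf, n);	/* drives the smaps walk */
	close(fd);
	return 0;
}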

>
>
> > migrated, we should still account it ... also, you've changed the
> > refcount, so this:
> >
> >         if (page_count(page) == 1) {
> >                 smaps_page_accumulate(mss, page, size, size << PSS_SHIFT, dirty,
> >                         locked, true);
> >                 return;
> >         }
> >
> > will never trigger.
