Message-ID: <CAHbLzkqAM-2Dg-JSy8Yqq99ch39BeSEnxPkmRg2BrhTF1M1N2A@mail.gmail.com>
Date:   Fri, 30 Apr 2021 15:55:54 -0700
From:   Yang Shi <shy828301@...il.com>
To:     Zi Yan <ziy@...dia.com>
Cc:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Wang Yugui <wangyugui@...-tech.com>,
        Hugh Dickins <hughd@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: thp: check total_mapcount instead of page_mapcount

On Fri, Apr 30, 2021 at 3:30 PM Zi Yan <ziy@...dia.com> wrote:
>
> On 30 Apr 2021, at 17:56, Yang Shi wrote:
>
> > On Fri, Apr 30, 2021 at 2:30 PM Zi Yan <ziy@...dia.com> wrote:
> >>
> >> On 30 Apr 2021, at 17:07, Yang Shi wrote:
> >>
> >>> When debugging the bug reported by Wang Yugui [1], try_to_unmap() may
> >>> return a false positive for a PTE-mapped THP, since page_mapcount() is
> >>> used to check whether the THP is unmapped, but it only checks the
> >>> compound mapcount and the head page's mapcount.  If the THP is
> >>> PTE-mapped and the head page is not mapped, it may return a false
> >>> positive.
> >>>
> >>> Use total_mapcount() instead of page_mapcount() and do so for the
> >>> VM_BUG_ON_PAGE in split_huge_page_to_list as well.
> >>>
> >>> [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
> >>>
> >>> Signed-off-by: Yang Shi <shy828301@...il.com>
> >>> ---
> >>>  mm/huge_memory.c | 2 +-
> >>>  mm/rmap.c        | 2 +-
> >>>  2 files changed, 2 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>> index 63ed6b25deaa..2122c3e853b9 100644
> >>> --- a/mm/huge_memory.c
> >>> +++ b/mm/huge_memory.c
> >>> @@ -2718,7 +2718,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >>>       }
> >>>
> >>>       unmap_page(head);
> >>> -     VM_BUG_ON_PAGE(compound_mapcount(head), head);
> >>> +     VM_BUG_ON_PAGE(total_mapcount(head), head);
> >>
> >> I am not sure about this change. The code below also checks total_mapcount(head)
> >> and returns EBUSY if the count is non-zero. This change makes the code dead.
> >
> > It is actually already dead, regardless of this change, when
> > CONFIG_DEBUG_VM is enabled and total_mapcount is not 0, due to the
> > code below, right?
> >
> > if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
> >         pr_alert("total_mapcount: %u, page_count(): %u\n",
> >                         mapcount, count);
> >         if (PageTail(page))
> >                 dump_page(head, NULL);
> >         dump_page(page, "total_mapcount(head) > 0");
> >         BUG();
> > }
>
> Right. But with this change, mapcount will never be non-zero. The code above
> will be useless and can be removed.

Yes, you are correct.
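
To make the difference concrete, here is a rough model of what the two
counters see for an anonymous THP. This is only an illustrative sketch,
not the actual kernel implementation (see page_mapcount() in
include/linux/mm.h and total_mapcount() in mm/huge_memory.c), and it
ignores the PageDoubleMap() and file-THP corrections:

/* Illustrative model only, ignoring PageDoubleMap()/file-THP handling. */
static int model_page_mapcount(struct page *head)
{
	/* Only the head page's own PTE mappings ... */
	return atomic_read(&head->_mapcount) + 1 +
	       /* ... plus the PMD (compound) mappings. */
	       compound_mapcount(head);
}

static int model_total_mapcount(struct page *head)
{
	int i, ret = compound_mapcount(head);

	/* Walks every subpage, so PTE-mapped tail pages are counted too. */
	for (i = 0; i < HPAGE_PMD_NR; i++)
		ret += atomic_read(&head[i]._mapcount) + 1;
	return ret;
}

So for a PTE-mapped THP whose head page happens to be unmapped while
some tail pages are still mapped, page_mapcount(head) reads 0 but
total_mapcount(head) stays non-zero. That is why try_to_unmap() could
report success prematurely, and also why the IS_ENABLED(CONFIG_DEBUG_VM)
branch quoted above can no longer trigger once the VM_BUG_ON checks
total_mapcount().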

>
> >> On the other hand, the change will force all mappings to the page to be
> >> successfully unmapped every time. I am not sure if we want to do that.
> >> Maybe it is better to just check total_mapcount() and fail the split.
> >> The same situation happens with the code change below.
> >
> > IIUC, the code did force all mappings to the page to be unmapped in
> > order to split it.
> >>
> >>>
> >>>       /* block interrupt reentry in xa_lock and spinlock */
> >>>       local_irq_disable();
> >>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>> index 693a610e181d..2e547378ab5f 100644
> >>> --- a/mm/rmap.c
> >>> +++ b/mm/rmap.c
> >>> @@ -1777,7 +1777,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
> >>>       else
> >>>               rmap_walk(page, &rwc);
> >>>
> >>> -     return !page_mapcount(page) ? true : false;
> >>> +     return !total_mapcount(page) ? true : false;
> >>>  }
> >>
> >> In unmap_page(), VM_BUG_ON_PAGE(!unmap_success, page) will force all
> >> mappings to the page to be unmapped, which might not be what we want.
> >
> > AFAICT, I don't see such a case among the callers of try_to_unmap().
> > I may be missing something, but I have a hard time thinking of a use
> > case which can proceed safely with a "not fully unmapped" page.
>
> This code change is correct, but after the change unmap_page() will fire
> a VM_BUG_ON when not all mappings are unmapped. Along with the change
> above, we will have two identical VM_BUG_ONs firing one after another.
> We might want to remove one of them.

Yes. I'd prefer keeping the one after unmap_page() since it seems more
obvious. Any objection?
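
Just to spell out what I mean by "the one after unmap_page()", this is
roughly the call sequence once both hunks are applied; a sketch of the
relevant lines only, not the complete functions:

static void unmap_page(struct page *page)
{
	bool unmap_success;

	/* ... flags setup elided ... */
	unmap_success = try_to_unmap(page, ttu_flags);
	/* With the rmap.c hunk, this fires iff total_mapcount(page) != 0. */
	VM_BUG_ON_PAGE(!unmap_success, page);
}

int split_huge_page_to_list(struct page *page, struct list_head *list)
{
	/* ... */
	unmap_page(head);
	/* Duplicates the check inside unmap_page(); this is the one
	 * I would keep since the condition is spelled out explicitly. */
	VM_BUG_ON_PAGE(total_mapcount(head), head);
	/* ... */
}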

>
> Also, this changes the semantics of try_to_unmap. The comment for try_to_unmap
> might need to be updated.

What comment do you refer to?

>
>
> —
> Best Regards,
> Yan Zi
