Message-ID: <CADrL8HUdg1Fr=tLEQRkDjeTzNzzSM6EPhvDgzURxSZSBMLgjoQ@mail.gmail.com>
Date:   Tue, 17 Jan 2023 15:11:01 -0800
From:   James Houghton <jthoughton@...gle.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Peter Xu <peterx@...hat.com>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Muchun Song <songmuchun@...edance.com>,
        David Rientjes <rientjes@...gle.com>,
        Axel Rasmussen <axelrasmussen@...gle.com>,
        Mina Almasry <almasrymina@...gle.com>,
        "Zach O'Keefe" <zokeefe@...gle.com>,
        Manish Mishra <manish.mishra@...anix.com>,
        Naoya Horiguchi <naoya.horiguchi@....com>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Yang Shi <shy828301@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 21/46] hugetlb: use struct hugetlb_pte for walk_hugetlb_range

On Mon, Jan 16, 2023 at 2:17 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 12.01.23 22:33, Peter Xu wrote:
> > On Thu, Jan 12, 2023 at 04:17:52PM -0500, James Houghton wrote:
> >> I'll look into it, but doing it this way will use _mapcount, so we
> >> won't be able to use the vmemmap optimization. I think even if we do
> >> use Hugh's approach, refcount is still being kept on the head page, so
> >> there's still an overflow risk there (but maybe I am
> >> misunderstanding).
> >
> > Could you remind me what's the issue if using refcount on the small pages
> > rather than the head (assuming vmemmap still can be disabled)?
>
> The THP-way of doing things is refcounting on the head page. All folios
> use a single refcount on the head.
>
> There has to be a pretty good reason to do it differently.

Peter and I have discussed this a lot offline. There are two main problems here:

1. Refcount overflow

Refcount is always kept on the head page (before and after this
series). IIUC, this means that if THPs could be 1G in size, they too
would be susceptible to the same potential overflow. How easy is the
overflow? [1]

To deal with this, the best solution we've been able to come up with
is to check if refcount is > INT_MAX/2 (similar to try_get_page()),
and if it is, stop the operation (UFFDIO_CONTINUE or a page fault)
from proceeding. In the UFFDIO_CONTINUE case, return ENOMEM. In the
page fault case, return VM_FAULT_SIGBUS (not VM_FAULT_OOM; we don't
want to kill a random process).

(So David, I think this answers your question. Refcount should be
handled just like THPs.)

2. page_mapcount() API differences

In this series, page_mapcount() returns the total number of page table
references for the compound page. For example, if you have a
PTE-mapped 2M page (with no other mappings), page_mapcount() for each
4K page will be 512. This is not the same as a THP: page_mapcount()
would return 1 for each page. Because of the difference in
page_mapcount(), we have 4 problems:

i. Smaps uses page_mapcount() >= 2 to determine if hugetlb memory is
"private_hugetlb" or "shared_hugetlb".
ii. Migration with MPOL_MF_MOVE will check page_mapcount() to see if
the hugepage is shared or not. Pages that would otherwise be migrated
now require MPOL_MF_MOVE_ALL to be migrated.
[Really both of the above are checking how many VMAs are mapping our hugepage.]
iii. CoW. This isn't a problem right now because CoW is only possible
with MAP_PRIVATE VMAs and HGM can only be enabled for MAP_SHARED VMAs.
iv. The hwpoison handling code will check if it successfully unmapped
the poisoned page. This isn't a problem right now, as hwpoison will
unmap all the mappings for the hugepage, not just the 4K where the
poison was found.

Doing it this way allows HGM to remain compatible with the hugetlb
vmemmap optimization. None of the above problems strike me as
particularly major, but it's unclear to me how important it is for
page_mapcount() to have a consistent meaning for hugetlb vs. non-hugetlb.

The other way page_mapcount() (let's say the "THP-like way") could be
done is like this: increment compound mapcount if we're mapping a
hugetlb page normally (e.g., 1G page with a PUD). If we're mapping at
high-granularity, increment the mapcount for each 4K page that is
getting mapped (e.g., PMD within a 1G page: increment the mapcount for
the 512 pages that are now mapped). This yields the same
page_mapcount() API we had before, but we lose the hugetlb vmemmap
optimization.

We could introduce an API like hugetlb_vma_mapcount() that would, for
hugetlb, give us the number of VMAs that map a hugepage, but I don't
think people would like this.

I'm curious what others think (Mike, Matthew?). I'm guessing the
THP-like way is probably what most people would want, though it would
be a real shame to lose the vmemmap optimization.

- James

[1] INT_MAX is 2^31 - 1. We increment the refcount once for each 4K
mapping in a 1G page: that's 512 * 512 (2^18) increments per VMA. That
means we only need 8192 (2^13) VMAs mapping the same 1G page to
overflow the refcount. I think it's safe to say that if userspace is
doing this, it is attempting to overflow the refcount.
