Message-ID: <YZ1lOgjv6r+ZOSRX@casper.infradead.org>
Date:   Tue, 23 Nov 2021 22:03:38 +0000
From:   Matthew Wilcox <willy@...radead.org>
To:     Mina Almasry <almasrymina@...gle.com>
Cc:     Jonathan Corbet <corbet@....net>,
        David Hildenbrand <david@...hat.com>,
        "Paul E . McKenney" <paulmckrcu@...com>,
        Yu Zhao <yuzhao@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Peter Xu <peterx@...hat.com>,
        Ivan Teterevkov <ivan.teterevkov@...anix.com>,
        Florian Schmidt <florian.schmidt@...anix.com>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-doc@...r.kernel.org
Subject: Re: [PATCH v7] mm: Add PM_THP_MAPPED to /proc/pid/pagemap

On Tue, Nov 23, 2021 at 01:47:33PM -0800, Mina Almasry wrote:
> On Tue, Nov 23, 2021 at 1:30 PM Matthew Wilcox <willy@...radead.org> wrote:
> > What I've been trying to communicate over the N reviews of this
> > patch series is that *the same thing is about to happen to THPs*.
> > Only more so.  THPs are going to be of arbitrary power-of-two size, not
> > necessarily sizes supported by the hardware.  That means that we need to
> > be extremely precise about what we mean by "is this a THP?"  Do we just
> > mean "This is a compound page?"  Do we mean "this is mapped by a PMD?"
> > Or do we mean something else?  And I feel like I haven't been able to
> > get that information out of you.
> 
> Yes, I'm very sorry for the trouble, but I'm also confused about what
> the disconnect is. To allocate hugepages I can do either of the following:
> 
> mount -t tmpfs -o huge=always tmpfs /mnt/mytmpfs
> 
> or
> 
> madvise(..., MADV_HUGEPAGE)
> 
> Note that I don't ask the kernel for a specific size or a specific
> mapping mechanism (PMD/contig PTE/contig PMD/PUD); I just ask the
> kernel for 'huge' pages. I would like to know whether the kernel was
> successful in allocating a hugepage or not. Today, AFAICT, a THP
> hugepage is PMD mapped + is_transparent_hugepage(), which is the check
> I have here. In the future, THP may become an arbitrary power-of-two
> size, and I think I'll need to update this querying interface once/if
> that gets merged into the kernel. I.e., if in the future I allocate pages by using:
> 
> mount -t tmpfs -o huge=2MB tmpfs /mnt/mytmpfs
> 
> I need the kernel to tell me whether the mapping is 2MB in size or not.
> 
> If I allocate pages by using:
> 
> mount -t tmpfs -o huge=pmd tmpfs /mnt/mytmpfs
> 
> then I need the kernel to tell me whether the pages are PMD-mapped or
> not, as I'm doing here.
> 
> My implementation follows what the THP implementation in the kernel is
> today, and I may need to update it as THP changes. Does that make
> sense?
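
To make the check under discussion concrete: under this proposal, "was I
given a hugepage?" reduces to reading one 64-bit entry from
/proc/pid/pagemap and testing a flag. Below is a minimal userspace
sketch; the PM_THP_MAPPED bit position (60) is an assumption taken from
this patch series, which is not an established kernel ABI, so treat it
as illustrative only.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Assumed bit position from this patch series; not a merged kernel ABI. */
#define PM_THP_MAPPED	(1ULL << 60)

int main(void)
{
	size_t len = 2UL << 20;		/* one PMD-sized (2MB) region */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	madvise(buf, len, MADV_HUGEPAGE);	/* a hint, not a guarantee */
	memset(buf, 1, len);			/* fault the pages in */

	int fd = open("/proc/self/pagemap", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* One 64-bit pagemap entry per base page; seek to buf's entry. */
	uint64_t entry;
	off_t off = ((uintptr_t)buf / sysconf(_SC_PAGESIZE)) * sizeof(entry);
	if (pread(fd, &entry, sizeof(entry), off) != (ssize_t)sizeof(entry)) {
		perror("pread");
		return 1;
	}
	printf("PM_THP_MAPPED is %s\n",
	       (entry & PM_THP_MAPPED) ? "set" : "clear");
	close(fd);
	return 0;
}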

Well, no.  You're adding (or changing, if you like) a userspace API.
We need to be precise about what that userspace API *means*, so that we
don't break it in the future when the implementation changes.  You're
still being fuzzy above.

I have no intention of adding an API like the ones you suggest above to
allow the user to specify what size pages to use.  That seems very strange
to me; how should the user (or sysadmin, or application) know what size is
best for the kernel to use to cache files?  Instead, the kernel observes
the usage pattern of the file (through the readahead mechanism) and grows
the allocation size to fit what the kernel thinks will be most effective.

I do honour some of the existing hints that userspace can provide; e.g.,
VM_HUGEPAGE makes the pagefault path allocate PMD-sized pages (if it can).
But there's intentionally no new way to tell the kernel to use pages
of a particular size.  The current implementation will use (at least)
64kB pages if you do reads in 64kB chunks, but that's not guaranteed.
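
That "grow with the access pattern" behaviour needs no new API at all.
A sketch of the only thing userspace does (sequential 64kB reads)
follows, with the caveat above: any resulting large-folio use is a
kernel heuristic, not a guarantee.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Sequential 64kB reads: readahead observes the pattern and may
	 * grow the page-cache allocation size on its own.  Userspace
	 * never names a page size, and none is promised. */
	static char buf[64 * 1024];
	while (read(fd, buf, sizeof(buf)) > 0)
		;
	close(fd);
	return 0;
}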
