Message-ID: <3F4CC516-21B5-4E9F-8427-E54B4D0F2E41@fb.com>
Date:   Mon, 22 Aug 2022 17:24:22 +0000
From:   "Alex Zhu (Kernel)" <alexlzhu@...com>
To:     Andrew Morton <akpm@...ux-foundation.org>
CC:     "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Kernel Team <Kernel-team@...com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] mm: add thp_utilization metrics to debugfs



> On Aug 20, 2022, at 3:07 PM, Andrew Morton <akpm@...ux-foundation.org> wrote:
> 
> On Wed, 17 Aug 2022 17:01:12 -0700 <alexlzhu@...com> wrote:
> 
>> THPs have historically been enabled on a per-application basis,
>> because they can increase or decrease performance depending on
>> how the particular application uses physical memory. When THPs
>> are heavily utilized, application performance improves due to
>> fewer TLB misses. It has long been suspected that the performance
>> regressions seen when THP is enabled are caused by heavily
>> underutilized anonymous THPs.
>> 
>> Previously there was no way to track how much of a THP is
>> actually being used. With this change, we seek to gain visibility
>> into the utilization of THPs in order to make more intelligent
>> decisions regarding paging.
>> 
>> This change introduces a tool that scans through all of physical
>> memory for anonymous THPs and groups them into buckets based
>> on utilization. It also includes an interface under
>> /sys/kernel/debug/thp_utilization.
>> 
>> Utilization of a THP is defined as the percentage of nonzero
>> pages in the THP. A worker thread periodically scans through
>> all of physical memory for anonymous THPs, computes the
>> utilization of each, groups them into buckets based on
>> utilization, and reports the results through the debugfs
>> interface.
> 
> I'd like to see sample debugfs output right here in the changelog, for
> reviewers to review.  In some detail.

I should have included that in the description; sorry about that. A sample output:

Utilized[0-50]: 1331 680884
Utilized[51-101]: 9 3983
Utilized[102-152]: 3 1187
Utilized[153-203]: 0 0
Utilized[204-255]: 2 539
Utilized[256-306]: 5 1135
Utilized[307-357]: 1 192
Utilized[358-408]: 0 0
Utilized[409-459]: 1 57
Utilized[460-512]: 400 13
Last Scan Time: 223.98
Last Scan Duration: 70.65

This indicates that there are 1331 THPs with between 0 and 50
utilized (non-zero) pages, and that those THPs contain 680884 zero
pages in total; each line shows the number of THPs in the bucket
followed by the total number of zero pages in those THPs. THPs in
the [0-50] bucket make up 76% of all THPs, yet account for 99% of
all zero pages across THPs. In other words, the least utilized THPs
are responsible for almost all of the memory waste when THP is
always enabled. We have observed similar results across production
workloads.
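
To make the definition concrete: a THP's utilization here is the
number of its 4K subpages that contain at least one non-zero byte.
A simplified sketch of what that amounts to (illustrative only; the
names, locking and page handling do not match the actual patch):

/*
 * Illustrative sketch only, not the patch code.  Count how many 4K
 * subpages of the THP whose head page is @head contain at least one
 * non-zero byte.
 */
static int thp_nr_utilized_pages(struct page *head)
{
        int i, utilized = 0;

        for (i = 0; i < HPAGE_PMD_NR; i++) {
                void *kaddr = kmap_local_page(head + i);

                /* memchr_inv() returns NULL if the subpage is all zeroes. */
                if (memchr_inv(kaddr, 0, PAGE_SIZE))
                        utilized++;
                kunmap_local(kaddr);
        }
        return utilized;
}

The bucket a THP is counted in is then derived from this number:
0-50 non-zero subpages land in the first bucket above, 460-512 in
the last.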

The last two lines indicate the timestamp and duration of the most recent
scan through all of physical memory. Here we see that the last scan took
70.65 seconds. 
> 
> And I'd like to see the code commented!  Especially
> thp_utilization_workfn(), thp_util_scan() and thp_scan_next_zone(). 
> What are their roles and responsibilities?  How long do they take, by
> what means do they scan?
> 
> I mean, scanning all of physical memory is a huge task.  How do we
> avoid chewing vast amounts of CPU?  What is the chosen approach and
> what are the tradeoffs?  Why is it done within a kernel thread at all,
> rather than putting the load into the context of the reader of the
> stats (which is more appropriate).  etcetera.  There are many traps,
> tradeoffs and hidden design decisions here.  Please unhide them.
> 
> This comment, which is rather a core part of these tradeoffs:
> 

> +/*
> + * The number of addresses to scan through on each periodic
> + * run of the scanner that generates /sys/kernel/debug/thp_utilization.
> + */
> +#define THP_UTIL_SCAN_SIZE 256
> 
> isn't very helpful.  "number of addresses"?  Does it mean we scan 256
> bytes at a time?  256 pages?  256 hugepages?  Something else?

256 hugepages. We scan through physical memory 2MB at a time,
aligned to 2MB. So far we have observed that scanning 256
hugepage-aligned PFNs per second with a kernel thread does not
produce any noticeable side effects on x86_64.
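
Roughly, one periodic run of the scanner then looks like the
following (again just a sketch with made-up names; the real code
also has to walk zones, take page references and handle tail/compound
corner cases):

#define THP_UTIL_SCAN_SIZE 256          /* 2MB-aligned regions per run */

static unsigned long thp_scan_pfn;      /* 2MB-aligned resume point */

static void thp_util_scan_once(unsigned long max_pfn)
{
        int i;

        for (i = 0; i < THP_UTIL_SCAN_SIZE && thp_scan_pfn < max_pfn; i++) {
                struct page *page = pfn_to_online_page(thp_scan_pfn);

                /* Step forward one PMD-sized region (2MB on x86_64). */
                thp_scan_pfn += HPAGE_PMD_NR;

                if (page && !PageTail(page) && PageTransHuge(page) &&
                    PageAnon(page)) {
                        /* count non-zero subpages as in the sketch above
                         * and add the THP to the matching bucket */
                }
        }

        if (thp_scan_pfn >= max_pfn)
                thp_scan_pfn = 0;       /* wrapped around; restart next run */
}
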
> 
> How can any constant make sense when different architectures have
> different [huge]page sizes?  Should it be scaled by pagesize?  And if
> we're going to do that, we should scale it by CPU speed at the same time.
> 
This sounds very interesting. I would be happy to hear any
suggestions for how we can scale this value more systematically. We
initially chose a value that seemed small enough to avoid any
noticeable side effects, with the intention of tuning it later.
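
One purely illustrative option would be to express the chunk as a
fixed amount of memory instead of a fixed number of regions, so the
per-run cost follows the architecture's PMD size:

/* Hypothetical, not in the patch: cover 512MB of physical memory per
 * run, whatever the PMD-sized huge page size is (512MB / 2MB = 256 on
 * x86_64, matching the current constant). */
#define THP_UTIL_SCAN_BYTES     SZ_512M
#define THP_UTIL_SCAN_SIZE      (THP_UTIL_SCAN_BYTES / HPAGE_PMD_SIZE)

Scaling by CPU speed, as you suggest, would need something more
dynamic than a compile-time constant.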

> Or bypass all of that and simply scan for a certain amount of *time*,
> rather than scan a certain amount of memory.  After all, chunking up
> the scan time is what we're trying to achieve by chunking up the scan
> amount.  Why not chunk up the scan time directly?
> 
> See where I'm going?  I see many hidden assumptions, design decisions
> and tradeoffs here.  Can we please attempt to spell them out and review
> them.

I will send out an RFC patchset later this week that includes this
patch, split_huge_page, and the shrinker. I will add more description
there and incorporate any suggestions. Thanks!
> 
> Anyway.  There were many review comments on previous versions.  It
> would have been better had those reviewers been cc'ed on this version. 
> I'll go into hiding and see what people think.
