Message-ID: <CAKgT0Uc9wt2SZ6hPK+y135sL-Hpvy5==NP2Uq6DCNP1BZ927Cg@mail.gmail.com>
Date:   Mon, 21 Jun 2021 06:43:48 -0700
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Gavin Shan <gshan@...hat.com>
Cc:     David Hildenbrand <david@...hat.com>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        shan.gavin@...il.com, Anshuman Khandual <anshuman.khandual@....com>
Subject: Re: [RFC PATCH] mm/page_reporting: Adjust threshold according to MAX_ORDER

On Sun, Jun 20, 2021 at 10:51 PM Gavin Shan <gshan@...hat.com> wrote:
>
> On 6/17/21 12:15 AM, Alexander Duyck wrote:
> > On Wed, Jun 16, 2021 at 12:10 AM Gavin Shan <gshan@...hat.com> wrote:
> >> On 6/15/21 12:26 PM, Alexander Duyck wrote:
> >>> On Mon, Jun 14, 2021 at 4:03 AM David Hildenbrand <david@...hat.com> wrote:
> >>>> On 11.06.21 09:44, Gavin Shan wrote:
> >>>>> On 6/1/21 6:01 PM, David Hildenbrand wrote:
> >>>>>> On 01.06.21 05:33, Gavin Shan wrote:
>
> [...]
>
> >>>
> >>> Yes, generally reporting pages comes at a fairly high cost so it is
> >>> important to find the right trade-off between the size of the page and
> >>> the size of the batch of pages being reported. If the size of the
> >>> pages is reduced it may be important to increase the batch size in
> >>> order to avoid paying too much in the way of overhead.
> >>>
> >>> The other main reason for holding to pageblock_order on x86 is to
> >>> avoid THP splitting. Anything smaller than pageblock_order will
> >>> trigger THP splitting which will significantly hurt the performance of
> >>> the VM in general as it forces it down to order 0 pages.
> >>>
> >>
> >> Alex, thanks for your reply, and sorry for taking up your time with
> >> this discussion.
> >>
> >> Could you please confirm whether you mean PAGE_REPORTING_CAPACITY or
> >> the budget used in page_reporting_cycle() when you talk about "batch"?
> >
> > Yes, when I refer to batch it is how many pages we are processing in a
> > single call. That is limited by PAGE_REPORTING_CAPACITY.
> >
>
> Alex, it seems the batch mechanism is meant to avoid heavy contention
> on the zone's lock, if I understand correctly? The current design
> reports all pages in the corresponding free list within 17 calls to
> page_reporting_cycle(). Could you please explain why 17 was chosen? :)
>
>     budget = DIV_ROUND_UP(area->nr_free, PAGE_REPORTING_CAPACITY * 16);

It isn't that 17 was chosen. The idea was to only process 1/16th of
the free list at a time. The general idea is that by doing that and
limiting the page reporting to an interval of once every 2 seconds we
should have the entire guest reported out after about 30 seconds
assuming it is idle. If it isn't idle then the overhead for reporting
only 1/16th of the guest memory should be fairly low.
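
To put numbers on that: PAGE_REPORTING_CAPACITY is 32 in
mm/page_reporting.h, so the divisor of 16 means each pass covers
roughly 1/16th of a free list. A standalone sketch of the arithmetic
(plain userspace C, with an arbitrary example value for nr_free):

    #include <stdio.h>

    #define PAGE_REPORTING_CAPACITY 32   /* from mm/page_reporting.h */
    #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned long nr_free = 100000; /* example free_area size */

            /* budget counts batches of PAGE_REPORTING_CAPACITY pages */
            unsigned long budget =
                    DIV_ROUND_UP(nr_free, PAGE_REPORTING_CAPACITY * 16);
            unsigned long pages_per_pass = budget * PAGE_REPORTING_CAPACITY;
            unsigned int passes = DIV_ROUND_UP(nr_free, pages_per_pass);

            /* one pass every 2 seconds: ~16 passes, ~32 seconds idle */
            printf("%lu pages/pass, drained in %u passes (~%u seconds)\n",
                   pages_per_pass, passes, passes * 2);
            return 0;
    }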

> It's related to the magic number ("16"). When the threshold is
> decreased, for example from 512MB to 2MB on arm64 with a 64KB base
> page size, more page reporting activity will be introduced. In that
> regard, it seems reasonable to increase the magic number as well, so
> that the work is spread over more calls to page_reporting_cycle(),
> avoiding contention on the zone's lock.
>
> If you agree, I will come up with something similar to what we do for
> the threshold. However, I'm not sure 64 is a reasonable number of
> cycles for this particular case.
>
>     in arch/arm64/include/asm/page.h
>        #ifdef CONFIG_ARM64_64K_PAGES
>        #define PAGE_REPORTING_ORDER    5
>        #define PAGE_REPORTING_CYCLES   64
>        #endif

You mentioned going from 512MB to 2MB pages. What is the MAX_ORDER for
the arm architecture you are working with? One concern I have is that
order 5 pages may not be a high enough order to keep the page reporting
from interfering with the guest memory allocations since you are
having to cover so many free areas.

Ideally we were aiming for MAX_ORDER and MAX_ORDER - 1 as the main
targets for page reporting. The advantage there is that on x86 this
also allowed us to avoid splitting THP pages. The other advantage is
that, combined with the 16 and the fact that we were rounding up the
budget, it should come out to about one minute to fully flush out all
the memory on an idle guest (16 passes at 2-second intervals for each
of the two orders works out to roughly 64 seconds).

If anything we would want to take (MAX_ORDER - PAGE_REPORTING_ORDER)/2
and use that as a multiplier for the 16 value, as that would give us
the upper limit on how long it should take to report all of the pages
in a given block. It gets us to the same value, but does a better job
of explaining why.
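
As an untested sketch of that idea (illustrative only; the MAX_ORDER
values in the comment are the x86 default and the arm64/64K value
assumed in this thread, and a real patch would need the guard against
a zero scale shown below):

    /*
     * Scale the 1/16th divisor by half the distance from MAX_ORDER:
     * on x86 (MAX_ORDER 11, reporting order 9) scale is 1, keeping
     * the existing 16; on arm64 with 64K pages (MAX_ORDER 14,
     * reporting order 5) scale is 4, giving the proposed 64.
     */
    unsigned int scale = max(1, (MAX_ORDER - PAGE_REPORTING_ORDER) / 2);

    budget = DIV_ROUND_UP(area->nr_free,
                          PAGE_REPORTING_CAPACITY * 16 * scale);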

>     in mm/page_reporting.h
>        #ifndef PAGE_REPORTING_CYCLES
>        #define PAGE_REPORTING_CYCLES   16
>        #endif
>     in mm/page_reporting.c::page_reporting_cycle()
>        budget = DIV_ROUND_UP(area->nr_free,
>                              PAGE_REPORTING_CAPACITY * PAGE_REPORTING_CYCLES);
>
> Thanks,
> Gavin

The 16 isn't about cycles, it is about how fast we want to leak the
memory out of the guest. You don't want this to go too fast; otherwise
you are going to be fighting with anything that is trying to allocate
memory. In theory you should only be reporting pages from the top
tiers of the memory hierarchy, where allocation activity is low.

One way to think about page reporting is as a leaky bucket. After a
minute or so you want the bucket to drain, assuming the memory
allocations/frees for a guest have become inactive. However, if the VM
is active, you want it to do very little in the way of page reporting
so that you are not having to fault back in a ton of memory.
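
For reference, the drip rate of that bucket comes from the 2-second
re-arm in the reporting worker. A condensed sketch of its shape
(paraphrased from mm/page_reporting.c; scatterlist setup, error
handling and the per-order walk are elided, so the helper call below
is condensed from the real signature):

    #define PAGE_REPORTING_DELAY    (2 * HZ)    /* the 2-second drip */

    static void page_reporting_process(struct work_struct *work)
    {
            struct page_reporting_dev_info *prdev =
                    container_of(to_delayed_work(work),
                                 struct page_reporting_dev_info, work);
            struct zone *zone;

            /* each pass reports at most ~1/16th of each free list */
            for_each_zone(zone)
                    page_reporting_process_zone(prdev, zone);

            /* if pages were freed meanwhile, drip again in 2 seconds;
             * an idle guest thus drains in roughly 16 passes */
            if (atomic_read(&prdev->state) != PAGE_REPORTING_IDLE)
                    schedule_delayed_work(&prdev->work, PAGE_REPORTING_DELAY);
    }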
