Message-ID: <7fc13837-546c-9c4a-1456-753df199e171@redhat.com>
Date:   Mon, 7 Oct 2019 12:19:13 -0400
From:   Nitesh Narayan Lal <nitesh@...hat.com>
To:     Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-mm <linux-mm@...ck.org>
Cc:     Alexander Duyck <alexander.duyck@...il.com>,
        David Hildenbrand <david@...hat.com>,
        virtio-dev@...ts.oasis-open.org, kvm list <kvm@...r.kernel.org>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Matthew Wilcox <willy@...radead.org>,
        Michal Hocko <mhocko@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>,
        Oscar Salvador <osalvador@...e.de>,
        Yang Zhang <yang.zhang.wz@...il.com>,
        Pankaj Gupta <pagupta@...hat.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
        Rik van Riel <riel@...riel.com>, lcapitulino@...hat.com,
        "Wang, Wei W" <wei.w.wang@...el.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page
 reporting


On 10/7/19 11:33 AM, Alexander Duyck wrote:
> On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
>> On 10/2/19 10:25 AM, Alexander Duyck wrote:
>>
[...]
>> You don't have to, I can fix the issues in my patch-set. :)
>>> Sounds good. Hopefully the stuff I pointed out above helps you to get
>>> a reproduction and resolve the issues.
>> So I did observe a significant performance drop when running my v12 patch-set [1]
>> with the suggested test setup. However, after making certain changes the
>> performance improved significantly.
>>
>> I used my v12 patch-set which I have posted earlier and made the following
>> changes:
>> 1. Started reporting only (MAX_ORDER - 1) pages and increased the number of
>>     pages that can be reported at a time to 32 from 16. The intent of making
>>     these changes was to bring my configuration closer to what Alexander is
>>     using.
> The increase from 16 to 32 is valid. No point in working in too small a
> batch. However, tightening the order to only test for MAX_ORDER - 1 seems
> like a step in the wrong direction. The bitmap approach doesn't have much
> value if it can only work with the highest-order page. I realize it is
> probably necessary in order to make the trick for checking PageBuddy
> work, but it seems very limiting.

If using (pageblock_order - 1) is a better way to do this, then I can probably
switch to that.
I agree that we have to make the reporting order configurable, at least to an
extent; something along the lines of the sketch below is what I have in mind.
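
Purely an illustrative sketch on top of the v12 naming (the init hook below is
hypothetical and not part of any posted series): turning the minimum reporting
order into a variable would let it default to pageblock_order - 1, which is not
a compile-time constant on all configs, and later be exposed as a tunable.

/* Sketch only: variable minimum reporting order instead of a #define. */
static unsigned int page_reporting_min_order __read_mostly;

static int __init page_reporting_init_order(void)
{
	/* default to one below pageblock_order, as discussed above */
	page_reporting_min_order = pageblock_order - 1;
	return 0;
}
early_initcall(page_reporting_init_order);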

>
>> 2. I made an additional change in my bitmap scanning logic to avoid acquiring
>>     the spinlock if the page is already allocated.
> Again, not a fan. It basically means you can only work with MAX_ORDER - 1
> and there will be no ability to work with anything smaller.
>
>> Setup:
>> On a 16 vCPU, 30 GB, single-NUMA-node guest affined to a single host NUMA node,
>> I ran the modified will-it-scale/page_fault a number of times and calculated
>> the average number of processes and threads launched on the 16th core to
>> compare the impact of my patch-set against an unmodified kernel.
>>
>>
>> Conclusion:
>> %Drop in number of processes launched on 16th vCPU = 1-2%
>> %Drop in number of threads launched on 16th vCPU   = 5-6%
> These numbers don't make that much sense to me. Are you talking about a
> fully functioning setup that is madvising away the memory in the
> hypervisor?


Without making this change I was observing a significant drop in the
number of processes and, especially, in the number of threads.
I double-checked the configuration which I have shared.
I was also watching "AnonHugePages" in /proc/meminfo to check THP usage.
Any more suggestions about what else I can do to verify?
I will be more than happy to try them out.

>  If so I would have expected a much higher difference versus
> baseline as zeroing/faulting the pages in the host gets expensive fairly
> quick. What is the host kernel you are running your test on? I'm just
> wondering if there is some additional overhead currently limiting your
> setup. My host kernel was just the same kernel I was running in the guest,
> just built without the patches applied.

Right now I have a different host kernel. I can install the same kernel on the
host as well and see if that changes anything.

>
>> Other observations:
>> - I also tried running Alexander's latest v11 page-reporting patch set and
>>   observed a similar average degradation in the number of processes
>>   and threads.
>> - I didn't include the linear component recorded by will-it-scale because for
>>   some reason it was fluctuating too much even when I was using an unmodified
>>   kernel. If required I can investigate this further.
>>
>> Note: If there is a better way to analyze the will-it-scale/page_fault results
>> then please do let me know.
> Honestly I have mostly just focused on the processes performance.

In my observation the process numbers seem to be the most consistent in general.

>  There is
> usually a fair bit of variability but a pattern forms after a few runs so
> you can generally tell if a configuration is an improvement or not.

Yeah, that's why I thought of taking the average of 5-6 runs.

>
>> Other setup details:
>> Following are the configurations which I enabled to run my tests:
>> - Enabled: CONFIG_SLAB_FREELIST_RANDOM & CONFIG_SHUFFLE_PAGE_ALLOCATOR
>> - Set host THP to always
>> - Set guest THP to madvise
>> - Added the suggested madvise call in page_fault source code.
>> @Alexander please let me know if I missed something.
> This seems about right.
>
>> The current state of my v13:
>> I still have to look into Michal's suggestion of using the page-isolation APIs
>> instead of isolating the page myself. However, I believe at this moment our
>> objective is to decide which approach we can proceed with, and that's why I
>> decided to post the numbers after making small required changes on top of v12
>> instead of posting a new series.
>>
>>
>> Following are the changes which I have made on top of my v12:
>>
>> page_reporting.h change:
>> -#define PAGE_REPORTING_MIN_ORDER               (MAX_ORDER - 2)
>> -#define PAGE_REPORTING_MAX_PAGES               16
>> +#define PAGE_REPORTING_MIN_ORDER              (MAX_ORDER - 1)
>> +#define PAGE_REPORTING_MAX_PAGES              32
>>
>> page_reporting.c change:
>> @@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
>>                 /* Process only if the page is still online */
>>                 page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
>>                                           zone->base_pfn);
>> -               if (!page)
>> +               if (!page || !PageBuddy(page)) {
>> +                       clear_bit(setbit, zone->bitmap);
>> +                       atomic_dec(&zone->free_pages);
>>                         continue;
>> +               }
>>
> I suspect the zone->free_pages is going to be expensive for you to deal
> with. It is a global atomic value and is going to cause bouncing of the
> cacheline it is contained in. As a result, things like setting the
> bitmap will be more expensive, as every time a CPU increments free_pages it
> will likely have to take the cache line containing the bitmap pointer as
> well.

I see, I will have to explore this more. I am wondering if there is a way to
measure this if its effect is not visible in will-it-scale/page_fault1. If
there is a noticeable amount of degradation, I will have to address it.
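
If it does turn out to be a problem, one direction I could look at is something
like the rough sketch below (purely illustrative, not implemented or measured;
the structure and helper names are hypothetical): a percpu_counter for the free
page count, kept off the cacheline that holds the bitmap pointer.

#include <linux/percpu_counter.h>

struct page_reporting_state {
	unsigned long *bitmap;
	/* keep the counter away from the cacheline holding the bitmap pointer */
	struct percpu_counter free_pages ____cacheline_aligned_in_smp;
};

/* percpu_counter_init()/percpu_counter_destroy() at setup/teardown omitted */

/* hot path: called when a bit is set for a newly freed page */
static inline void reporting_page_freed(struct page_reporting_state *state)
{
	percpu_counter_inc(&state->free_pages);
}

/* reporting side: an approximate read is enough to decide whether a
 * bitmap scan is worthwhile */
static inline bool reporting_worth_scanning(struct page_reporting_state *state,
					    long threshold)
{
	return percpu_counter_read_positive(&state->free_pages) >= threshold;
}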

>
>> @Alexander in case you decide to give it a try and find different results,
>> please do let me know.
>>
>> [1] https://lore.kernel.org/lkml/20190812131235.27244-1-nitesh@redhat.com/
>>
>>
> If I have some free time I will take a look.

That would be great, thanks.

>  However one thing that
> concerns me about this change is that it will limit things much further in
> terms of how much memory can ultimately be freed since you are now only
> working with the highest order page and that becomes a hard requirement
> for your design.

I would assume that should be resolved with (pageblock_order - 1).

>
-- 
Nitesh
