Message-ID: <edae2736-3239-0bdc-499c-560fc234c974@redhat.com>
Date:   Fri, 28 Feb 2020 09:22:56 +0100
From:   David Hildenbrand <david@...hat.com>
To:     "Huang, Ying" <ying.huang@...el.com>,
        Matthew Wilcox <willy@...radead.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Mel Gorman <mgorman@...e.de>,
        Vlastimil Babka <vbabka@...e.cz>, Zi Yan <ziy@...dia.com>,
        Michal Hocko <mhocko@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Minchan Kim <minchan@...nel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Hugh Dickins <hughd@...gle.com>,
        Alexander Duyck <alexander.duyck@...il.com>
Subject: Re: [RFC 0/3] mm: Discard lazily freed pages when migrating

On 28.02.20 08:25, Huang, Ying wrote:
> Hi, Matthew,
> 
> Matthew Wilcox <willy@...radead.org> writes:
> 
>> On Fri, Feb 28, 2020 at 11:38:16AM +0800, Huang, Ying wrote:
>>> MADV_FREE is a lazy free mechanism in Linux.  According to the manpage
>>> of madvise(2), the semantics of MADV_FREE are,
>>>
>>>   The application no longer requires the pages in the range specified
>>>   by addr and len.  The kernel can thus free these pages, but the
>>>   freeing could be delayed until memory pressure occurs. ...
>>>
>>> Currently, pages freed lazily via MADV_FREE are only actually freed
>>> by page reclaim when there is memory pressure, or when the address
>>> range is unmapped.  Migration offers another opportunity to actually
>>> free these pages, when we try to migrate them.
>>>
>>> The main value of doing so is to avoid creating new memory pressure
>>> on the migration target if possible.  Even if the pages are needed
>>> again, they will be allocated gradually, on demand.  That is, the
>>> memory is allocated lazily when necessary, following the common
>>> philosophy in the Linux kernel of allocating resources lazily, on
>>> demand.
>>
>> Do you have an example program which does this (and so benefits)?
> 
> Sorry, what exactly do you mean by "this" here?  Calling
> madvise(,,MADV_FREE)?  Or migrating pages?
> 
>> If so, can you quantify the benefit at all?
> 
> The question is, what is the right workload?  For example, I can
> construct a scenario as below that shows a benefit.

We usually don't optimize for theoretical issues. Is there a real-life
workload you are trying to optimize this code for?

> 
> - run program A on node 0 with many lazily freed pages
> 
> - run program B on node 1, so that free memory on node 1 is low
> 
> - migrate program A from node 0 to node 1, so that program B is
>   affected by the memory pressure created by migrating the lazily
>   freed pages.
> 

E.g., free page reporting in QEMU wants to use MADV_FREE. The guest will
report currently free pages to the hypervisor, which will MADV_FREE the
reported memory. As long as there is no memory pressure, there is no
need to actually free the pages. Once the guest reuses such a page, the
old page may still be there, and pulling in a fresh (zeroed) page can
be avoided.

AFAICS, after your change, we would get more pages discarded from our
guest, resulting in more fresh (zeroed) pages having to be pulled in
when the guest touches a reported free page again. But OTOH, page
migration is sped up (these pages don't have to be migrated).

However, one important question: will you always discard memory when
migrating pages, or only if there is memory pressure on the migration
target?

-- 
Thanks,

David / dhildenb
