Message-ID: <20200924015700.GA3145@L-31X9LVDL-1304.local>
Date: Thu, 24 Sep 2020 09:57:00 +0800
From: Wei Yang <richard.weiyang@...ux.alibaba.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: David Hildenbrand <david@...hat.com>, osalvador@...e.de,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-hyperv@...r.kernel.org, xen-devel@...ts.xenproject.org,
linux-acpi@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Michael Ellerman <mpe@...erman.id.au>,
Michal Hocko <mhocko@...nel.org>,
Mike Rapoport <rppt@...nel.org>,
Scott Cheloha <cheloha@...ux.ibm.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Wei Liu <wei.liu@...nel.org>,
Wei Yang <richard.weiyang@...ux.alibaba.com>
Subject: Re: [PATCH RFC 0/4] mm: place pages to the freelist tail when onlining
and undoing isolation
On Wed, Sep 23, 2020 at 04:31:25PM +0200, Vlastimil Babka wrote:
>On 9/16/20 9:31 PM, David Hildenbrand wrote:
>>
>>
>>> Am 16.09.2020 um 20:50 schrieb osalvador@...e.de:
>>>
>>> On 2020-09-16 20:34, David Hildenbrand wrote:
>>>> When adding separate memory blocks via add_memory*() and onlining them
>>>> immediately, the metadata (especially the memmap) of the next block will be
>>>> placed onto one of the just added+onlined blocks. This creates a chain
>>>> of unmovable allocations: if the last memory block cannot get
>>>> offlined+removed, neither can any of the blocks it depends on. We directly
>>>> have unmovable allocations all over the place.
>>>>
>>>> This can be observed quite easily using virtio-mem; however, it can also
>>>> be observed when using DIMMs. The freshly onlined pages will usually be
>>>> placed at the head of the freelists, meaning they will be allocated next,
>>>> usually turning the just-added memory un-removable immediately. The
>>>> fresh pages are cold; preferring to allocate others (that might be hot)
>>>> first also seems to be the natural thing to do.
>>>>
>>>> It also applies to the Hyper-V balloon, the Xen balloon, and ppc64 dlpar:
>>>> when adding separate, successive memory blocks, each memory block will have
>>>> unmovable allocations on it - for example, gigantic pages will fail to
>>>> allocate.
>>>>
>>>> While ZONE_NORMAL doesn't provide any guarantees that memory can get
>>>> offlined+removed again (any kind of fragmentation with unmovable
>>>> allocations is possible), there are many scenarios (hotplugging a lot of
>>>> memory, running a workload, hotunplugging some memory/as much as possible)
>>>> where we can offline+remove quite a lot with this patchset.
>>>
>>> Hi David,
>>>
>>
>> Hi Oscar.
>>
>>> I did not read through the patchset yet, so sorry if the question is nonsense, but is this not trying to fix the same issue the vmemmap patches did? [1]
>>
>> Not nonsense at all. It only helps to some degree, though. It solves the dependencies due to the memmap. However, it's not completely ideal, especially for single memory blocks.
>>
>> With single memory blocks (virtio-mem, xen-balloon, hv balloon, ppc dlpar) you still have unmovable allocations (vmemmap chunks) all over the physical address space. Consider the gigantic page example after hotplug: you directly fragmented all hotplugged memory.
>>
>> Of course, there might be (less extreme) dependencies due to page tables for the identity mapping, extended struct pages, and similar.
>>
>> Having said that, there are other benefits to preferring other memory over just-hotplugged memory. Think about adding+onlining memory during boot (DIMMs under QEMU, virtio-mem): once the system is up, you will have most (or all) of that memory completely untouched.
>>
>> So while vmemmap on hotplugged memory would tackle some part of the issue, there are cases where this approach is better, and there are even benefits when combining both.
>
>I see the point, but I don't think the head/tail mechanism is great for this. It
>might sort of work, but with other interfering activity there are no guarantees
>and it relies on a subtle implementation detail. There are better mechanisms
>possible I think, such as preparing a larger MIGRATE_UNMOVABLE area in the
>existing memory before we allocate those long-term management structures. Or
>onlining a bunch of blocks as ZONE_MOVABLE first and only converting them to
>ZONE_NORMAL later, in a controlled way, when the existing normal zone becomes
>depleted?
>
To be honest, David's approach is easy for me to understand, and I don't
see any negative side effects.
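
For anyone skimming the thread, the mechanism under discussion boils down
to which end of the buddy freelist a freshly onlined page lands on. Below
is a minimal user-space sketch of that distinction, not kernel code: the
struct and helper names are invented for illustration. In the real kernel
the lists live in struct free_area, pages are linked via page->lru with
list_add()/list_add_tail(), and allocation takes pages from the head of
the list (see __rmqueue_smallest()).

/*
 * Minimal model of a buddy freelist. Illustration only: all names
 * below are made up; they are not the kernel's APIs.
 */
#include <stdio.h>

struct page { unsigned long pfn; struct page *prev, *next; };
struct freelist { struct page head; };		/* sentinel node */

static void freelist_init(struct freelist *fl)
{
	fl->head.prev = fl->head.next = &fl->head;
}

/* Today: freed/onlined pages are put at the head of the list. */
static void free_page_head(struct freelist *fl, struct page *p)
{
	p->next = fl->head.next;
	p->prev = &fl->head;
	fl->head.next->prev = p;
	fl->head.next = p;
}

/* The patchset's idea: place just-onlined (cold) pages at the tail. */
static void free_page_tail(struct freelist *fl, struct page *p)
{
	p->prev = fl->head.prev;
	p->next = &fl->head;
	fl->head.prev->next = p;
	fl->head.prev = p;
}

/* Allocation always takes from the head of the list. */
static struct page *alloc_one(struct freelist *fl)
{
	struct page *p = fl->head.next;

	if (p == &fl->head)
		return NULL;
	p->prev->next = p->next;
	p->next->prev = p->prev;
	return p;
}

int main(void)
{
	struct freelist fl;
	struct page old = { .pfn = 100 };	/* was already free */
	struct page fresh = { .pfn = 200 };	/* just onlined */

	freelist_init(&fl);
	free_page_head(&fl, &old);
	free_page_tail(&fl, &fresh);

	/* The pre-existing page is handed out first; the hotplugged
	 * page stays free - and thus removable - as long as possible. */
	printf("first alloc: pfn %lu\n", alloc_one(&fl)->pfn);	/* 100 */
	return 0;
}

If free_page_head() were used for the fresh page as well (today's
behaviour), the very next allocation would hit the hotplugged block,
which is exactly the chain of unmovable allocations the cover letter
describes.
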
>I guess it's an issue that the e.g. 128M block onlines are so disconnected from
>each other that it's hard to employ a strategy that works best for e.g. a whole
>bunch of GBs onlined at once. But I noticed some effort towards a new API, so
>maybe that will be solved there too?
>
>> Thanks!
>>
>> David
>>
>>>
>>> I was about to give it a new respin now that the hwpoison stuff has been settled.
>>>
>>> [1] https://patchwork.kernel.org/cover/11059175/
>>>
>>
--
Wei Yang
Help you, Help me