Message-ID: <87a88pgwv0.fsf@vitty.brq.redhat.com>
Date: Mon, 13 Mar 2017 13:54:59 +0100
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: Igor Mammedov <imammedo@...hat.com>,
Heiko Carstens <heiko.carstens@...ibm.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Greg KH <gregkh@...uxfoundation.org>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
David Rientjes <rientjes@...gle.com>,
Daniel Kiper <daniel.kiper@...cle.com>,
linux-api@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
linux-s390@...r.kernel.org, xen-devel@...ts.xenproject.org,
linux-acpi@...r.kernel.org, qiuxishi@...wei.com,
toshi.kani@....com, xieyisheng1@...wei.com, slaoub@...il.com,
iamjoonsoo.kim@....com, vbabka@...e.cz
Subject: Re: [RFC PATCH] mm, hotplug: get rid of auto_online_blocks

Michal Hocko <mhocko@...nel.org> writes:
> On Mon 13-03-17 11:55:54, Igor Mammedov wrote:
>> > >
>> > > - suggested RFC is not acceptable from virt point of view
>> > > as it regresses guests on top of x86 kvm/vmware which
>> > > both use ACPI based memory hotplug.
>> > >
>> > > - udev/userspace solution doesn't work in practice as it's
>> > > too slow and unreliable when system is under load which
>> > > is quite common in virt usecase. That's why auto online
>> > > has been introduced in the first place.
>> >
>> > Please try to be more specific why "too slow" is a problem. Also how
>> > much slower are we talking about?
>>
>> In the virt case, on a host running lots of VMs, the userspace handler
>> can get scheduled late enough to trigger a race between the guest
>> running out of free memory (the OOM handler firing) and the hot-added
>> memory actually coming online.
>
> Either you are mixing two things together or this doesn't really make
> much sense. So is this ballooning based on memory hotplug (aka active
> memory hotadd initiated between guest and host automatically) or a guest
> asking for additional memory by other means (pay more for memory etc.)?
> Because if this is an administrative operation then I seriously question
> this reasoning.

I'm probably repeating myself, but it seems this point was lost: this
is not really 'ballooning', it is pure memory hotplug. People may have
any tools monitoring their VMs' memory usage, and when a VM runs low on
memory they may want to hotplug more memory to it. With udev-style
memory onlining, those tools have to account for the page tables and
other in-kernel structures which need to be allocated for the new
memory, so they have to add memory slowly and gradually or they risk
running into OOM (at least getting some processes killed, and these
processes may be important). With in-kernel memory hotplug everything
happens synchronously, so no such 'slowly and gradually' algorithm is
needed in every tool which may trigger memory hotplug.
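
To make the difference concrete, here is a rough sketch of the two
onlining paths as I remember them from current distros/kernels, so
treat the exact rule text and paths as illustrative:

  # Userspace path: each hot-added memory block emits an 'add' uevent
  # and a udev rule like this onlines it asynchronously, whenever udev
  # gets around to running it:
  SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", \
      ATTR{state}="online"

  # In-kernel path: blocks are onlined synchronously as part of the
  # hot-add itself when the knob is set (or when the kernel is built
  # with CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE):
  echo online > /sys/devices/system/memory/auto_online_blocks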

It's not about slowness, it's about being synchronous vs. asynchronous.
This is not tied to any particular virtualization technology; the use
case is the same for every one of them that supports memory hotplug.
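
For example, while a guest is already under pressure you can often
catch hot-added blocks still sitting offline, waiting for udev to get
scheduled. A quick check (assuming the standard memory sysfs layout):

  # Count blocks the host has already given us but the guest cannot
  # use yet because nothing has onlined them:
  grep -lx offline /sys/devices/system/memory/memory*/state | wc -l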
--
Vitaly