Message-ID: <20170911081714.4zc33r7wlj2nnbho@dhcp22.suse.cz>
Date: Mon, 11 Sep 2017 10:17:14 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Reza Arbab <arbab@...ux.vnet.ibm.com>,
Yasuaki Ishimatsu <yasu.isimatu@...il.com>,
qiuxishi@...wei.com, Igor Mammedov <imammedo@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] mm, memory_hotplug: do not fail offlining too early
On Fri 08-09-17 19:26:06, Vlastimil Babka wrote:
> On 09/04/2017 10:21 AM, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@...e.com>
> >
> > Memory offlining can fail just too eagerly under heavy memory pressure.
> >
> > [ 5410.336792] page:ffffea22a646bd00 count:255 mapcount:252 mapping:ffff88ff926c9f38 index:0x3
> > [ 5410.336809] flags: 0x9855fe40010048(uptodate|active|mappedtodisk)
> > [ 5410.336811] page dumped because: isolation failed
> > [ 5410.336813] page->mem_cgroup:ffff8801cd662000
> > [ 5420.655030] memory offlining [mem 0x18b580000000-0x18b5ffffffff] failed
> >
> > Isolation has failed here because the page is not on the LRU. Most
> > probably because it was on the pcp LRU cache or it has been removed
> > from the LRU already but hasn't been freed yet. In both cases the page
> > doesn't look non-migratable, so retrying some more makes sense.
> >
> > __offline_pages seems rather cluttered when it comes to the retry
> > logic. We have 5 retries at maximum and a timeout. We could argue
> > whether the timeout makes sense, but failing just because of a race
> > when somebody isolates a page from the LRU or puts it on a pcp LRU
> > list is just wrong. It only takes a race with a process which unmaps
> > some pages and removes them from the LRU list, and we can fail the
> > whole offline because of something that is a temporary condition and
> > actually not harmful for the offline. Please note that unmovable pages
> > should already be excluded during start_isolate_page_range.
>
> Hmm, the has_unmovable_pages() check doesn't offer any strict guarantees due to
> races, per its comment. Also, at a very quick glance, I see a check where it
> assumes that a MIGRATE_MOVABLE pageblock will have no unmovable pages. There is
> no such guarantee even without races.
Yes, you are right that races are possible, but practically speaking,
non-movable memblocks (outside ZONE_MOVABLE) would very likely contain
reliably unmovable pages, so has_unmovable_pages would bail out. And
ZONE_MOVABLE memblocks with permanently pinned pages sound like a bug
to me.
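
For reference, whether a given memblock sits in ZONE_MOVABLE can be
checked from userspace via sysfs. A minimal sketch follows; the block
number 32 is made up, and it assumes the block is online, in which case
its current zone is listed first in valid_zones:

	/* report whether a memory block sits in ZONE_MOVABLE */
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char buf[64] = "";
		FILE *f = fopen("/sys/devices/system/memory/memory32/valid_zones",
				"r");

		if (!f || !fgets(buf, sizeof(buf), f)) {
			perror("valid_zones");
			return 1;
		}
		fclose(f);
		buf[strcspn(buf, "\n")] = '\0';

		/* for an online block the current zone is printed first */
		printf("memory32: %s\n", strncmp(buf, "Movable", 7) == 0 ?
		       "in ZONE_MOVABLE" : "outside ZONE_MOVABLE");
		return 0;
	}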
> > Fix this by removing the max retry count and only relying on the
> > timeout or interruption by a signal from userspace. Also retry rather
> > than fail when check_pages_isolated sees some !free pages, because
> > those could be a result of the race as well.
> >
> > Signed-off-by: Michal Hocko <mhocko@...e.com>
>
> Even within a movable node where has_unmovable_pages() is a non-issue, you could
> have pinned movable pages where the pinning is not temporary.
Who would pin those pages? Such a page would be unreclaimable as well,
and thus a memory leak, which I would argue is a bug.
> So after this
> patch, this will really keep retrying forever. I'm not saying it's wrong, just
> pointing it out, since the changelog seems to assume that only temporary
> failures are possible and thus unbounded retries are always correct.
> The obvious problem, if we wanted to avoid this, is how to recognize
> non-temporary failures...
Yes, we should be able to distinguish the two, and hopefully we can teach
the migration code to distinguish between an EBUSY (likely permanent) and
an EAGAIN (temporary) failure. This sounds like something we should aim
for long term, I guess. Anyway, as I've said in another email, if somebody
really wants a guarantee of a bounded retry, it is trivial to set up an
alarm and let the resulting signal bail out of the offline (see the
sketch below).

Do you think the changelog should be clearer about this?
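
To illustrate the alarm idea: the offline path checks for pending
signals, so a bounded retry from userspace could look something like
this (a minimal sketch, with a made-up memory block number and error
handling trimmed down):

	/* try to offline a memory block, giving up after 30 seconds */
	#include <errno.h>
	#include <fcntl.h>
	#include <signal.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static void on_alarm(int sig)
	{
		(void)sig;	/* only here to interrupt the blocked write() */
	}

	int main(void)
	{
		struct sigaction sa;
		ssize_t ret;
		int fd;

		/* no SA_RESTART, so the pending SIGALRM makes write() fail */
		memset(&sa, 0, sizeof(sa));
		sa.sa_handler = on_alarm;
		sigaction(SIGALRM, &sa, NULL);

		fd = open("/sys/devices/system/memory/memory32/state", O_WRONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		alarm(30);
		ret = write(fd, "offline", strlen("offline"));
		if (ret < 0 && errno == EINTR)
			fprintf(stderr, "offline timed out, bailing out\n");
		else if (ret < 0)
			perror("write");

		close(fd);
		return ret < 0;
	}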
--
Michal Hocko
SUSE Labs