Message-ID: <YEJW+dzF9/BNIiqn@dhcp22.suse.cz>
Date: Fri, 5 Mar 2021 17:06:17 +0100
From: Michal Hocko <mhocko@...e.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, joaodias@...gle.com,
surenb@...gle.com, cgoldswo@...eaurora.org, willy@...radead.org,
david@...hat.com, vbabka@...e.cz, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: disable LRU pagevec during the migration
temporarily
On Wed 03-03-21 12:23:22, Minchan Kim wrote:
> On Wed, Mar 03, 2021 at 01:49:36PM +0100, Michal Hocko wrote:
> > On Tue 02-03-21 13:09:48, Minchan Kim wrote:
> > > The LRU pagevec holds a refcount on its pages until the pagevec is
> > > drained. This can prevent migration, since the refcount of the page is
> > > greater than what the migration logic expects. To mitigate the issue,
> > > callers of migrate_pages drain the LRU pagevec via migrate_prep or
> > > lru_add_drain_all before calling migrate_pages.
> > >
> > > However, that is not enough, because pages that enter a pagevec after
> > > the draining call can still sit in the pagevec and keep preventing page
> > > migration. Since some callers of migrate_pages have retry logic that
> > > drains the LRU, the page would migrate on the next trial, but this is
> > > still fragile in that it does not close the fundamental race between
> > > pages entering the pagevec and the migration, so the migration failure
> > > could in the end cause a contiguous memory allocation failure.
> > >
> > > To close the race, this patch disables the LRU caches (i.e., pagevec)
> > > while migration is ongoing, until the migration is done.
> > >
> > > Since the issue is really hard to reproduce, I measured how many times
> > > migrate_pages retried in force mode, using the debug code below.
> > >
> > > int migrate_pages(struct list_head *from, new_page_t get_new_page,
> > > 	..
> > > 	..
> > >
> > > 	if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
> > > 		printk(KERN_ERR "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
> > > 		dump_page(page, "fail to migrate");
> > > 	}
> > >
> > > The test repeatedly launched Android apps, with a CMA allocation running
> > > in the background every five seconds. The total CMA allocation count was
> > > about 500 during the test. With this patch, the dump_page count was
> > > reduced from 400 to 30.
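
Just to make the discussed pattern concrete, here is a minimal sketch of a
migrate_pages() caller bracketed by such a disable/enable pair. The
lru_cache_disable()/lru_cache_enable() names and the caller itself are only
illustrative assumptions, not necessarily what the patch adds:

	/*
	 * Illustrative sketch only.  A plain lru_add_drain_all() before
	 * migrate_pages() drains what is in the per-CPU pagevecs *now*, but
	 * pages added afterwards can sit there again with an extra refcount
	 * and make migration fail.  Bracketing the migration with an assumed
	 * lru_cache_disable()/lru_cache_enable() pair keeps new pages out of
	 * the pagevecs for the whole operation.
	 */
	static int migrate_range_sketch(struct list_head *pagelist, int nid)
	{
		struct migration_target_control mtc = {
			.nid = nid,
			.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
		};
		int ret;

		lru_cache_disable();	/* assumed: drain and keep pagevecs disabled */

		ret = migrate_pages(pagelist, alloc_migration_target, NULL,
				    (unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE);
		if (ret)
			putback_movable_pages(pagelist);

		lru_cache_enable();	/* assumed: let per-CPU LRU caching resume */

		return ret;
	}
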
> >
> > Have you seen any improvement on the CMA allocation success rate?
>
> Unfortunately, the CMA allocation failure rate is really hard to reproduce
> with a reasonable margin of error under a real workload.
> That's why I measured the soft metric instead of direct CMA failures
> under a real workload (I didn't want to build some ad-hoc artificial
> benchmark and keep tuning system knobs until it showed an
> extremely exaggerated result just to make the patch look convincing).
>
> Please say so if you believe this work is pointless without stable data
> from a reproducible scenario. I am happy to drop it.
Well, I am not saying that this is pointless. In the end, the resulting
change is relatively small and it provides useful functionality for
other users (e.g. hotplug). That should be a sufficient justification.
I was asking about the CMA allocation success rate because that is a much
more reasonable metric than how many times something has retried: retries
can help increase the success rate, and the patch doesn't really
remove those. If you want to use the number of retries as a metric, then the
average allocation latency would be more meaningful.
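E.g. something as simple as timing the cma_alloc() calls in a test driver
would do; a rough sketch (the cma area, sizes and iteration count are
arbitrary assumptions):

	#include <linux/cma.h>
	#include <linux/ktime.h>
	#include <linux/math64.h>

	/* Average cma_alloc() latency over a number of iterations (sketch). */
	static u64 avg_cma_alloc_latency_ns(struct cma *cma, size_t count,
					    unsigned int iterations)
	{
		u64 total_ns = 0;
		unsigned int i;

		for (i = 0; i < iterations; i++) {
			ktime_t start = ktime_get();
			struct page *page = cma_alloc(cma, count, 0, false);

			total_ns += ktime_to_ns(ktime_sub(ktime_get(), start));
			if (page)
				cma_release(cma, page, count);
		}

		return iterations ? div_u64(total_ns, iterations) : 0;
	}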
--
Michal Hocko
SUSE Labs