Message-ID: <6bcd7066-2748-8a96-4479-f85b18765948@suse.cz>
Date: Fri, 7 Oct 2016 14:44:15 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Minchan Kim <minchan@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Sangseok Lee <sangseok.lee@....com>
Subject: Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
On 10/07/2016 07:45 AM, Minchan Kim wrote:
> There is a race between page freeing and unreserving highatomic pageblocks.
>
> CPU 0 CPU 1
>
> free_hot_cold_page
> mt = get_pfnblock_migratetype
so here mt == MIGRATE_HIGHATOMIC?
> set_pcppage_migratetype(page, mt)
> unreserve_highatomic_pageblock
> spin_lock_irqsave(&zone->lock)
> move_freepages_block
> set_pageblock_migratetype(page)
> spin_unlock_irqrestore(&zone->lock)
> free_pcppages_bulk
> __free_one_page(mt) <- mt is stale
>
> Due to the above race, a page on CPU 0 could end up on a
> non-highatomic free list since the pageblock's type has changed.
> Because of that, the unreserve logic of highatomic can decrease the
> reserved count on the same pageblock several times, creating a
> mismatch between nr_reserved_highatomic and the number of reserved
> pageblocks.
Hmm I see.
> So, this patch verifies whether the pageblock is highatomic and
> decreases the count only if it is.
Yeah I guess that's the easiest solution.
> Signed-off-by: Minchan Kim <minchan@...nel.org>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/page_alloc.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e7cbb3cc22fa..d110cd640264 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2133,13 +2133,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> continue;
>
> /*
> - * It should never happen but changes to locking could
> - * inadvertently allow a per-cpu drain to add pages
> - * to MIGRATE_HIGHATOMIC while unreserving so be safe
> - * and watch for underflows.
> + * In the page freeing path, the migratetype change is
> + * racy so we can encounter several free pages of a
> + * pageblock in this loop although we changed the
> + * pageblock type from highatomic to ac->migratetype.
> + * So we should adjust the count only once per pageblock.
> */
> - zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
> - zone->nr_reserved_highatomic);
> + if (get_pageblock_migratetype(page) ==
> + MIGRATE_HIGHATOMIC) {
> + /*
> + * It should never happen but changes to
> + * locking could inadvertently allow a per-cpu
> + * drain to add pages to MIGRATE_HIGHATOMIC
> + * while unreserving so be safe and watch for
> + * underflows.
> + */
> + zone->nr_reserved_highatomic -= min(
> + pageblock_nr_pages,
> + zone->nr_reserved_highatomic);
> + }
>
> /*
> * Convert to ac->migratetype and avoid the normal
>