Message-ID: <20161007143025.GB3060@bbox>
Date: Fri, 7 Oct 2016 23:30:25 +0900
From: Minchan Kim <minchan@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
Sangseok Lee <sangseok.lee@....com>
Subject: Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
On Fri, Oct 07, 2016 at 02:44:15PM +0200, Vlastimil Babka wrote:
> On 10/07/2016 07:45 AM, Minchan Kim wrote:
> >There is a race between page freeing and unreserving a highatomic pageblock.
> >
> > CPU 0                                    CPU 1
> >
> > free_hot_cold_page
> >   mt = get_pfnblock_migratetype
>
> so here mt == MIGRATE_HIGHATOMIC?
Yes.
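To elaborate a bit: free_hot_cold_page() samples the pageblock's
migratetype before zone->lock is ever taken and stashes it in the page,
so CPU 1 can change the pageblock underneath it. A simplified sketch of
the relevant steps (not the exact mm/page_alloc.c code, details elided):

    /* free_hot_cold_page(), no zone->lock held yet */
    migratetype = get_pfnblock_migratetype(page, pfn);
    set_pcppage_migratetype(page, migratetype);   /* cached in the page */

    /*
     * CPU 1 can now run unreserve_highatomic_pageblock() and change the
     * pageblock from MIGRATE_HIGHATOMIC to something else under
     * zone->lock; the value cached above is not updated.
     */

    /* free_pcppages_bulk(), when the per-cpu list is drained */
    mt = get_pcppage_migratetype(page);           /* still MIGRATE_HIGHATOMIC */
    __free_one_page(page, page_to_pfn(page), zone, 0, mt);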
>
> >   set_pcppage_migratetype(page, mt)
> >                                          unreserve_highatomic_pageblock
> >                                          spin_lock_irqsave(&zone->lock)
> >                                          move_freepages_block
> >                                          set_pageblock_migratetype(page)
> >                                          spin_unlock_irqrestore(&zone->lock)
> > free_pcppages_bulk
> >   __free_one_page(mt) <- mt is stale
> >
> >Because of the above race, the page freed on CPU 0 goes back to the
> >highatomic free list with its stale migratetype even though the
> >pageblock's type has already been changed. A later unreserve pass can
> >find that page and decrease the reserved count for the same pageblock
> >again, which makes nr_reserved_highatomic disagree with the number of
> >actually reserved pageblocks.
>
> Hmm I see.
>
> >So, this patch verifies whether the pageblock is highatomic or not
> >and decreases the count only if the pageblock is highatomic.
>
> Yeah I guess that's the easiest solution.
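Yes. To spell out the intent (just a sketch of the idea, not the exact
hunk from the patch): with zone->lock held,
unreserve_highatomic_pageblock() re-checks the pageblock's current type
before touching the counter, so a stale page sitting on the highatomic
free list cannot decrease it a second time:

    /* in unreserve_highatomic_pageblock(), zone->lock held */
    if (get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC) {
            /*
             * Only adjust the counter while the pageblock is still
             * accounted as highatomic; pages left behind by the race
             * above must not decrease it again.
             */
            zone->nr_reserved_highatomic -= pageblock_nr_pages;
    }
    set_pageblock_migratetype(page, ac->migratetype);
    move_freepages_block(zone, page, ac->migratetype);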
>
> >Signed-off-by: Minchan Kim <minchan@...nel.org>
>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
Thanks, Vlastimil.