Message-ID: <20170724125015.GJ25221@dhcp22.suse.cz>
Date: Mon, 24 Jul 2017 14:50:15 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Yang Shi <yang.shi@...aro.org>,
Laura Abbott <labbott@...hat.com>,
Vinayak Menon <vinmenon@...eaurora.org>,
zhong jiang <zhongjiang@...wei.com>
Subject: Re: [PATCH 3/4] mm, page_owner: don't grab zone->lock for init_pages_in_zone()

On Thu 20-07-17 15:40:28, Vlastimil Babka wrote:
> init_pages_in_zone() is run under zone->lock, which means a long lock time and
> disabled interrupts on large machines. This is currently not an issue since it
> runs early in boot, but a later patch will change that.
> However, like other pfn scanners, we don't actually need zone->lock even when
> other CPUs are running. The only potentially dangerous operation here is
> reading a bogus buddy page order due to a race, and we already know how to
> handle that. The worst that can happen is that we skip some early allocated
> pages, which should not affect the debugging power of page_owner noticeably.
Makes sense to me.
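
For readers who haven't seen it before, page_order_unsafe() used in the hunk
below is the lockless helper that compaction's pfn scanner already relies on.
Sketched from memory here (the real definition lives in mm/internal.h), the
important detail is the single READ_ONCE(), so the range check and the later
shift see the same value even if the page is concurrently split or merged:

	/*
	 * Illustrative sketch, not a verbatim copy of mm/internal.h:
	 * read the (possibly stale) buddy order exactly once, so the
	 * caller cannot observe two different values for the same check.
	 */
	#define page_order_unsafe(page)	READ_ONCE(page_private(page))

Any value that fails the (0, MAX_ORDER) sanity check is simply discarded,
which is why the worst case is skipping a few early allocated pages rather
than anything nastier.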
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
Acked-by: Michal Hocko <mhocko@...e.com>
> ---
> mm/page_owner.c | 16 ++++++++++------
> 1 file changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 5aa21ca237d9..cf6568d1dc14 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -567,11 +567,17 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
> continue;
>
> /*
> - * We are safe to check buddy flag and order, because
> - * this is init stage and only single thread runs.
> + * To avoid having to grab zone->lock, be a little
> + * careful when reading buddy page order. The only
> + * danger is that we skip too much and potentially miss
> + * some early allocated pages, which is better than
> + * heavy lock contention.
> */
> if (PageBuddy(page)) {
> - pfn += (1UL << page_order(page)) - 1;
> + unsigned long order = page_order_unsafe(page);
> +
> + if (order > 0 && order < MAX_ORDER)
> + pfn += (1UL << order) - 1;
> continue;
> }
>
> @@ -590,6 +596,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
> __set_page_owner_init(page_ext, init_handle);
> count++;
> }
> + cond_resched();
> }
>
> pr_info("Node %d, zone %8s: page owner found early allocated %lu pages\n",
> @@ -600,15 +607,12 @@ static void init_zones_in_node(pg_data_t *pgdat)
> {
> struct zone *zone;
> struct zone *node_zones = pgdat->node_zones;
> - unsigned long flags;
>
> for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
> if (!populated_zone(zone))
> continue;
>
> - spin_lock_irqsave(&zone->lock, flags);
> init_pages_in_zone(pgdat, zone);
> - spin_unlock_irqrestore(&zone->lock, flags);
> }
> }
>
> --
> 2.13.2
--
Michal Hocko
SUSE Labs