Message-ID: <alpine.DEB.2.00.0910221227010.21601@chino.kir.corp.google.com>
Date: Thu, 22 Oct 2009 12:41:42 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Mel Gorman <mel@....ul.ie>
cc: Frans Pop <elendil@...net.nl>, Jiri Kosina <jkosina@...e.cz>,
Sven Geggus <lists@...hsschwanzdomain.de>,
Karol Lewandowski <karol.k.lewandowski@...il.com>,
Tobias Oetiker <tobi@...iker.ch>,
"Rafael J. Wysocki" <rjw@...k.pl>,
David Miller <davem@...emloft.net>,
Reinette Chatre <reinette.chatre@...el.com>,
Kalle Valo <kalle.valo@....fi>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Mohamed Abbas <mohamed.abbas@...el.com>,
Jens Axboe <jens.axboe@...cle.com>,
"John W. Linville" <linville@...driver.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Bartlomiej Zolnierkiewicz <bzolnier@...il.com>,
Greg Kroah-Hartman <gregkh@...e.de>,
Stephan von Krawczynski <skraw@...net.com>,
Kernel Testers List <kernel-testers@...r.kernel.org>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
"linux-mm@...ck.org\"" <linux-mm@...ck.org>
Subject: Re: [PATCH 4/5] page allocator: Pre-emptively wake kswapd when
high-order watermarks are hit
On Thu, 22 Oct 2009, Mel Gorman wrote:
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7f2aa3e..851df40 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1596,6 +1596,17 @@ try_next_zone:
> return page;
> }
>
> +static inline
> +void wake_all_kswapd(unsigned int order, struct zonelist *zonelist,
> + enum zone_type high_zoneidx)
> +{
> + struct zoneref *z;
> + struct zone *zone;
> +
> + for_each_zone_zonelist(zone, z, zonelist, high_zoneidx)
> + wakeup_kswapd(zone, order);
> +}
> +
> static inline int
> should_alloc_retry(gfp_t gfp_mask, unsigned int order,
> unsigned long pages_reclaimed)
> @@ -1730,18 +1741,18 @@ __alloc_pages_high_priority(gfp_t gfp_mask, unsigned int order,
> congestion_wait(BLK_RW_ASYNC, HZ/50);
> } while (!page && (gfp_mask & __GFP_NOFAIL));
>
> - return page;
> -}
> -
> -static inline
> -void wake_all_kswapd(unsigned int order, struct zonelist *zonelist,
> - enum zone_type high_zoneidx)
> -{
> - struct zoneref *z;
> - struct zone *zone;
> + /*
> + * If after a high-order allocation we are now below watermarks,
> + * pre-emptively kick kswapd rather than having the next allocation
> + * fail and have to wake up kswapd, potentially failing GFP_ATOMIC
> + * allocations or entering direct reclaim
> + */
> + if (unlikely(order) && page && !zone_watermark_ok(preferred_zone, order,
> + preferred_zone->watermark[ALLOC_WMARK_LOW],
> + zone_idx(preferred_zone), ALLOC_WMARK_LOW))
> + wake_all_kswapd(order, zonelist, high_zoneidx);
>
> - for_each_zone_zonelist(zone, z, zonelist, high_zoneidx)
> - wakeup_kswapd(zone, order);
> + return page;
> }
>
> static inline int
Hmm, is this really supposed to be added to __alloc_pages_high_priority()?
By the patch description I was expecting kswapd to be woken up
preemptively whenever the preferred zone is below ALLOC_WMARK_LOW and
we're known to have just allocated at a higher order, not just when
current was oom killed (when we should already be freeing a _lot_ of
memory soon) or is doing a higher order allocation during direct reclaim.
For the best coverage, the branch would have to be added to the fastpath.
That seems fine as a debugging aid and as a way to see whether progress is
being made on the GFP_ATOMIC allocation issues, but it doesn't seem like it
should make its way to mainline: by the time this wakeup runs, the
subsequent GFP_ATOMIC allocation could already be happening and already be
in the page allocator's slowpath, which makes the wakeup unnecessary.
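
To illustrate the placement I have in mind, something along these lines in
the fastpath of __alloc_pages_nodemask() is what the description suggested
to me. This is only a rough, untested sketch that reuses the patch's own
watermark check unchanged; the surrounding fastpath context is from memory
and may not match your tree exactly:

	/* First allocation attempt */
	page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
			zonelist, high_zoneidx, ALLOC_WMARK_LOW|ALLOC_CPUSET,
			preferred_zone, migratetype);
	if (unlikely(!page)) {
		page = __alloc_pages_slowpath(gfp_mask, order,
				zonelist, high_zoneidx, nodemask,
				preferred_zone, migratetype);
	} else if (unlikely(order) &&
		   !zone_watermark_ok(preferred_zone, order,
				preferred_zone->watermark[ALLOC_WMARK_LOW],
				zone_idx(preferred_zone), ALLOC_WMARK_LOW)) {
		/*
		 * The high-order allocation succeeded but left the
		 * preferred zone below its low watermark, so kick
		 * kswapd now rather than waiting for the next caller
		 * to miss the fastpath.
		 */
		wake_all_kswapd(order, zonelist, high_zoneidx);
	}
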
If this is moved to the fastpath, why is this wake_all_kswapd() and not
wakeup_kswapd(preferred_zone, order)? Do we need to kick kswapd in all
zones, even though they may still have plenty of free pages, just because
preferred_zone is now below the watermark?
Wouldn't it be better to do this on page_zone(page) instead of
preferred_zone anyway?
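
i.e., perhaps something like this instead (again just an untested sketch,
keeping the patch's watermark expression):

	if (unlikely(order) && page) {
		struct zone *zone = page_zone(page);

		/* only poke kswapd for the zone we actually allocated from */
		if (!zone_watermark_ok(zone, order,
				zone->watermark[ALLOC_WMARK_LOW],
				zone_idx(zone), ALLOC_WMARK_LOW))
			wakeup_kswapd(zone, order);
	}
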