Message-ID: <6e28c8ce-96e1-5a1e-bd06-d1df5856094e@linux.alibaba.com>
Date: Fri, 28 Jun 2019 11:52:46 -0700
From: Yang Shi <yang.shi@...ux.alibaba.com>
To: Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Hillf Danton <hdanton@...a.com>, Roman Gushchin <guro@...com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm, vmscan: prevent useless kswapd loops
On 6/27/19 6:55 PM, Shakeel Butt wrote:
> On production we have noticed hard lockups on large machines running
> large jobs due to kswapd hoarding the lru lock within isolate_lru_pages
> when sc->reclaim_idx is 0, which is a small zone. The lru was a couple
> hundred GiBs and the condition (page_zonenum(page) > sc->reclaim_idx) in
> isolate_lru_pages was basically skipping GiBs of pages while holding the
> LRU spinlock with interrupts disabled.
>
> On further inspection, it seems like there are two issues:
>
> 1) If kswapd, on the return from balance_pgdat(), could not sleep (maybe
> all zones are still unbalanced), the classzone_idx is unintentionally set
> to 0, and the whole reclaim cycle of kswapd will try to reclaim only the
> lowest and smallest zone while traversing the whole memory.
>
> 2) Fundamentally isolate_lru_pages() is really bad when the allocation
> has woken kswapd for a smaller zone on a very large machine running very
> large jobs. It can hoard the LRU spinlock while skipping over 100s of
> GiBs of pages.
>
> This patch only fixes (1). (2) needs a more fundamental solution.
>
> Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> ---
> mm/vmscan.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9e3292ee5c7c..786dacfdfe29 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3908,7 +3908,7 @@ static int kswapd(void *p)
>
> /* Read the new order and classzone_idx */
> alloc_order = reclaim_order = pgdat->kswapd_order;
> - classzone_idx = kswapd_classzone_idx(pgdat, 0);
> + classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
I'm a little bit confused by the fix. What happens if kswapd is woken
for a lower zone? It looks like kswapd may just reclaim the higher zone
instead of the lower zone?
For example, after bootup, classzone_idx should be (MAX_NR_ZONES - 1).
If GFP_DMA is used for allocation and kswapd is woken up for ZONE_DMA,
kswapd_classzone_idx would still return (MAX_NR_ZONES - 1) since
kswapd_classzone_idx(pgdat, classzone_idx) returns the max classzone_idx.
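For reference, kswapd_classzone_idx() reads roughly like the below
(paraphrased from mm/vmscan.c, not verbatim):

static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
					   enum zone_type classzone_idx)
{
	/* No wakeup recorded since the last cycle: use the caller's hint */
	if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
		return classzone_idx;

	/* Otherwise never go below the highest zone a wakeup asked for */
	return max(pgdat->kswapd_classzone_idx, classzone_idx);
}

So with the patch, passing the previous classzone_idx back in means a
wakeup for a low zone like ZONE_DMA alone can never lower the reclaim
target below the stale value from the previous cycle.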
> pgdat->kswapd_order = 0;
> pgdat->kswapd_classzone_idx = MAX_NR_ZONES;
>
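For context, the expensive skip in isolate_lru_pages() that the
changelog describes is roughly this (again paraphrased, not verbatim):

	/*
	 * Pages from zones above the reclaim target are only moved
	 * aside and counted, but the whole walk happens under the lru
	 * lock with irqs disabled -- with sc->reclaim_idx == 0 this
	 * can skip over GiBs of pages before isolating anything.
	 */
	if (page_zonenum(page) > sc->reclaim_idx) {
		list_move(&page->lru, &pages_skipped);
		nr_skipped[page_zonenum(page)]++;
		continue;
	}

which is why keeping classzone_idx from unintentionally dropping to 0
matters so much here.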