Message-Id: <201510302232.FCH52626.OQJOFHSVFFOtLM@I-love.SAKURA.ne.jp>
Date: Fri, 30 Oct 2015 22:32:27 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: mhocko@...nel.org
Cc: hillf.zj@...baba-inc.com, linux-mm@...ck.org,
akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
mgorman@...e.de, hannes@...xchg.org, riel@...hat.com,
rientjes@...gle.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC 1/3] mm, oom: refactor oom detection
Michal Hocko wrote:
> + target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
This open-coded round-up can use the DIV_ROUND_UP() helper:

	target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
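For reference, DIV_ROUND_UP() in include/linux/kernel.h expands to the
same round-up expression, so the two forms compute identical values; a
quick userspace check (the target value below is arbitrary):

	#include <assert.h>
	#include <stdio.h>

	/* DIV_ROUND_UP() as defined in include/linux/kernel.h */
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	#define MAX_STALL_BACKOFF 16

	int main(void)
	{
		unsigned long target = 12345;	/* arbitrary reclaim target */
		unsigned long stall_backoff;

		for (stall_backoff = 0; stall_backoff <= MAX_STALL_BACKOFF;
		     stall_backoff++) {
			unsigned long open_coded = (stall_backoff * target +
					MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;

			assert(open_coded == DIV_ROUND_UP(stall_backoff * target,
							  MAX_STALL_BACKOFF));
		}
		puts("open-coded round-up matches DIV_ROUND_UP()");
		return 0;
	}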
Michal Hocko wrote:
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context.
Is this the dependency I worried about at
http://lkml.kernel.org/r/201510262044.BAI43236.FOMSFFOtOVLJQH@I-love.SAKURA.ne.jp ?
> Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress.
If yes, this feedback mechanism will help avoid an infinite wait loop.
> This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM.
What if all in-flight allocation requests are !__GFP_NOFAIL && !__GFP_FS?
(In other words, either no __GFP_FS allocation is in flight, or every
in-flight __GFP_FS allocation is waiting, with a lock held, for the
completion of operations which themselves depend on !__GFP_FS allocations,
or is waiting for that lock to be released.)
Don't we need to call out_of_memory() even for !__GFP_FS allocations?
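The worry can be modeled in userspace. This is a hypothetical sketch,
not kernel code; the helpers reclaim_progress() and wmark_ok() exist
only for illustration, and __GFP_FS is a stand-in flag:

	#include <stdbool.h>
	#include <stdio.h>

	#define __GFP_FS (1u << 0)	/* stand-in for the real gfp flag */

	static bool reclaim_progress(void) { return false; }	/* writeback stuck */
	static bool wmark_ok(void) { return false; }		/* below the watermark */

	/* grossly simplified allocator slow path */
	static bool alloc_slowpath(unsigned int gfp_mask)
	{
		int round;

		for (round = 0; round < 100; round++) {	/* bounded for the demo */
			if (reclaim_progress() || wmark_ok())
				return true;
			if (!(gfp_mask & __GFP_FS))
				continue;	/* no OOM kill for IO-less reclaim */
			puts("out_of_memory() called");
			return true;
		}
		return false;	/* the real kernel would still be looping */
	}

	int main(void)
	{
		/* every in-flight request lacks __GFP_FS */
		if (!alloc_slowpath(0))
			puts("no request ever reaches out_of_memory()");
		return 0;
	}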
> The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset counter for successful reclaim rounds.
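To see the convergence concretely, here is a minimal userspace model of
the linear backoff described above. MAX_STALL_BACKOFF = 16 follows from
the quoted 1/16 step; the reclaimable/free/watermark numbers are made up,
and the reset-on-progress path is omitted because the model assumes no
round makes progress:

	#include <stdio.h>

	#define MAX_STALL_BACKOFF 16
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		unsigned long reclaimable = 1600;	/* hypothetical reclaimable pages */
		unsigned long free = 100;		/* hypothetical NR_FREE_PAGES */
		unsigned long wmark = 200;		/* hypothetical watermark */
		unsigned int stall_backoff = 0;

		for (;;) {
			unsigned long target = reclaimable;

			/* no progress this round: back off one more 1/16 step */
			if (stall_backoff < MAX_STALL_BACKOFF)
				stall_backoff++;
			target -= DIV_ROUND_UP(stall_backoff * target,
					       MAX_STALL_BACKOFF);

			printf("backoff=%2u target=%lu\n", stall_backoff, target);

			/*
			 * As the target degrades toward zero, only the free
			 * pages count toward the watermark check; failing it
			 * is the signal to proceed to out_of_memory().
			 */
			if (free + target <= wmark) {
				puts("wmark check failed -> proceed to OOM");
				break;
			}
		}
		return 0;
	}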