Message-Id: <201510262044.BAI43236.FOMSFFOtOVLJQH@I-love.SAKURA.ne.jp>
Date: Mon, 26 Oct 2015 20:44:09 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: mhocko@...nel.org
Cc: rientjes@...gle.com, oleg@...hat.com,
torvalds@...ux-foundation.org, kwalker@...hat.com, cl@...ux.com,
akpm@...ux-foundation.org, hannes@...xchg.org,
vdavydov@...allels.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, skozina@...hat.com
Subject: Newbie's question: memory allocation when reclaiming memory

May I ask a newbie question? Say there is some number of memory pages
which can be reclaimed once they are flushed to storage. A lower layer
may itself issue memory allocation requests while flushing to storage,
using a mask which avoids reclaim deadlock (e.g. GFP_NOFS or GFP_NOIO),
right?
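
For concreteness, here is a minimal sketch of the pattern I mean
(hypothetical my_fs_writepage()/my_blk_submit(), illustration only, not
real kernel code): the filesystem flush path allocates with GFP_NOFS so
that direct reclaim cannot re-enter the filesystem, and the block layer
below it allocates with GFP_NOIO so that direct reclaim cannot issue
further I/O.

#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical filesystem writeback routine needing a scratch buffer. */
static int my_fs_writepage(struct page *page)
{
	/* GFP_NOFS: direct reclaim from here must not recurse into the FS. */
	void *scratch = kmalloc(PAGE_SIZE, GFP_NOFS);

	if (!scratch)
		return -ENOMEM;
	/* ... encode the page into scratch, then hand it to the block layer ... */
	kfree(scratch);
	return 0;
}

/* Hypothetical block layer submission path making its own allocation. */
static int my_blk_submit(void *data)
{
	/* GFP_NOIO: direct reclaim from here must not issue further I/O. */
	void *req = kmalloc(64, GFP_NOIO);

	if (!req)
		return -ENOMEM;
	/* ... build and submit the request described by data ... */
	kfree(req);
	return 0;
}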
What I'm worried about is the following dependency chain:

  - __GFP_FS allocation requests see that there are reclaimable pages
    and therefore never call out_of_memory();
  - the GFP_NOFS allocation requests which those __GFP_FS requests
    depend on (in order to flush to storage) are waiting for GFP_NOIO
    allocation requests;
  - the GFP_NOIO allocation requests which the GFP_NOFS requests depend
    on (in order to flush to storage) are waiting for memory pages to be
    reclaimed, again without calling out_of_memory();

and gfp_to_alloc_flags() favors neither GFP_NOIO over GFP_NOFS nor
GFP_NOFS over __GFP_FS, so all of these allocations are throttled at
the same watermark level.
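
To make the cycle explicit, here is a toy userspace model (plain C, all
names made up, nothing here is actual kernel code) where each level's
allocation waits on the level below, and reclaim itself waits on the top
level finishing its flush:

#include <stdbool.h>

static bool pages_reclaimed;	/* becomes true once dirty pages hit storage */

static bool alloc_noio(void)
{
	/* GFP_NOIO: can only wait for reclaim that needs no further I/O. */
	return pages_reclaimed;
}

static bool alloc_nofs(void)
{
	/* The GFP_NOFS flush path needs a GFP_NOIO allocation to submit I/O. */
	return alloc_noio();
}

static bool alloc_fs(void)
{
	/*
	 * __GFP_FS sees reclaimable dirty pages, so it skips
	 * out_of_memory() and tries to flush them instead, which
	 * needs the GFP_NOFS allocation above.
	 */
	return alloc_nofs();
}

int main(void)
{
	/*
	 * pages_reclaimed can only become true after alloc_fs()
	 * succeeds in flushing, but alloc_fs() transitively waits on
	 * pages_reclaimed: nobody calls out_of_memory() and nobody
	 * makes progress -- a silent livelock.
	 */
	while (!alloc_fs())
		;	/* spins forever */
	return 0;
}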
How do we guarantee that GFP_NOFS/GFP_NOIO allocations make forward
progress? What mechanism guarantees that the memory pages which
__GFP_FS allocation requests are waiting for are eventually reclaimed?
I assume there is some mechanism; otherwise we could hit a silent
livelock, couldn't we?