Message-ID: <ZUzs2YfXY3zBKIx9@casper.infradead.org>
Date: Thu, 9 Nov 2023 14:29:45 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "zhangpeng (AS)" <zhangpeng362@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
lstoakes@...il.com, hughd@...gle.com, david@...hat.com,
fengwei.yin@...el.com, vbabka@...e.cz, mgorman@...e.de,
mingo@...hat.com, riel@...hat.com, ying.huang@...el.com,
hannes@...xchg.org, Nanyong Sun <sunnanyong@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [Question]: major faults are still triggered after mlockall when
numa balancing
On Thu, Nov 09, 2023 at 03:11:41PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 09, 2023 at 09:47:24PM +0800, zhangpeng (AS) wrote:
> > Is there any way to avoid such a major fault?
>
> man madvise
but from the mlockall manpage:
mlockall() locks all pages mapped into the address space of the calling
process. This includes the pages of the code, data, and stack segment,
as well as shared libraries, user space kernel data, shared memory, and
memory-mapped files. All mapped pages are guaranteed to be resident in
RAM when the call returns successfully; the pages are guaranteed to
stay in RAM until later unlocked.
https://pubs.opengroup.org/onlinepubs/9699919799/functions/mlockall.html
isn't quite so explicit, but I do think that page cache should be locked
into memory.