Message-Id: <1630552995.2mupnzoqzs.astroid@bobo.none>
Date: Thu, 02 Sep 2021 13:25:36 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Shijie Huang <shijie@...eremail.onmicrosoft.com>,
Matthew Wilcox <willy@...radead.org>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, song.bao.hua@...ilicon.com,
torvalds@...ux-foundation.org, viro@...iv.linux.org.uk,
Frank Wang <zwang@...erecomputing.com>
Subject: Re: Is it possible to implement the per-node page cache for
programs/libraries?

Excerpts from Matthew Wilcox's message of September 1, 2021 1:25 pm:
> On Wed, Sep 01, 2021 at 11:07:41AM +0800, Shijie Huang wrote:
>> On NUMA systems, we have only one page cache for each file. For
>> programs/shared libraries, remote access takes longer than local
>> access.
>>
>> So, is it possible to implement the per-node page cache for
>> programs/libraries?
>
> At this point, we have no way to support text replication within a
> process. So what you're suggesting (if implemented) would work for
> processes which limit themselves to a single node. That is, if you
> have a system with CPUs 0-3 on node 0 and CPUs 4-7 on node 1, a process
> which only works on node 0 or only works on node 1 will get text on the
> appropriate node.
>
> If there's a process which runs on both nodes 0 and 1, there's no support
> for per-node PGDs. So it will get a mix of pages from nodes 0 and 1,
> and that doesn't necessarily seem like a big win. I haven't yet dived
> into how hard it would be to make mm->pgd a per-node allocation.
>
> I have been thinking about this a bit; one of our internal performance
> teams flagged the potential performance win to me a few months ago.
> I don't have a concrete design for text replication yet; there have been
> various attempts over the years, but none were particularly compelling.
What was not compelling about it?

https://lists.openwall.net/linux-kernel/2007/07/27/112

What are the other attempts?
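
As an aside, the single-node case you describe is something a workload
can already opt into from userspace. A rough, untested sketch using
libnuma (binding both CPUs and memory policy to node 0, so pages
faulted in afterwards come from node-local memory):

/* build with: cc bind0.c -lnuma (hypothetical file name) */
#include <numa.h>
#include <stdio.h>

int main(void)
{
	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	struct bitmask *nodes = numa_allocate_nodemask();
	numa_bitmask_setbit(nodes, 0);
	numa_bind(nodes);	/* restrict CPUs and memory to node 0 */
	numa_free_nodemask(nodes);

	/* exec()ing the real workload here inherits the binding */
	printf("bound to node 0\n");
	return 0;
}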
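
And to make "replication" concrete, a rough userspace approximation
(my illustration only, untested, and not how any of the kernel
attempts worked): copy read-mostly data into node-local memory on
every node, so each thread can read the replica on its own node:

#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	static const char blob[] = "read-mostly data";

	if (numa_available() < 0)
		return 1;

	int max_node = numa_max_node();
	void **replicas = calloc(max_node + 1, sizeof(*replicas));

	for (int node = 0; node <= max_node; node++) {
		/* numa_alloc_onnode() gives pages backed by that node */
		replicas[node] = numa_alloc_onnode(sizeof(blob), node);
		if (!replicas[node])
			return 1;
		memcpy(replicas[node], blob, sizeof(blob));
	}

	/* a reader would pick replicas[numa_node_of_cpu(sched_getcpu())] */
	printf("%s\n", (char *)replicas[0]);

	for (int node = 0; node <= max_node; node++)
		numa_free(replicas[node], sizeof(blob));
	free(replicas);
	return 0;
}

The kernel-side problem is harder precisely because mapped text is
shared through a single page cache per file and a single mm->pgd, so
no per-node indexing like this exists for file-backed pages.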
Thanks,
Nick