Message-ID: <9cd0dcde-f257-1b94-17d0-f2e24a3ce979@intel.com>
Date: Fri, 16 Apr 2021 07:26:43 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Michal Hocko <mhocko@...e.com>,
Dave Hansen <dave.hansen@...ux.intel.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
yang.shi@...ux.alibaba.com, rientjes@...gle.com,
ying.huang@...el.com, dan.j.williams@...el.com, david@...hat.com,
osalvador@...e.de, weixugc@...gle.com
Subject: Re: [PATCH 00/10] [v7][RESEND] Migrate Pages in lieu of discard
On 4/16/21 5:35 AM, Michal Hocko wrote:
> I have to confess that I haven't grasped the initialization
> completely. There is a nice comment explaining a 2 socket system with
> 3 different NUMA nodes attached to it with one node being terminal.
> This is OK if the terminal node is PMEM, but how does that fit into
> usual NUMA setups? E.g.
> 4 nodes each with its set of CPUs
> node distances:
> node   0   1   2   3
>   0:  10  20  20  20
>   1:  20  10  20  20
>   2:  20  20  10  20
>   3:  20  20  20  10
> Do I get it right that Node 3 would be terminal?
Yes, I think Node 3 would end up being the terminal node in that setup.
That said, I'm not sure how much I expect folks to use this on
traditional, non-tiered setups. It's also hard to argue what the
migration order *should* be when all the nodes are uniform.
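
Just so we're looking at the same thing: the demotion path in the series
boils down to a per-node lookup table, roughly like the sketch below (the
names here may not match the patches exactly). A "terminal" node is simply
one whose entry stays NUMA_NO_NODE, so reclaim on it never attempts
migration and falls back to the normal discard path.

/* Rough sketch of the demotion path table, not the exact patch code. */
static int node_demotion[MAX_NUMNODES] __read_mostly =
	{[0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE};

/* Get the next node in the demotion path, or NUMA_NO_NODE if terminal. */
int next_demotion_node(int node)
{
	return node_demotion[node];
}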
> - The demotion is controlled by node_reclaim_mode but unlike other modes
> it applies to both direct and kswapd reclaims.
> I do not see that explained anywhere though.
That's an interesting observation. Let me do a bit of research and I'll
update the Documentation/ and the changelog.
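
For the record, the opt-in I have in mind is just another node_reclaim_mode
bit, something along these lines (a sketch only; the bit value and helper
name here are not final):

/* Sketch: gate demotion behind a new node_reclaim_mode bit. */
#define RECLAIM_MIGRATE		(1 << 3)	/* migrate cold pages to a slower tier */

static inline bool numa_demotion_enabled(void)
{
	return node_reclaim_mode & RECLAIM_MIGRATE;
}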
> - The demotion is implemented at shrink_page_list level which migrates
> pages in the first round and then falls back to the regular reclaim
> when migration fails. This means that the reclaim context
> (PF_MEMALLOC) will allocate memory so it has access to full memory
> reserves. Btw. I do not see __GFP_NOMEMALLOC anywhere in the allocation
> mask which looks like a bug rather than an intention. Btw. using
> GFP_NOWAIT in the allocation callback would make more things clear
> IMO.
Yes, the lack of __GFP_NOMEMALLOC is a bug. I'll fix that up.
GFP_NOWAIT _seems_ like it will work. I'll give it a shot.
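
Concretely, the allocation for the demotion target should end up with a
mask along these lines (a sketch of the intent, not necessarily the final
patch):

/*
 * Sketch: allocate on the target node only, fail fast and quietly, and
 * never dip into the PF_MEMALLOC reserves from the reclaim context.
 */
static struct page *alloc_demote_page(struct page *page, unsigned long node)
{
	struct migration_target_control mtc = {
		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
			    __GFP_THISNODE | __GFP_NOWARN |
			    __GFP_NOMEMALLOC | GFP_NOWAIT,
		.nid = node,
	};

	return alloc_migration_target(page, (unsigned long)&mtc);
}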
> - Memcg reclaim is excluded from all this because it is not NUMA aware
> which makes sense to me.
> - Anonymous pages are a bit tricky because they can be demoted even when
> they cannot be reclaimed due to no (or no available) swap storage.
> Unless I have missed something, the second round will try to reclaim
> them even when the latter is true, and I am not sure this is
> completely OK.
What we want is something like this:
	Swap Space / Demotion OK  -> Can Reclaim
	Swap Space / Demotion Off -> Can Reclaim
	Swap Full  / Demotion OK  -> Can Reclaim
	Swap Full  / Demotion Off -> No Reclaim
I *think* that's what can_reclaim_anon_pages() ends up doing. Maybe I'm
misunderstanding what you are referring to, though. By "second round"
did you mean when we do reclaim on a node which is a terminal node?
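
In code form, the behavior I'm aiming for is roughly this (a sketch of the
intent; can_demote() stands in for the series' demotion check, and the real
helper may differ in detail):

static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
					  int nid, struct scan_control *sc)
{
	if (!memcg) {
		/* Global reclaim: is there space in any swap device? */
		if (get_nr_swap_pages() > 0)
			return true;
	} else {
		/* Memcg reclaim: is the memcg below its swap limit? */
		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
			return true;
	}

	/* No swap available, but the pages might still be demotable. */
	return can_demote(nid, sc);
}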
> I am still trying to digest the whole thing but at least jamming
> node_reclaim logic into kswapd seems strange to me. Need to think more
> about that though.
I'm entirely open to other ways to do the opt-in. It seemed sane at the
time, but I also understand the kswapd concern.
> Btw. do you have any numbers from running this with some real work
> workload?
Yes, quite a bit. Do you have a specific scenario in mind? Folks seem
to come at this in two different ways:
Some want to know how much DRAM they can replace by buying some PMEM.
They tend to care about how much adding the (cheaper) PMEM slows them
down versus (expensive) DRAM. They're making a cost-benefit call.
Others want to repurpose some PMEM they already have. They want to know
how much using PMEM in this way will speed them up. They will basically
take any speedup they can get.
I ask because as a kernel developer with PMEM in my systems, I find the
"I'll take what I can get" case more personally appealing. But, the
business folks are much more keen on the "DRAM replacement" use. Do you
have any thoughts on what you would like to see?