Message-ID: <86a6f2e1-8aed-00fc-fbd7-9250277b201f@linux.intel.com>
Date: Thu, 15 Apr 2021 15:25:05 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Michal Hocko <mhocko@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...el.com>,
Ying Huang <ying.huang@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
David Rientjes <rientjes@...gle.com>,
Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v1 00/11] Manage the top tier memory in a tiered
memory
On 4/8/21 10:18 AM, Shakeel Butt wrote:
>
> Using v1's soft limit like behavior can potentially cause high
> priority jobs to stall to make enough space on top tier memory on
> their allocation path and I think this patchset is aiming to reduce
> that impact by making kswapd do that work. However I think the more
> concerning issue is the low priority job hogging the top tier memory.
>
> The possible ways the low priority job can hog the top tier memory are
> by allocating non-movable memory or by mlocking the memory. (Oh there
> is also pinning the memory but I don't know if there is a user api to
> pin memory?) For the mlocked memory, you need to either modify the
> reclaim code or use a different mechanism for demoting cold memory.
>
> Basically I am saying we should put the upfront control (limit) on the
> usage of top tier memory by the jobs.
>
Circling back to your comment here.

I agree that the soft limit is deficient in the scenario you have
pointed out. Eventually I was aiming for a per-cgroup hard limit on a
memory tier, similar to the v2 memory controller interface (see my
mail in the other thread). That interface should satisfy the hard
constraint you want to place on the low priority jobs.
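[To make the upfront control concrete: cgroup v2 already provides a hard
cap on a cgroup's total memory via memory.max. A tier-aware hard limit
would look analogous; the memory.toptier.max knob below is hypothetical
shorthand for the interface discussed in the other thread, not a knob
that exists today.]

```shell
# Existing cgroup v2 hard cap on total memory (real interface):
mkdir /sys/fs/cgroup/lowprio
echo 4G > /sys/fs/cgroup/lowprio/memory.max

# Hypothetical per-tier analog: cap only top-tier (e.g. DRAM) usage,
# so a low priority job overflows to the slower tier instead of
# hogging top-tier memory.
echo 1G > /sys/fs/cgroup/lowprio/memory.toptier.max
```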
Tim