Message-ID: <4a864946-a316-3d9c-8780-64c6281276d1@linux.intel.com>
Date: Thu, 15 Apr 2021 15:31:46 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Michal Hocko <mhocko@...e.com>, Shakeel Butt <shakeelb@...gle.com>
Cc: Yang Shi <shy828301@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...el.com>,
Ying Huang <ying.huang@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
David Rientjes <rientjes@...gle.com>,
Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v1 00/11] Manage the top tier memory in a tiered
memory
On 4/9/21 12:24 AM, Michal Hocko wrote:
> On Thu 08-04-21 13:29:08, Shakeel Butt wrote:
>> On Thu, Apr 8, 2021 at 11:01 AM Yang Shi <shy828301@...il.com> wrote:
> [...]
>>> The low priority jobs should be able to be restricted by cpuset, for
>>> example, just keep them on second tier memory nodes. Then all the
>>> above problems are gone.
>
> Yes, if the aim is to isolate some users from certain numa node then
> cpuset is a good fit but as Shakeel says this is very likely not what
> this work is aiming for.
>
>> Yes that's an extreme way to overcome the issue but we can do less
>> extreme by just (hard) limiting the top tier usage of low priority
>> jobs.
>
> Per numa node high/hard limit would help with a more fine grained control.
> The configuration would be tricky though. All low priority memcgs would
> have to be carefully configured to leave enough for your important
> processes. That includes also memory which is not accounted to any
> memcg.
> The behavior of those limits would be quite tricky for OOM situations
> as well due to a lack of NUMA aware oom killer.
>
Another downside of putting limits on individual NUMA nodes is that it
reduces flexibility. For example, two memory nodes may be similar
enough in performance that you really only care about a cgroup not
using more than a threshold of the combined capacity of the two nodes.
But once you put a hard limit on each NUMA node, you are tied to a
fixed allocation partition per node. Perhaps some kernel resources are
pre-allocated primarily from one node; a cgroup may then bump into
that node's limit and fail the allocation, even though it has plenty
of slack on the other node. This makes getting the configuration
right trickier.
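
To make that concrete, here is a toy userspace sketch (not kernel
code; the types and names below are made up for illustration)
contrasting a per-node hard limit with a single limit over the
combined capacity of two equivalent nodes:

#include <stdbool.h>
#include <stddef.h>

struct node_budget {
	size_t used;
	size_t limit;		/* per-node hard limit */
};

/*
 * Per-node limit: the charge fails as soon as the chosen node is
 * full, even if the other node in the same tier has plenty of slack.
 */
static bool charge_per_node(struct node_budget *node, size_t bytes)
{
	if (node->used + bytes > node->limit)
		return false;	/* allocation fails here */
	node->used += bytes;
	return true;
}

/*
 * Combined limit: only the total usage across the two nodes matters,
 * so the request can land on whichever node is preferred without
 * hitting an artificial per-node partition.
 */
static bool charge_combined(struct node_budget *a, struct node_budget *b,
			    bool prefer_a, size_t combined_limit,
			    size_t bytes)
{
	if (a->used + b->used + bytes > combined_limit)
		return false;
	if (prefer_a)
		a->used += bytes;
	else
		b->used += bytes;
	return true;
}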
There are currently some differences of opinion on whether grouping
memory nodes into tiers, and letting each cgroup limit its usage of
those tiers, is desirable. Many people would rather have the
management constraint placed on individual NUMA nodes for each cgroup
instead of at the tier level. We would appreciate feedback from folks
who have insight into how such a NUMA-node-based control interface
would work, so that we can at least agree on the interface in order
to move forward.
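
For discussion, here is a purely hypothetical sketch of the two
interface shapes (the structures, fields, and node count below are
invented for illustration and are not taken from this patch set):

#define NR_MEMORY_NODES	8	/* made-up node count for the sketch */

/* Tier-level control: one limit per memory tier for each cgroup. */
struct memcg_tier_limits {
	unsigned long top_tier_max;	/* e.g. the DRAM nodes */
	unsigned long low_tier_max;	/* e.g. the PMEM nodes */
};

/* Node-level control: one limit per NUMA node for each cgroup. */
struct memcg_node_limits {
	unsigned long node_max[NR_MEMORY_NODES];
};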
Tim