Message-ID: <CALvZod7pbp0fFUPRnC68qdzkCEUg2YTavq6C6OLxqooCU5VeyQ@mail.gmail.com>
Date: Tue, 19 Dec 2017 10:25:12 -0800
From: Shakeel Butt <shakeelb@...gle.com>
To: Tejun Heo <tj@...nel.org>
Cc: Michal Hocko <mhocko@...nel.org>, Li Zefan <lizefan@...wei.com>,
Roman Gushchin <guro@...com>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Greg Thelen <gthelen@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>, linux-doc@...r.kernel.org
Subject: Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2
On Tue, Dec 19, 2017 at 9:33 AM, Tejun Heo <tj@...nel.org> wrote:
> Hello,
>
> On Tue, Dec 19, 2017 at 09:23:29AM -0800, Shakeel Butt wrote:
>> To provide consistent memory usage history using the current
>> cgroup-v2's 'swap' interface, an additional metric expressing the
>> intersection of memory and swap has to be exposed. Basically memsw is
>> the union of memory and swap. So, that additional metric could be
>
> Exposing anonymous pages with swap backing sounds pretty trivial.
>
>> used to find the union. However, for consistent memory limit
>> enforcement, I don't think there is an easy way to use the current
>> 'swap' interface.
>
> Can you please go into details on why this is important? I get that
> you can't do it as easily w/o memsw but I don't understand why this is
> a critical feature. Why is that?
>
Making the runtime environment an invariant is critical for
simplifying the management of a job whose instances run on different
clusters across the world. Some clusters might have different types
of swap installed while others might have none at all, and the
availability of swap can be dynamic (e.g. a swap medium outage).
So, if users want to run multiple instances of a job across multiple
clusters, they should be able to specify the limits of their jobs
without any knowledge of the specific cluster. In the best case, they
would just submit their jobs without any config and the system would
figure out the right limit and enforce it. And to figure out the
right limit and enforce it, consistent memory usage history and
consistent memory limit enforcement are critical.
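
For illustration, here is a rough userspace sketch of how a
memsw-style (union) usage number could be derived from the v2
counters, assuming an additional "intersection" metric were exposed.
The file name memory.swap.overlap and the cgroup path are made up for
the example; only memory.current and memory.swap.current exist today.

/*
 * Sketch only: memory.swap.overlap is a hypothetical file standing in
 * for the "intersection of memory and swap" metric discussed above.
 */
#include <stdio.h>

static unsigned long long read_counter(const char *path)
{
	unsigned long long val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%llu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	/* hypothetical cgroup path */
	const char *cg = "/sys/fs/cgroup/my-job";
	char path[256];
	unsigned long long mem, swp, overlap;

	snprintf(path, sizeof(path), "%s/memory.current", cg);
	mem = read_counter(path);
	snprintf(path, sizeof(path), "%s/memory.swap.current", cg);
	swp = read_counter(path);
	snprintf(path, sizeof(path), "%s/memory.swap.overlap", cg);
	overlap = read_counter(path);

	/* inclusion-exclusion: union = memory + swap - intersection */
	printf("memsw-like usage: %llu bytes\n", mem + swp - overlap);
	return 0;
}

The same inclusion-exclusion (memory + swap - intersection) is all a
management agent would need to reproduce a v1 memsw-style usage
number on top of the v2 interface.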
thanks,
Shakeel