Message-ID: <CAJD7tkYtJcC6zYqy5vWeaB=1Rv16gY=q+OG7vF_Oc=DmVk24GA@mail.gmail.com>
Date: Thu, 15 Jun 2023 18:44:46 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: 程垲涛 Chengkaitao Cheng
<chengkaitao@...iglobal.com>, "tj@...nel.org" <tj@...nel.org>,
"lizefan.x@...edance.com" <lizefan.x@...edance.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"corbet@....net" <corbet@....net>,
"roman.gushchin@...ux.dev" <roman.gushchin@...ux.dev>,
"shakeelb@...gle.com" <shakeelb@...gle.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"brauner@...nel.org" <brauner@...nel.org>,
"muchun.song@...ux.dev" <muchun.song@...ux.dev>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"zhengqi.arch@...edance.com" <zhengqi.arch@...edance.com>,
"ebiederm@...ssion.com" <ebiederm@...ssion.com>,
"Liam.Howlett@...cle.com" <Liam.Howlett@...cle.com>,
"chengzhihao1@...wei.com" <chengzhihao1@...wei.com>,
"pilgrimtao@...il.com" <pilgrimtao@...il.com>,
"haolee.swjtu@...il.com" <haolee.swjtu@...il.com>,
"yuzhao@...gle.com" <yuzhao@...gle.com>,
"willy@...radead.org" <willy@...radead.org>,
"vasily.averin@...ux.dev" <vasily.averin@...ux.dev>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"surenb@...gle.com" <surenb@...gle.com>,
"sfr@...b.auug.org.au" <sfr@...b.auug.org.au>,
"mcgrof@...nel.org" <mcgrof@...nel.org>,
"sujiaxun@...ontech.com" <sujiaxun@...ontech.com>,
"feng.tang@...el.com" <feng.tang@...el.com>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v3 0/2] memcontrol: support cgroup level OOM protection

On Thu, Jun 15, 2023 at 3:39 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Tue 13-06-23 13:24:24, Yosry Ahmed wrote:
> > On Tue, Jun 13, 2023 at 5:06 AM Michal Hocko <mhocko@...e.com> wrote:
> > >
> > > On Tue 13-06-23 01:36:51, Yosry Ahmed wrote:
> > > > +David Rientjes
> > > >
> > > > On Tue, Jun 13, 2023 at 1:27 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > >
> > > > > On Sun 04-06-23 01:25:42, Yosry Ahmed wrote:
> > > > > [...]
> > > > > > There has been a parallel discussion in the cover letter thread of v4
> > > > > > [1]. To summarize, at Google, we have been using OOM scores to
> > > > > > describe different job priorities in a more explicit way -- regardless
> > > > > > of memory usage. It is strictly priority-based OOM killing. Ties are
> > > > > > broken based on memory usage.
> > > > > >
> > > > > > We understand that something like memory.oom.protect has an advantage
> > > > > > in the sense that you can skip killing a process if you know that it
> > > > > > won't free enough memory anyway, but for an environment where multiple
> > > > > > jobs of different priorities are running, we find it crucial to be
> > > > > > able to define strict ordering. Some jobs are simply more important
> > > > > > than others, regardless of their memory usage.
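
To make that selection rule concrete, a rough sketch (the struct and its
fields are purely illustrative, not an existing interface; assume a higher
score means higher priority, i.e. less likely to be killed):

struct candidate {
	int score;		/* per-job OOM score (illustrative) */
	unsigned long usage;	/* memory usage (illustrative) */
};

/* Return true if 'a' should be killed before 'b'. */
static bool kill_before(const struct candidate *a,
			const struct candidate *b)
{
	if (a->score != b->score)
		return a->score < b->score;	/* lower priority dies first */
	return a->usage > b->usage;		/* tie: larger usage dies first */
}
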
> > > > >
> > > > > I do remember that discussion. I am not a great fan of simple
> > > > > priority-based interfaces TBH. It sounds like an easy interface but it
> > > > > hits complications as soon as you try to define a proper/sensible
> > > > > hierarchical semantic. I can see how they might work on leaf memcgs with
> > > > > statically assigned priorities, but that sounds like a very narrow
> > > > > usecase IMHO.
> > > >
> > > > Do you mind elaborating the problem with the hierarchical semantics?
> > >
> > > Well, let me be more specific. If you have a simple hierarchical numeric
> > > enforcement (assume a higher priority is more likely to be chosen and the
> > > effective priority to be max(self, max(parents))), then the semantic
> > > itself is straightforward.
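
Right, and just to spell that semantic out, here is a rough sketch,
assuming a hypothetical 'oom_priority' field on struct mem_cgroup (no such
field exists today):

/* Hypothetical: memcg->oom_priority is not an existing field. */
static int mem_cgroup_effective_oom_priority(struct mem_cgroup *memcg)
{
	int prio = memcg->oom_priority;

	/* Effective priority is the max over self and all ancestors. */
	for (memcg = parent_mem_cgroup(memcg); memcg;
	     memcg = parent_mem_cgroup(memcg))
		prio = max(prio, memcg->oom_priority);

	return prio;
}

i.e. a child can raise, but never lower, the effective priority it
inherits from its ancestors.
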
> > >
> > > I am not really sure about the practical manageability though. I have a
> > > hard time imagining priority assignment on something like a shared
> > > workload with a more complex hierarchy. For example:
> > >             root
> > >           /   |   \
> > >     cont_A  cont_B  cont_C
> > >
> > > each container running its workload with its own hierarchy structures
> > > that might be rather dynamic during their lifetime. In order to have a
> > > predictable OOM behavior you need to watch and reassign priorities all
> > > the time, no?
> >
> > In our case we don't really manage the entire hierarchy in a
> > centralized fashion. Each container gets a score based on its
> > relative priority, and each container is free to set scores within its
> > subcontainers if needed. Isn't this what the hierarchy is all about?
> > Each parent only cares about its direct children. On the system level,
> > we care about the priority ordering of containers. Ordering within
> > containers can be deferred to the containers themselves.
>
> This really depends on the workload. This might be working for your
> setup but as I've said above, many workloads would struggle with
> re-prioritizing as soon as a new workload is started and oom priorities
> would need to be reorganized as a result. The setup is just too static
> to be generally useful IMHO.
>
> You can avoid that by essentially assigning no priority to the mid-layers
> and relying only on leaf memcgs, which would be more flexible. This is
> something even more complicated with the top-down approach.

I agree that other setups may find it more difficult if one entity
needs to manage the entire tree, although if the score range is large
enough, I don't really think it's that static. When a new workload is
started you decide what its priority is compared to the existing
workloads and set its score accordingly. We use a range of scores from 0
to 10,000 (and it can easily be larger), so it's easy to assign new
scores without reorganizing the existing ones.
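
For illustration only, with a made-up per-cgroup knob (say
"memory.oom.score", which is not an upstream interface), bringing up a
new, higher-priority container does not require touching the scores of
the existing ones:

#include <stdio.h>

/* Write 'score' to a hypothetical "memory.oom.score" file in 'cgroup'. */
static int set_oom_score(const char *cgroup, int score)
{
	char path[4096];
	FILE *f;

	snprintf(path, sizeof(path), "%s/memory.oom.score", cgroup);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", score);
	return fclose(f);
}

int main(void)
{
	set_oom_score("/sys/fs/cgroup/cont_A", 2000);
	set_oom_score("/sys/fs/cgroup/cont_B", 5000);
	/* New container with a higher priority slots in on its own. */
	set_oom_score("/sys/fs/cgroup/cont_D", 7500);
	return 0;
}
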
>
> That being said, I can see workloads which could benefit from a
> priority (essentially user spaced controlled oom pre-selection) based
> policy. But there are many other policies like that that would be
> usecase specific and not generic enough so I do not think this is worth
> a generic interface and would fall into BPF or alike based policies.

That's reasonable. I can't speak for other folks. Perhaps no single
policy will be generic enough, and we should focus on enabling
customized policies instead. Perhaps other userspace OOM agents can
benefit from this as well.
>
> --
> Michal Hocko
> SUSE Labs