Message-ID: <CAJuCfpFgi+Dph-dcDAvGQXwgeZVDBhok1UQ3X5kxFEfPQnxSSg@mail.gmail.com>
Date: Thu, 31 Mar 2022 12:26:27 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Zhaoyang Huang <huangzhaoyang@...il.com>,
"zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
cgroups mailinglist <cgroups@...r.kernel.org>,
Ke Wang <ke.wang@...soc.com>
Subject: Re: [RFC PATCH] cgroup: introduce dynamic protection for memcg
On Thu, Mar 31, 2022 at 4:35 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Thu 31-03-22 19:18:58, Zhaoyang Huang wrote:
> > On Thu, Mar 31, 2022 at 5:01 PM Michal Hocko <mhocko@...e.com> wrote:
> > >
> > > On Thu 31-03-22 16:00:56, zhaoyang.huang wrote:
> > > > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > > >
> > > > For some kinds of memcg the usage varies greatly between scenarios. For
> > > > example, a multimedia app's usage can range from 50MB to 500MB, depending
> > > > on whether a special algorithm is loaded into its virtual address space,
> > > > which makes it hard to protect the expanded usage without userspace
> > > > interaction.
> > >
> > > Do I get it correctly that the concern you have is that you do not know
> > > how much memory your workload will need because that depends on some
> > > parameters?
> > Right. For example, a camera APP will expand its usage from 50MB to 500MB
> > when a special function is launched (face beauty etc. need a special
> > algorithm).
> > >
> > > > Furthermore, a fixed
> > > > memory.low somewhat works against its role of soft protection, as it
> > > > responds to any system memory pressure in the same way.
> > >
> > > Could you be more specific about this as well?
> > As in the camera case above, if we set memory.low to 200MB to keep the
> > APP running smoothly, the system will experience high memory pressure when
> > another high-load APP is launched simultaneously. I would like the
> > camera to be reclaimed in this scenario.
>
> OK, so you effectively want to keep the memory protection when there is
> "normal" memory pressure but want to relax the protection in other
> high memory utilization situations?
>
> How exactly do you tell the difference between steady memory pressure
> (say stream IO on the page cache) and a "high load APP launched"? Should
> you reduce the protection in the stream IO situation as well?
IIUC what you are implementing here is a "memory allowance boost"
feature and it seems you are implementing it entirely inside the
kernel, while only userspace knows when to apply this boost (say at
app launch time). This does not make sense to me.
>
> [...]
> > > One very important thing that I am missing here is the overall objective of this
> > > tuning. From the above it seems that you want to (ab)use memory->low to
> > > protect some portion of the charged memory and that the protection
> > > shrinks over time depending on the global PSI metric and time.
> > > But why is this a good thing?
> > 'Good' means it meets my original goal of keeping the usage for a
> > period of time while responding to the system's memory pressure. For an
> > Android-like system, memory is almost always tight no matter how much
> > RAM it has. What we need from memcg is more than control and grouping;
> > we need it to be more responsive to the system's load and able to
> > sacrifice its usage under certain criteria.
>
> Why are the existing tools/APIs insufficient for that? You can watch for
> both global and memcg memory pressure, including PSI metrics, and update
> limits dynamically. Why is it necessary to put such logic into the
> kernel?
I had exactly the same thought while reading through this.
In Android you would probably need to implement a userspace service
which would temporarily relax the memcg limits when required, monitor
PSI levels and adjust the limits accordingly.
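To make that concrete, here is a rough sketch of what such a service could
look like: it arms a PSI trigger on the global memory pressure file and drops
the camera memcg's memory.low when the threshold fires. The paths, the
150ms/1s trigger and the 50M value are illustrative assumptions only; the
actual policy (which memcgs, how much, for how long, whether to watch the
per-memcg memory.pressure instead) is entirely up to the service.

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define PSI_PATH    "/proc/pressure/memory"
#define MEMCG_LOW   "/sys/fs/cgroup/apps/camera/memory.low"
/* Wake up when "some" tasks are stalled >150ms within a 1s window. */
#define PSI_TRIGGER "some 150000 1000000"

static void set_memory_low(const char *val)
{
	FILE *f = fopen(MEMCG_LOW, "w");

	if (f) {
		fprintf(f, "%s\n", val);
		fclose(f);
	}
}

int main(void)
{
	struct pollfd pfd = { .events = POLLPRI };

	pfd.fd = open(PSI_PATH, O_RDWR | O_NONBLOCK);
	if (pfd.fd < 0)
		return 1;
	if (write(pfd.fd, PSI_TRIGGER, strlen(PSI_TRIGGER) + 1) < 0)
		return 1;

	for (;;) {
		if (poll(&pfd, 1, -1) < 0)
			break;
		if (pfd.revents & (POLLERR | POLLHUP))
			break;		/* trigger is gone */
		if (pfd.revents & POLLPRI) {
			/* System-wide memory pressure crossed the threshold:
			 * give up part of the protection so the camera memcg
			 * becomes reclaimable. Restoring it later is policy,
			 * not shown here. */
			set_memory_low("50M");
		}
	}
	close(pfd.fd);
	return 0;
}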
>
> --
> Michal Hocko
> SUSE Labs