Message-ID: <Y9MbkuBDI+08AtgN@tpad>
Date:   Thu, 26 Jan 2023 21:32:18 -0300
From:   Marcelo Tosatti <mtosatti@...hat.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Roman Gushchin <roman.gushchin@...ux.dev>,
        Leonardo Brás <leobras@...hat.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Muchun Song <muchun.song@...ux.dev>,
        Andrew Morton <akpm@...ux-foundation.org>,
        cgroups@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining

On Thu, Jan 26, 2023 at 08:20:46PM +0100, Michal Hocko wrote:
> On Thu 26-01-23 15:03:43, Marcelo Tosatti wrote:
> > On Thu, Jan 26, 2023 at 08:41:34AM +0100, Michal Hocko wrote:
> > > On Wed 25-01-23 15:14:48, Roman Gushchin wrote:
> > > > On Wed, Jan 25, 2023 at 03:22:00PM -0300, Marcelo Tosatti wrote:
> > > > > On Wed, Jan 25, 2023 at 08:06:46AM -0300, Leonardo Brás wrote:
> > > > > > On Wed, 2023-01-25 at 09:33 +0100, Michal Hocko wrote:
> > > > > > > On Wed 25-01-23 04:34:57, Leonardo Bras wrote:
> > > > > > > > Disclaimer:
> > > > > > > > a - The cover letter got bigger than expected, so I had to split it into
> > > > > > > >     sections to better organize myself. I am not very comfortable with it.
> > > > > > > > b - Performance numbers below did not include patch 5/5 (Remove flags
> > > > > > > >     from memcg_stock_pcp), which could further improve performance for
> > > > > > > >     drain_all_stock(), but I could only notice the optimization at the
> > > > > > > >     last minute.
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 0 - Motivation:
> > > > > > > > On the current codebase, when drain_all_stock() is run, it will schedule a
> > > > > > > > drain_local_stock() for each cpu that has a percpu stock associated with a
> > > > > > > > descendant of a given root_memcg.
> > > > 
> > > > Do you know what caused those drain_all_stock() calls? I wonder if we should look
> > > > into why we have so many of them and whether we really need them.
> > > > 
> > > > It's either some user action (e.g. reducing memory.max) or some memcg
> > > > entering pre-oom conditions. In the latter case a lot of drain calls can be
> > > > scheduled without a good reason (assuming the cgroup contains multiple tasks running
> > > > on multiple cpus).
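
For context, the two triggers mentioned above correspond roughly to these
call sites (a sketch simplified from mm/memcontrol.c; the surrounding
loops and error handling are elided):

        /* memory_max_write(): user shrinks memory.max, drain once while
         * pushing usage below the new limit. */
        if (!drained) {
                drain_all_stock(memcg);
                drained = true;
                continue;
        }

        /* try_charge_memcg(): pre-oom, drain once before retrying reclaim,
         * so any task hitting the limit on any CPU may schedule drain work. */
        if (!drained) {
                drain_all_stock(mem_over_limit);
                drained = true;
                goto retry;
        }
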
> > > 
> > > I believe I've never gotten a specific answer to that. We
> > > discussed it in the previous version submission
> > > (20221102020243.522358-1-leobras@...hat.com and specifically
> > > Y2TQLavnLVd4qHMT@...p22.suse.cz). Leonardo has mentioned a mix of RT and
> > > isolcpus. I was wondering about using memcgs in RT workloads because
> > > that just sounds odd, but let's say this is indeed the case.
> > 
> > This could be the case. Consider an "edge device" where it is
> > necessary to run an RT workload, and where it might also be useful
> > to run non-realtime applications on the same system.
> > 
> > > Then an RT task or whatever task that is running on an isolated
> > > cpu can have pcp charges.
> > 
> > Usually the RT task (or, more specifically, the realtime-sensitive loop
> > of the application) runs entirely in userspace, but I suppose there
> > could be charges on application startup.
> 
> What is the role of memcg then? If the memory limit is in place and the
> workload doesn't fit, then it will get reclaimed during startup and
> memory would need to be refaulted if not mlocked. If it is mlocked then
> the limit cannot be enforced and startup would likely fail as a
> result of the memcg oom killer.

1) An application which is not time sensitive executes on an isolated CPU,
with memcg control enabled. A per-CPU stock is created for its memcg.

2) The app exits; its per-CPU stock is not drained.

3) A latency-sensitive application starts, the isolated CPU now has stock
to be drained, and:

/*
 * Drains all per-CPU charge caches for given root_memcg resp. subtree
 * of the hierarchy under it.
 */
static void drain_all_stock(struct mem_cgroup *root_memcg)
{
        int cpu, curcpu;

        /* If someone's already draining, avoid running more workers. */
        if (!mutex_trylock(&percpu_charge_mutex))
                return;
        /*
         * Notify other cpus that system-wide "drain" is running
         * We do not care about races with the cpu hotplug because cpu down
         * as well as workers from this path always operate on the local
         * per-cpu data. CPU up doesn't touch memcg_stock at all.
         */
        migrate_disable();
        curcpu = smp_processor_id();
        for_each_online_cpu(cpu) {
                struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
                struct mem_cgroup *memcg;
                bool flush = false;

                rcu_read_lock();
                memcg = stock->cached;
                if (memcg && stock->nr_pages &&
                    mem_cgroup_is_descendant(memcg, root_memcg))
                        flush = true;
                else if (obj_stock_flush_required(stock, root_memcg))
                        flush = true;
                rcu_read_unlock();

                if (flush &&
                    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
                        if (cpu == curcpu)
                                drain_local_stock(&stock->work);
                        else
                                schedule_work_on(cpu, &stock->work);
                }
        }
        migrate_enable();
        mutex_unlock(&percpu_charge_mutex);
}
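
Note the schedule_work_on(cpu, &stock->work) above: when cpu is an isolated
CPU, drain_local_stock() gets queued to run on it and preempts the
latency-sensitive task. The idea of this series, roughly (an illustrative
sketch only; the actual patches spell out the locking and the local/remote
split differently), is to protect the stock so that the draining CPU can
flush a remote CPU's stock itself instead of scheduling work:

struct memcg_stock_pcp {
        spinlock_t stock_lock;  /* illustrative: permits remote access */
        struct mem_cgroup *cached;
        unsigned int nr_pages;
        /* ... */
};

/* Instead of schedule_work_on(cpu, &stock->work): */
static void drain_remote_stock(int cpu)
{
        struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

        spin_lock(&stock->stock_lock);
        drain_stock(stock);     /* runs here, not on the isolated CPU */
        spin_unlock(&stock->stock_lock);
}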

> [...]
> > > > Overall I'm somewhat resistant to the idea of making the generic allocation & free
> > > > paths slower in exchange for an improvement in stock draining. It's not a strong
> > > > objection, but IMO we should avoid doing this without a really strong reason.
> > > 
> > > Are you OK with a simple opt-out on isolated CPUs? That would make
> > > charges slightly slower (atomics on the hierarchy counters vs. a single
> > > pcp adjustment) but it would guarantee that the isolated workload is
> > > predictable, which is the primary objective AFAICS.
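
For concreteness, such an opt-out could look roughly like the sketch below,
inside consume_stock(); the housekeeping predicate used here is only
illustrative:

static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
        ...
        /* Never cache charges on isolated CPUs, so there is never
         * anything to drain there; fall back to the atomic
         * page_counter charge path instead. */
        if (!housekeeping_cpu(smp_processor_id(), HK_TYPE_DOMAIN))
                return false;
        ...
}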
> > 
> > This would make isolated CPUs "second-class citizens": it would be nice
> > to be able to execute non-realtime apps on isolated CPUs as well
> > (think of different periods of the day, one where more realtime
> > apps are required, another where fewer are required).
> 
> The alternative would require making the current lockless implementation
> use locks, introducing potential lock contention that
> could be harmful to regular workloads. Not using pcp caching would make
> the fast path use a few atomics rather than a local pcp update. That is not
> a terrible cost to pay for special-cased workloads which use isolcpus.
> Really, we are not talking about a massive cost to be paid. At least
> nobody has shown that in any numbers.
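
For reference, the two fast paths being compared, simplified from
mm/memcontrol.c and mm/page_counter.c:

        /* pcp hit (current behavior): a single local, IRQ-safe update
         * in consume_stock(). */
        stock = this_cpu_ptr(&memcg_stock);
        if (memcg == stock->cached && stock->nr_pages >= nr_pages)
                stock->nr_pages -= nr_pages;

        /* uncached (the proposed opt-out): one atomic RMW per ancestor
         * in page_counter_try_charge(). */
        for (c = counter; c; c = c->parent) {
                long new = atomic_long_add_return(nr_pages, &c->usage);

                if (new > c->max)
                        goto failed;    /* unwind charges; details elided */
        }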
> 
> > Concrete example: think of a computer handling vRAN traffic near a
> > cell tower. The traffic (and therefore the amount of processing required
> > by realtime applications) might vary during the day.
> > 
> > A user might want to run containers that depend on good memcg charging
> > performance on isolated CPUs.
> 
> What kind of role would memcg play here? Do you want to memory-constrain
> that workload?

See example above.

> -- 
> Michal Hocko
> SUSE Labs
