Message-ID: <20200527200313.GD396660@carbon.dhcp.thefacebook.com>
Date: Wed, 27 May 2020 13:03:13 -0700
From: Roman Gushchin <guro@...com>
To: Johannes Weiner <hannes@...xchg.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>, <linux-mm@...ck.org>,
Vlastimil Babka <vbabka@...e.cz>, <kernel-team@...com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4 06/19] mm: memcg/slab: obj_cgroup API
On Wed, May 27, 2020 at 03:56:14PM -0400, Johannes Weiner wrote:
> On Tue, May 26, 2020 at 02:42:14PM -0700, Roman Gushchin wrote:
> > @@ -257,6 +257,98 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr)
> > }
> >
> > #ifdef CONFIG_MEMCG_KMEM
> > +extern spinlock_t css_set_lock;
> > +
> > +static void obj_cgroup_release(struct percpu_ref *ref)
> > +{
> > + struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
> > + struct mem_cgroup *memcg;
> > + unsigned int nr_bytes;
> > + unsigned int nr_pages;
> > + unsigned long flags;
> > +
> > + /*
> > + * At this point all allocated objects are freed, and
> > + * objcg->nr_charged_bytes can't have an arbitrary byte value.
> > + * However, it can be PAGE_SIZE or (x * PAGE_SIZE).
> > + *
> > + * The following sequence can lead to it:
> > + * 1) CPU0: objcg == stock->cached_objcg
> > + * 2) CPU1: we do a small allocation (e.g. 92 bytes),
> > + * PAGE_SIZE bytes are charged
> > + * 3) CPU1: a process from another memcg is allocating something,
> > + * the stock is flushed,
> > + * objcg->nr_charged_bytes = PAGE_SIZE - 92
> > + * 4) CPU0: we release this object,
> > + * 92 bytes are added to stock->nr_bytes
> > + * 5) CPU0: the stock is flushed,
> > + * 92 bytes are added to objcg->nr_charged_bytes
> > + *
> > + * As a result, objcg->nr_charged_bytes == PAGE_SIZE.
> > + * This page will be uncharged in obj_cgroup_release().
> > + */
>
> Thanks for adding this.
>
> > +int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
> > +{
> > + struct mem_cgroup *memcg;
> > + unsigned int nr_pages, nr_bytes;
> > + int ret;
> > +
> > + if (consume_obj_stock(objcg, size))
> > + return 0;
> > +
> > + rcu_read_lock();
> > + memcg = obj_cgroup_memcg(objcg);
> > + css_get(&memcg->css);
> > + rcu_read_unlock();
>
> Can you please also add the comment here I mentioned last time? To
> explain why we're not checking objcg->nr_charged_bytes if we have
> already pre-allocated bytes that could satisfy the allocation.
I've added a comment to drain_obj_stock(), where nr_charged_bytes is bumped.
But I can add another one here, np.
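
To show the spot I mean, here is a tiny userspace sketch of the drain-side
flush (made-up struct definitions and simplified logic, not the actual kernel
code; it assumes whole pages are uncharged separately and only the sub-page
leftover is accumulated). The comment sits next to that bump:

    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* Toy stand-ins for the kernel structures, just enough for the model. */
    struct obj_stock { unsigned int nr_bytes; };          /* per-cpu stock      */
    struct obj_cgroup { unsigned int nr_charged_bytes; }; /* objcg-wide counter */

    /*
     * Model of the drain side: whole pages in the stock are assumed to be
     * uncharged elsewhere; the sub-page leftover is bumped into
     * objcg->nr_charged_bytes, which is where the new comment lives.
     */
    static void drain_obj_stock_model(struct obj_stock *stock,
                                      struct obj_cgroup *objcg)
    {
            objcg->nr_charged_bytes += stock->nr_bytes % PAGE_SIZE;
            stock->nr_bytes = 0;
    }

    int main(void)
    {
            struct obj_stock stock = { .nr_bytes = PAGE_SIZE - 92 };
            struct obj_cgroup objcg = { .nr_charged_bytes = 0 };

            drain_obj_stock_model(&stock, &objcg);
            printf("nr_charged_bytes = %u\n", objcg.nr_charged_bytes);
            return 0;
    }
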
>
> Otherwise, looks good to me.
Thanks!
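
P.S. A tiny userspace model of the byte accounting in the sequence quoted
above (simplified and hypothetical, not the kernel code), just to show that
the leftovers add up to exactly PAGE_SIZE by the time the objcg is released:

    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    int main(void)
    {
            /* 1) CPU0: objcg == stock->cached_objcg is the precondition */
            unsigned int size = 92;               /* the small object from the example */
            unsigned int cpu0_stock_nr_bytes = 0; /* stock->nr_bytes on CPU0           */
            unsigned int nr_charged_bytes = 0;    /* objcg->nr_charged_bytes           */

            /* 2) CPU1: a small allocation, a whole page is charged up front */
            unsigned int charged = PAGE_SIZE;

            /* 3) CPU1: another memcg allocates, the stock is flushed;
             *    the unused remainder lands in objcg->nr_charged_bytes
             */
            nr_charged_bytes += charged - size;   /* PAGE_SIZE - 92 */

            /* 4) CPU0: the object is released, its bytes go to the local stock */
            cpu0_stock_nr_bytes += size;          /* 92 */

            /* 5) CPU0: the stock is flushed into objcg->nr_charged_bytes */
            nr_charged_bytes += cpu0_stock_nr_bytes;
            cpu0_stock_nr_bytes = 0;

            /* ...leaving exactly one page worth of charge to uncharge on release */
            assert(nr_charged_bytes == PAGE_SIZE);
            printf("nr_charged_bytes = %u\n", nr_charged_bytes);
            return 0;
    }
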