Date:   Fri, 25 Oct 2019 20:00:32 +0000
From:   Roman Gushchin <guro@...com>
To:     Johannes Weiner <hannes@...xchg.org>
CC:     "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Michal Hocko <mhocko@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Kernel Team <Kernel-team@...com>,
        "Shakeel Butt" <shakeelb@...gle.com>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        "Waiman Long" <longman@...hat.com>,
        Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 09/16] mm: memcg/slab: charge individual slab objects
 instead of pages

On Fri, Oct 25, 2019 at 03:41:18PM -0400, Johannes Weiner wrote:
> On Thu, Oct 17, 2019 at 05:28:13PM -0700, Roman Gushchin wrote:
> > +static inline struct kmem_cache *memcg_slab_pre_alloc_hook(struct kmem_cache *s,
> > +						struct mem_cgroup **memcgp,
> > +						size_t size, gfp_t flags)
> > +{
> > +	struct kmem_cache *cachep;
> > +
> > +	cachep = memcg_kmem_get_cache(s, memcgp);
> > +	if (is_root_cache(cachep))
> > +		return s;
> > +
> > +	if (__memcg_kmem_charge_subpage(*memcgp, size * s->size, flags)) {
> > +		mem_cgroup_put(*memcgp);
> > +		memcg_kmem_put_cache(cachep);
> > +		cachep = NULL;
> > +	}
> > +
> > +	return cachep;
> > +}
> > +
> >  static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> >  					      struct mem_cgroup *memcg,
> >  					      size_t size, void **p)
> >  {
> >  	struct mem_cgroup_ptr *memcg_ptr;
> > +	struct lruvec *lruvec;
> >  	struct page *page;
> >  	unsigned long off;
> >  	size_t i;
> > @@ -439,6 +393,11 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> >  			off = obj_to_index(s, page, p[i]);
> >  			mem_cgroup_ptr_get(memcg_ptr);
> >  			page->mem_cgroup_vec[off] = memcg_ptr;
> > +			lruvec = mem_cgroup_lruvec(page_pgdat(page), memcg);
> > +			mod_lruvec_memcg_state(lruvec, cache_vmstat_idx(s),
> > +					       s->size);
> > +		} else {
> > +			__memcg_kmem_uncharge_subpage(memcg, s->size);
> >  		}
> >  	}
> >  	mem_cgroup_ptr_put(memcg_ptr);
> 
> The memcg_ptr as a collection vessel for object references makes a lot
> of sense. But this code showcases that it should be a first-class
> memory tracking API that the allocator interacts with, rather than
> having to deal with a combination of memcg_ptr and memcg.
> 
> In the two hunks here, on one hand we charge bytes to the memcg
> object, and on the other we handle all the refcounting through a
> separate bucketing object. To support that in the first place, we
> have to overload the memcg API all the way down to try_charge() so
> that it deals in both bytes and pages. This is difficult to follow
> through all the layers.
> 
> What would be better is for this to be an abstraction layer for a
> subpage object tracker that sits on top of the memcg page tracker -
> not unlike the page allocator and the slab allocators themselves.
> 
> And then the slab allocator would only interact with the subpage
> object tracker, and the object tracker would deal with the memcg page
> tracker under the hood.

Yes, the idea makes total sense to me. I'm not sure I like the new naming
yet (I'll have to spend some time with it first), but the idea of moving
the stocks and leftovers down to the memcg_ptr/obj_cgroup level is really
good.

I'll include something based on your proposal in the next version
of the patchset.

Thank you!
