Date:	Thu, 6 Nov 2014 12:17:49 +0300
From:	Vladimir Davydov <vdavydov@...allels.com>
To:	Christoph Lameter <cl@...ux.com>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>,
	Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>, <linux-mm@...ck.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH -mm 8/8] slab: recharge slab pages to the allocating
 memory cgroup

Hi Christoph,

On Wed, Nov 05, 2014 at 12:43:31PM -0600, Christoph Lameter wrote:
> On Mon, 3 Nov 2014, Vladimir Davydov wrote:
> 
> > +static __always_inline void slab_free(struct kmem_cache *cachep, void *objp);
> > +
> >  static __always_inline void *
> >  slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >  		   unsigned long caller)
> > @@ -3185,6 +3187,10 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
> >  		kmemcheck_slab_alloc(cachep, flags, ptr, cachep->object_size);
> >  		if (unlikely(flags & __GFP_ZERO))
> >  			memset(ptr, 0, cachep->object_size);
> > +		if (unlikely(memcg_kmem_recharge_slab(ptr, flags))) {
> > +			slab_free(cachep, ptr);
> > +			ptr = NULL;
> > +		}
> >  	}
> >
> >  	return ptr;
> > @@ -3250,6 +3256,10 @@ slab_alloc(struct kmem_cache *cachep, gfp_t flags, unsigned long caller)
> >  		kmemcheck_slab_alloc(cachep, flags, objp, cachep->object_size);
> >  		if (unlikely(flags & __GFP_ZERO))
> >  			memset(objp, 0, cachep->object_size);
> > +		if (unlikely(memcg_kmem_recharge_slab(objp, flags))) {
> > +			slab_free(cachep, objp);
> > +			objp = NULL;
> > +		}
> >  	}
> >
> 
> Please do not add code to the hotpaths if its avoidable. Can you charge
> the full slab only when allocated please?

I call memcg_kmem_recharge_slab only on the alloc path; the free path
isn't touched. The overhead added is one function call, which most of
the time only reads and compares two pointers under RCU. This is
comparable to the overhead introduced by memcg_kmem_get_cache, which is
called earlier in slab_alloc/slab_alloc_node.
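To make the cost concrete, here is a minimal userspace sketch of that
fast path. The struct names and the function are hypothetical stand-ins
(the real kernel code works on struct mem_cgroup and struct page under
rcu_read_lock); the point is only that the common case is a single
pointer comparison:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures involved; the
 * real code uses struct mem_cgroup / struct page and RCU, not these. */
struct mem_cgroup { int id; };

struct page_info {
	struct mem_cgroup *memcg;	/* cgroup the slab page is charged to */
};

/*
 * Sketch of the recharge fast path: compare the memcg the slab page is
 * charged to with the allocating task's memcg.  When they match (the
 * common case), nothing is done; only a mismatch would take the slow
 * recharge path.
 */
static int recharge_slab_fastpath(struct page_info *page,
				  struct mem_cgroup *current_memcg)
{
	if (page->memcg == current_memcg)
		return 0;	/* common case: pointers equal, no work */
	return 1;		/* mismatch: would fall back to recharging */
}
```

In the matching case this is one load and one compare, which is why the
per-object overhead stays small relative to memcg_kmem_get_cache.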

Anyway, if you think this is unacceptable, I don't mind dropping the
whole patch set and thinking more about how to fix this per-memcg cache
trickery. What do you think?

Thanks,
Vladimir
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
