Message-ID: <20200527195614.GC47905@cmpxchg.org>
Date:   Wed, 27 May 2020 15:56:14 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Roman Gushchin <guro@...com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Michal Hocko <mhocko@...nel.org>,
        Shakeel Butt <shakeelb@...gle.com>, linux-mm@...ck.org,
        Vlastimil Babka <vbabka@...e.cz>, kernel-team@...com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 06/19] mm: memcg/slab: obj_cgroup API

On Tue, May 26, 2020 at 02:42:14PM -0700, Roman Gushchin wrote:
> @@ -257,6 +257,98 @@ struct cgroup_subsys_state *vmpressure_to_css(struct vmpressure *vmpr)
>  }
>  
>  #ifdef CONFIG_MEMCG_KMEM
> +extern spinlock_t css_set_lock;
> +
> +static void obj_cgroup_release(struct percpu_ref *ref)
> +{
> +	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
> +	struct mem_cgroup *memcg;
> +	unsigned int nr_bytes;
> +	unsigned int nr_pages;
> +	unsigned long flags;
> +
> +	/*
> +	 * At this point all allocated objects are freed, and
> +	 * objcg->nr_charged_bytes can't have an arbitrary byte value.
> +	 * However, it can be PAGE_SIZE or (x * PAGE_SIZE).
> +	 *
> +	 * The following sequence can lead to it:
> +	 * 1) CPU0: objcg == stock->cached_objcg
> +	 * 2) CPU1: we do a small allocation (e.g. 92 bytes),
> +	 *          PAGE_SIZE bytes are charged
> +	 * 3) CPU1: a process from another memcg is allocating something,
> +	 *          the stock is flushed,
> +	 *          objcg->nr_charged_bytes = PAGE_SIZE - 92
> +	 * 4) CPU0: we release this object,
> +	 *          92 bytes are added to stock->nr_bytes
> +	 * 5) CPU0: stock is flushed,
> +	 *          92 bytes are added to objcg->nr_charged_bytes
> +	 *
> +	 * As a result, nr_charged_bytes == PAGE_SIZE.
> +	 * This page will be uncharged in obj_cgroup_release().
> +	 */

Thanks for adding this.
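
As a sanity check, the arithmetic in that sequence can be replayed with
plain integers. A quick userspace sketch (the 92-byte figure, the step
numbers and the variable names are taken from the comment above; nothing
here is from the actual kernel code):

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
	unsigned int nr_charged_bytes = 0;	/* models objcg->nr_charged_bytes */
	unsigned int stock_nr_bytes = 0;	/* models the per-cpu stock->nr_bytes */
	const unsigned int alloc = 92;		/* the small allocation on CPU1 */

	/* 2) CPU1: small allocation, a whole page is charged up front */
	stock_nr_bytes = PAGE_SIZE - alloc;

	/* 3) CPU1: another memcg allocates, the stock is flushed */
	nr_charged_bytes += stock_nr_bytes;	/* PAGE_SIZE - 92 */
	stock_nr_bytes = 0;

	/* 4) CPU0: the object is released, its bytes go back to the stock */
	stock_nr_bytes += alloc;

	/* 5) CPU0: the stock is flushed again */
	nr_charged_bytes += stock_nr_bytes;
	stock_nr_bytes = 0;

	/* exactly one page is left charged, to be uncharged at release */
	assert(nr_charged_bytes == PAGE_SIZE);
	printf("leftover: %u bytes\n", nr_charged_bytes);
	return 0;
}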

> +int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
> +{
> +	struct mem_cgroup *memcg;
> +	unsigned int nr_pages, nr_bytes;
> +	int ret;
> +
> +	if (consume_obj_stock(objcg, size))
> +		return 0;
> +
> +	rcu_read_lock();
> +	memcg = obj_cgroup_memcg(objcg);
> +	css_get(&memcg->css);
> +	rcu_read_unlock();

Can you please also add the comment here that I mentioned last time? It
should explain why we're not checking objcg->nr_charged_bytes even when
it may already hold pre-charged bytes that could satisfy the allocation.
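
Something like the sketch below is roughly what I have in mind. The
wording is only my guess at the rationale (draining the shared counter
here would cost extra atomics, and per the comment above any leftover is
uncharged in obj_cgroup_release() anyway), so please adjust it to the
actual reasoning:

	if (consume_obj_stock(objcg, size))
		return 0;

	/*
	 * (Suggested wording only.) In theory objcg->nr_charged_bytes
	 * may already hold enough pre-charged bytes to satisfy this
	 * allocation, but draining it here would mean extra atomic
	 * operations on a shared counter, and whatever is left over
	 * gets uncharged in obj_cgroup_release() anyway.
	 */
	rcu_read_lock();
	memcg = obj_cgroup_memcg(objcg);
	css_get(&memcg->css);
	rcu_read_unlock();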

Otherwise, looks good to me.
