Message-ID: <54061505.8020500@sr71.net>
Date:	Tue, 02 Sep 2014 12:05:41 -0700
From:	Dave Hansen <dave@...1.net>
To:	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.com>,
	Hugh Dickins <hughd@...gle.com>, Tejun Heo <tj@...nel.org>,
	Vladimir Davydov <vdavydov@...allels.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Linux-MM <linux-mm@...ck.org>
Subject: regression caused by cgroups optimization in 3.17-rc2

I'm seeing a pretty large regression in 3.17-rc2 vs 3.16 coming from the
memory cgroups code.  This is on a kernel with cgroups enabled at
compile time, but not _used_ for anything.  See the green lines in the
graph:

	https://www.sr71.net/~dave/intel/regression-from-05b843012.png

The workload is a little parallel microbenchmark doing page faults:

> https://github.com/antonblanchard/will-it-scale/blob/master/tests/page_fault2.c
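
For reference, the inner loop of that test boils down to something like
this (my simplified, single-threaded sketch with made-up sizes and no
error handling; the real harness adds the threading and per-iteration
counting):

	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define MEMSIZE (128UL * 1024 * 1024)

	int main(void)
	{
		char path[] = "/tmp/pf2-XXXXXX";
		int fd = mkstemp(path);

		unlink(path);
		ftruncate(fd, MEMSIZE);

		for (int i = 0; i < 16; i++) {
			char *p = mmap(NULL, MEMSIZE,
				       PROT_READ | PROT_WRITE,
				       MAP_SHARED, fd, 0);

			/* Touch one byte per page so every page
			 * takes a (file-backed) fault. */
			for (unsigned long off = 0; off < MEMSIZE;
			     off += getpagesize())
				p[off] = 1;
			munmap(p, MEMSIZE);
		}
		return 0;
	}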

The hardware is an 8-socket Westmere box with 160 hardware threads.  For
some reason, the regression does not show up in the version of the
microbenchmark that does completely anonymous page faults.

I bisected it down to this commit:

> commit 05b8430123359886ef6a4146fba384e30d771b3f
> Author: Johannes Weiner <hannes@...xchg.org>
> Date:   Wed Aug 6 16:05:59 2014 -0700
> 
>     mm: memcontrol: use root_mem_cgroup res_counter
>     
>     Due to an old optimization to keep expensive res_counter changes at a
>     minimum, the root_mem_cgroup res_counter is never charged; there is no
>     limit at that level anyway, and any statistics can be generated on
>     demand by summing up the counters of all other cgroups.
>     
>     However, with per-cpu charge caches, res_counter operations do not even
>     show up in profiles anymore, so this optimization is no longer
>     necessary.
>     
>     Remove it to simplify the code.
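
For context, the "per-cpu charge caches" mentioned there batch charges
so the shared counter is only touched once per batch.  Roughly this
idea, shown as a userspace toy rather than the actual mm/memcontrol.c
code (names and batch size are made up):

	#include <stdatomic.h>
	#include <stdbool.h>

	#define CHARGE_BATCH 64

	/* Stands in for the shared res_counter usage. */
	static atomic_long shared_usage;
	/* Stands in for the per-cpu stock; thread-local here. */
	static _Thread_local long local_stock;

	static bool charge_one_page(void)
	{
		/* Fast path: consume from the CPU-local stock, no
		 * shared-memory traffic at all. */
		if (local_stock > 0) {
			local_stock--;
			return true;
		}
		/* Slow path: one atomic update covers a whole batch
		 * of future charges.  (Limit checking omitted.) */
		atomic_fetch_add(&shared_usage, CHARGE_BATCH);
		local_stock = CHARGE_BATCH - 1;
		return true;
	}

At 160 threads, even a slow path hit only once per batch apparently
still hurts when every page fault in the system funnels into the single
root counter.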

It does not revert cleanly because of the hunks below: the code those
hunks touch has since been removed.  I tried running the revert without
properly merging them, and it spews warnings because counter->usage is
seen going negative.

So, it doesn't appear we can quickly revert this.

> --- mm/memcontrol.c
> +++ mm/memcontrol.c
> @@ -3943,7 +3947,7 @@
>          * replacement page, so leave it alone when phasing out the
>          * page that is unused after the migration.
>          */
> -       if (!end_migration)
> +       if (!end_migration && !mem_cgroup_is_root(memcg))
>                 mem_cgroup_do_uncharge(memcg, nr_pages, ctype);
>  
>         return memcg;
> @@ -4076,7 +4080,8 @@
>                  * We uncharge this because swap is freed.  This memcg can
>                  * be obsolete one. We avoid calling css_tryget_online().
>                  */
> -               res_counter_uncharge(&memcg->memsw, PAGE_SIZE);
> +               if (!mem_cgroup_is_root(memcg))
> +                       res_counter_uncharge(&memcg->memsw, PAGE_SIZE);
>                 mem_cgroup_swap_statistics(memcg, false);
>                 css_put(&memcg->css);
>         }
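
To illustrate why usage underflows in my partially-reverted kernel: the
charge side skips the root group again, but without the hunks above the
uncharge side does not.  A toy version (made-up names):

	#include <stdio.h>

	/* Stands in for root_mem_cgroup's counter->usage. */
	static long usage;

	static void charge(int is_root)
	{
		if (!is_root)		/* reverted: root never charged */
			usage += 4096;
	}

	static void uncharge(int is_root)
	{
		usage -= 4096;		/* missing !is_root check */
	}

	int main(void)
	{
		charge(1);		/* root page: charge skipped... */
		uncharge(1);		/* ...but uncharge is not */
		printf("root usage = %ld\n", usage);	/* -4096 */
		return 0;
	}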