Message-ID: <20151118162256.GK19145@dhcp22.suse.cz>
Date:	Wed, 18 Nov 2015 17:22:56 +0100
From:	Michal Hocko <mhocko@...nel.org>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	David Miller <davem@...emloft.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Tejun Heo <tj@...nel.org>, netdev@...r.kernel.org,
	linux-mm@...ck.org, cgroups@...r.kernel.org,
	linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH 13/14] mm: memcontrol: account socket memory in unified
 hierarchy memory controller

On Mon 16-11-15 13:18:10, Johannes Weiner wrote:
> On Mon, Nov 16, 2015 at 04:59:25PM +0100, Michal Hocko wrote:
> > On Thu 12-11-15 18:41:32, Johannes Weiner wrote:
> > > Socket memory can be a significant share of overall memory consumed by
> > > common workloads. In order to provide reasonable resource isolation in
> > > the unified hierarchy, this type of memory needs to be included in the
> > > tracking/accounting of a cgroup under active memory resource control.
> > > 
> > > Overhead is only incurred when a non-root control group is created AND
> > > the memory controller is instructed to track and account the memory
> > > footprint of that group. cgroup.memory=nosocket can be specified on
> > > the boot commandline to override any runtime configuration and
> > > forcibly exclude socket memory from active memory resource control.
> > 
> > Do you have any numbers about the overhead?
> 
> Hm? Performance numbers make sense when you have a specific scenario
> and a theory on how to optimize the implementation for it.

The fact that there was a strong push to use static branches to keep
the code out of line and reduce the overhead before the feature was
merged shows that people are sensitive to network performance and that
significant effort has gone into eliminating that overhead. My point is
that you are now enabling the feature for all memcg users in the
unified hierarchy without providing a performance impact overview that
users could use to judge whether to keep it enabled or disable it
before they start seeing regressions, or that would make a regression
easier to track down once it happens.
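
For reference, the pattern those static branches implement looks
roughly like the sketch below. This is illustrative only: the symbol
names (memcg_sockets_key, memcg_try_charge_sk, memcg_sockets_enable,
mem_cgroup_charge_skmem) are assumptions for the example, not
necessarily the exact symbols used in this series.

/* Illustrative sketch; names are assumed, not taken from the series. */
#include <linux/jump_label.h>
#include <linux/memcontrol.h>

/* Patched out (a no-op) until socket accounting is first enabled. */
static DEFINE_STATIC_KEY_FALSE(memcg_sockets_key);

/* Network fast path: the charge call stays out of line while disabled. */
static inline bool memcg_try_charge_sk(struct mem_cgroup *memcg,
				       unsigned int nr_pages)
{
	if (static_branch_unlikely(&memcg_sockets_key))
		return mem_cgroup_charge_skmem(memcg, nr_pages);
	return true;
}

/* Flipped once, when a non-root memcg starts tracking socket memory. */
void memcg_sockets_enable(void)
{
	static_branch_enable(&memcg_sockets_key);
}

With the key disabled, the check is a single patched no-op on the fast
path, so whatever cost remains only shows up once a cgroup actually
enables socket accounting -- which is exactly the case worth measuring.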

> What load would you test and what would be the baseline to compare it
> to?

It seems like netperf with a stream load, running in a memcg with no
limits vs. in the root memcg (and no other cgroups), should give at
least a hint about the runtime overhead, no?
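
Something like the throwaway harness below would do for the memcg leg
of that comparison. It is a minimal sketch under a few assumptions: the
unified hierarchy is mounted at /sys/fs/cgroup, the memory controller
is enabled in the parent's cgroup.subtree_control, a netserver is
already listening on localhost, and the cgroup name is made up for the
example.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

#define CGROUP "/sys/fs/cgroup/netperf-test"

/* Create the (limit-less) memcg and move ourselves into it. */
static void move_self_to_cgroup(void)
{
	FILE *f;

	if (mkdir(CGROUP, 0755) && errno != EEXIST) {
		perror("mkdir");
		exit(1);
	}
	f = fopen(CGROUP "/cgroup.procs", "w");
	if (!f) {
		perror("fopen");
		exit(1);
	}
	fprintf(f, "%d\n", (int)getpid());
	fclose(f);
}

int main(int argc, char **argv)
{
	/* Pass --memcg to run inside the child memcg; omit it for root. */
	if (argc > 1 && !strcmp(argv[1], "--memcg"))
		move_self_to_cgroup();

	/* Stream load against a local netserver. */
	execlp("netperf", "netperf", "-t", "TCP_STREAM", "-H", "127.0.0.1",
	       (char *)NULL);
	perror("execlp");
	return 1;
}

Running it once with --memcg and once without, against the same
netserver, and comparing the reported throughput (and CPU utilization)
of the two runs should give that first data point.
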
-- 
Michal Hocko
SUSE Labs
