Date:   Thu, 8 Aug 2019 14:21:46 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Roman Gushchin <guro@...com>
Cc:     <linux-mm@...ck.org>, Michal Hocko <mhocko@...nel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        <linux-kernel@...r.kernel.org>, <kernel-team@...com>
Subject: Re: [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining

On Thu, 8 Aug 2019 13:36:04 -0700 Roman Gushchin <guro@...com> wrote:

> I've noticed that the "slab" value in memory.stat is sometimes 0,
> even if some child memory cgroups have a non-zero "slab" value.
> Further investigation showed that this is the result of kmem_cache
> reparenting combined with the per-cpu batching of slab vmstats.
> 
> At offlining, some vmstat values may be left in the percpu cache,
> not yet propagated up the cgroup hierarchy. This means that stats
> at ancestor levels are lower than the actual values. Later, when
> the slab pages are released, the precise number of pages is
> subtracted at the parent level, making the value negative. Negative
> values are not shown; 0 is printed instead.
> 
> To fix this issue, let's flush the percpu slab counters, both the
> memcg vmstats and the lruvec stats, on memcg offlining. This
> guarantees that the numbers at all ancestor levels are accurate and
> match the actual number of outstanding slab pages.
> 
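For concreteness, the failure mode described above can be sketched in
plain userspace C. The 24-page delta is an invented number, assumed to
be below the percpu batching threshold, and the clamp mirrors the
"negative values are not shown; 0 is printed" behavior:

#include <stdio.h>

int main(void)
{
	long parent_slab = 0;	/* parent-level atomic counter */
	long child_delta = 24;	/* child's cached percpu delta, below
				 * the batching threshold, so never
				 * folded into the parent counter */

	/*
	 * Child offlining reparents its kmem_caches but discards the
	 * cached delta instead of flushing it upward: parent_slab is
	 * now child_delta pages lower than reality.
	 */

	/*
	 * The reparented slab pages are freed later; the precise count
	 * is subtracted at the parent level, driving it negative.
	 */
	parent_slab -= child_delta;

	/* memory.stat clamps negative values to 0 */
	printf("slab %ld\n", parent_slab > 0 ? parent_slab : 0);
	return 0;
}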

Looks expensive.  How frequently can these functions be called?

> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3412,6 +3412,50 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
>  	return 0;
>  }
>  
> +static void memcg_flush_slab_node_stats(struct mem_cgroup *memcg, int node)
> +{
> +	struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
> +	struct mem_cgroup_per_node *pi;
> +	unsigned long recl = 0, unrecl = 0;
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		recl += per_cpu(
> +			pn->lruvec_stat_cpu->count[NR_SLAB_RECLAIMABLE], cpu);
> +		unrecl += per_cpu(
> +			pn->lruvec_stat_cpu->count[NR_SLAB_UNRECLAIMABLE], cpu);
> +	}
> +
> +	for (pi = pn; pi; pi = parent_nodeinfo(pi, node)) {
> +		atomic_long_add(recl,
> +				&pi->lruvec_stat[NR_SLAB_RECLAIMABLE]);
> +		atomic_long_add(unrecl,
> +				&pi->lruvec_stat[NR_SLAB_UNRECLAIMABLE]);
> +	}
> +}
> +
> +static void memcg_flush_slab_vmstats(struct mem_cgroup *memcg)
> +{
> +	struct mem_cgroup *mi;
> +	unsigned long recl = 0, unrecl = 0;
> +	int node, cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		recl += per_cpu(
> +			memcg->vmstats_percpu->stat[NR_SLAB_RECLAIMABLE], cpu);
> +		unrecl += per_cpu(
> +			memcg->vmstats_percpu->stat[NR_SLAB_UNRECLAIMABLE], cpu);
> +	}
> +
> +	for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
> +		atomic_long_add(recl, &mi->vmstats[NR_SLAB_RECLAIMABLE]);
> +		atomic_long_add(unrecl, &mi->vmstats[NR_SLAB_UNRECLAIMABLE]);
> +	}
> +
> +	for_each_node(node)
> +		memcg_flush_slab_node_stats(memcg, node);

This loops across all possible CPUs once for each possible node.  Ouch.

Implementing hotplug handlers in here (which is surprisingly simple)
brings this down to num_online_nodes * num_online_cpus, which is, I
think, potentially vastly better.
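
A rough sketch of that direction (not the actual follow-up patch):
mm/memcontrol.c already registers a CPUHP_MM_MEMCG_DEAD callback that
drains the per-cpu charge stock, and the same callback could fold the
dying CPU's cached deltas into the atomic counters.
memcg_flush_offline_cpu() below is a hypothetical helper; drain_stock(),
memcg_stock and for_each_mem_cgroup() are existing mm/memcontrol.c
internals.

static int memcg_hotplug_cpu_dead(unsigned int cpu)
{
	struct memcg_stock_pcp *stock;
	struct mem_cgroup *memcg;

	/* existing behavior: return the cached charge to the counters */
	stock = &per_cpu(memcg_stock, cpu);
	drain_stock(stock);

	/*
	 * Sketched addition: propagate the dead CPU's cached vmstat
	 * deltas upward, so the offlining flush only ever has to
	 * visit online CPUs.
	 */
	for_each_mem_cgroup(memcg)
		memcg_flush_offline_cpu(memcg, cpu);	/* hypothetical */

	return 0;
}

With something like that in place, the for_each_possible_cpu() loops in
the patch could become for_each_online_cpu(), since counters on already
offlined CPUs would have been folded upward when those CPUs died.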
