Message-Id: <20180827140143.98b65bc7cb32f50245eb9114@linux-foundation.org>
Date: Mon, 27 Aug 2018 14:01:43 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Roman Gushchin <guro@...com>
Cc: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<kernel-team@...com>, Shakeel Butt <shakeelb@...gle.com>,
Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Andy Lutomirski <luto@...nel.org>,
Konstantin Khlebnikov <koct9i@...il.com>,
Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v3 1/3] mm: rework memcg kernel stack accounting
On Mon, 27 Aug 2018 09:26:19 -0700 Roman Gushchin <guro@...com> wrote:
> If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> using __vmalloc_node_range() with __GFP_ACCOUNT, so kernel
> stack pages are charged against the corresponding memory
> cgroup on allocation and uncharged when they are released.
>
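For anyone following along, the allocation path in kernel/fork.c looks
roughly like the sketch below (simplified, from memory, not the exact
tree): THREADINFO_GFP expands to GFP_KERNEL_ACCOUNT | __GFP_ZERO, so the
__GFP_ACCOUNT charge happens when the stack is first vmalloced.

static unsigned long *alloc_thread_stack_node(struct task_struct *tsk,
					      int node)
{
	void *stack;

	/*
	 * THREADINFO_GFP carries __GFP_ACCOUNT, so every backing page
	 * is charged to the allocating task's memory cgroup here.
	 */
	stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN,
				     VMALLOC_START, VMALLOC_END,
				     THREADINFO_GFP, PAGE_KERNEL,
				     0, node, __builtin_return_address(0));

	/* Cache the vm_struct; find_vm_area() can't be called from
	 * interrupt context, but free_thread_stack() can be. */
	if (stack)
		tsk->stack_vm_area = find_vm_area(stack);
	return stack;
}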
> The problem is that we cache kernel stacks in small per-cpu
> caches and reuse them for new tasks, which can belong to
> different memory cgroups.
>
> Each stack page still holds a reference to the original cgroup,
> so the cgroup can't be released until the vmap area is released.
>
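The cache in question is the tiny per-cpu array in kernel/fork.c:
free_thread_stack() parks the whole vm area there instead of vfreeing
it, and the backing pages keep their page->mem_cgroup pointers while
parked. Roughly (again simplified):

#define NR_CACHED_STACKS 2
static DEFINE_PER_CPU(struct vm_struct *, cached_stacks[NR_CACHED_STACKS]);

static inline void free_thread_stack(struct task_struct *tsk)
{
	int i;

	/*
	 * Park the stack for reuse by the next fork on this cpu.  The
	 * pages stay charged, so the old cgroup stays pinned for as
	 * long as the stack sits in the cache.
	 */
	for (i = 0; i < NR_CACHED_STACKS; i++) {
		if (this_cpu_cmpxchg(cached_stacks[i], NULL,
				     tsk->stack_vm_area) != NULL)
			continue;
		return;
	}

	vfree_atomic(tsk->stack);
}

NR_CACHED_STACKS == 2 is also where the factor of two in the estimate
below comes from.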
> For the vmap area to be released, we need more than two
> subsequent exits without forks in between on the current cpu,
> which makes it very unlikely in practice. As a result, I saw
> a significant number of dying cgroups (in theory, up to
> 2 * number_of_cpus + number_of_tasks), which can't be released
> even under significant memory pressure.
>
> Since a cgroup structure can take a significant amount of
> memory (first of all, per-cpu data such as memcg statistics),
> this leads to a noticeable waste of memory.
OK, but this doesn't describe how the patch addresses this issue?
>
> ...
>
> @@ -371,6 +382,35 @@ static void account_kernel_stack(struct task_struct *tsk, int account)
> }
> }
>
> +static int memcg_charge_kernel_stack(struct task_struct *tsk)
> +{
> +#ifdef CONFIG_VMAP_STACK
> + struct vm_struct *vm = task_stack_vm_area(tsk);
> + int ret;
> +
> + if (vm) {
> + int i;
> +
> + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
Can we ever have THREAD_SIZE < PAGE_SIZE? 64k pages?
> + /*
> + * If memcg_kmem_charge() fails, page->mem_cgroup
> + * pointer is NULL, and both memcg_kmem_uncharge()
> + * and mod_memcg_page_state() in free_thread_stack()
> + * will ignore this page. So it's safe.
> + */
> + ret = memcg_kmem_charge(vm->pages[i], GFP_KERNEL, 0);
> + if (ret)
> + return ret;
> +
> + mod_memcg_page_state(vm->pages[i],
> + MEMCG_KERNEL_STACK_KB,
> + PAGE_SIZE / 1024);
> + }
> + }
> +#endif
> + return 0;
> +}
>
> ...
>
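If I'm reading the rest of the series correctly, the counterpart to the
above sits in free_thread_stack(): each page is uncharged and the stat
adjusted before the stack is parked in the per-cpu cache, so a cached
stack no longer pins its old cgroup and gets recharged to whatever task
reuses it. A sketch of what I'd expect the free path to look like:

static inline void free_thread_stack(struct task_struct *tsk)
{
#ifdef CONFIG_VMAP_STACK
	struct vm_struct *vm = task_stack_vm_area(tsk);

	if (vm) {
		int i;

		for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
			mod_memcg_page_state(vm->pages[i],
					     MEMCG_KERNEL_STACK_KB,
					     -(int)(PAGE_SIZE / 1024));

			/* A no-op for pages whose charge failed
			 * (NULL page->mem_cgroup), matching the
			 * comment in the charge path above. */
			memcg_kmem_uncharge(vm->pages[i], 0);
		}
	}
#endif
	/* ... then park the stack in the per-cpu cache or vfree it. */
}

Assuming that's the shape of it, spelling it out in the changelog would
address my question above.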