Message-ID: <20151209143258.GA21506@cmpxchg.org>
Date: Wed, 9 Dec 2015 09:32:58 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Vladimir Davydov <vdavydov@...tuozzo.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.cz>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH 7/8] mm: memcontrol: account "kmem" consumers in cgroup2 memory controller
On Wed, Dec 09, 2015 at 02:30:38PM +0300, Vladimir Davydov wrote:
> On Tue, Dec 08, 2015 at 01:34:24PM -0500, Johannes Weiner wrote:
> > The original cgroup memory controller has an extension to account slab
> > memory (and other "kernel memory" consumers) in a separate "kmem"
> > counter, once the user sets an explicit limit on that "kmem" pool.
> >
> > However, this includes various consumers whose sizes are directly
> > linked to userspace activity. Accounting them as an optional "kmem"
> > extension is problematic for several reasons:
> >
> > 1. It leaves the main memory interface with incomplete semantics. A
> > user who puts their workload into a cgroup and configures a memory
> > limit does not expect us to leave holes in the containment as big
> > as the dentry and inode cache, or the kernel stack pages.
> >
> > 2. If the limit set on this random historical subgroup of consumers is
> > reached, subsequent allocations will fail even when the main memory
> > pool available to the cgroup is not yet exhausted and/or has
> > reclaimable memory in it.
> >
> > 3. Calling it 'kernel memory' is misleading. The dentry and inode
> > caches are no more 'kernel' (or no less 'user') memory than the
> > page cache itself. Treating these consumers as different classes is
> > a historical implementation detail that should not leak to users.
> >
> > So, in addition to page cache, anonymous memory, and network socket
> > memory, account the following memory consumers per default in the
> > cgroup2 memory controller:
> >
> > - threadinfo
> > - task_struct
> > - task_delay_info
> > - pid
> > - cred
> > - mm_struct
> > - vm_area_struct and vm_region (nommu)
> > - anon_vma and anon_vma_chain
> > - signal_struct
> > - sighand_struct
> > - fs_struct
> > - files_struct
> > - fdtable and fdtable->full_fds_bits
> > - dentry and external_name
> > - inode for all filesystems.
> >
> > This should give us reasonable memory isolation for most common
> > workloads out of the box.
> >
> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
>
> Acked-by: Vladimir Davydov <vdavydov@...tuozzo.com>
Thank you!
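
As a side note for anyone reading along: most of the consumers in the
list above are opted into accounting at their allocation sites, e.g. by
creating their slab caches with the SLAB_ACCOUNT flag so that objects
get charged to the allocating task's memory cgroup. A minimal sketch of
that pattern (the struct, cache, and function names below are made up
for illustration, not taken from the patch):

	#include <linux/slab.h>
	#include <linux/gfp.h>

	struct foo_object {
		unsigned long data[8];
	};

	static struct kmem_cache *foo_cachep;

	static int __init foo_cache_init(void)
	{
		/*
		 * SLAB_ACCOUNT: objects allocated from this cache are
		 * charged to the memory cgroup of the allocating task.
		 */
		foo_cachep = kmem_cache_create("foo_object",
					       sizeof(struct foo_object), 0,
					       SLAB_ACCOUNT, NULL);
		return foo_cachep ? 0 : -ENOMEM;
	}

	static struct foo_object *foo_alloc(void)
	{
		/* Charged against the caller's cgroup, uncharged on free. */
		return kmem_cache_alloc(foo_cachep, GFP_KERNEL);
	}
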
> The patch looks good to me, but I think we still need to add a boot-time
> knob to disable kmem accounting, as we do for sockets:
>
> From: Vladimir Davydov <vdavydov@...tuozzo.com>
> Subject: [PATCH] mm: memcontrol: allow to disable kmem accounting for cgroup2
>
> Kmem accounting might incur overhead that some users can't put up with.
> Besides, the implementation is still considered unstable. So let's
> provide a way to disable it for those users who aren't happy with it.
>
> To disable kmem accounting for cgroup2, pass cgroup.memory=nokmem at
> boot time.
>
> Signed-off-by: Vladimir Davydov <vdavydov@...tuozzo.com>
Acked-by: Johannes Weiner <hannes@...xchg.org>
Especially in the early release phases, there might be birthing pains
that users in the field will want to work around. And I'd rather they
be able to selectively disable problematic parts during the transition
than switch back wholesale to the old cgroup interface.
For me that would be the prime reason: a temporary workaround for
legacy users until we get our stuff sorted out. Unacceptable overhead
or instability would be something we would have to address anyway.
And then it's fine too that the flag continues to use the historic
misnomer "kmem".
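
For reference, a boot parameter like this is typically wired up with a
__setup() handler that just flips a flag which the memcg code checks
before enabling kmem accounting. A rough sketch, assuming a flag named
cgroup_memory_nokmem (illustrative only; the actual patch may differ):

	#include <linux/init.h>
	#include <linux/string.h>
	#include <linux/types.h>

	static bool cgroup_memory_nokmem;

	static int __init cgroup_memory(char *s)
	{
		char *token;

		/* cgroup.memory= takes a comma-separated list of options */
		while ((token = strsep(&s, ",")) != NULL) {
			if (!*token)
				continue;
			if (!strcmp(token, "nokmem"))
				cgroup_memory_nokmem = true;
		}
		return 1;	/* handled; don't pass the option on to init */
	}
	__setup("cgroup.memory=", cgroup_memory);

	/*
	 * The memcg code would then check cgroup_memory_nokmem before
	 * activating kmem accounting for a newly created cgroup.
	 */
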