Message-ID: <20150506142951.GC29387@esperanza>
Date: Wed, 6 May 2015 17:29:51 +0300
From: Vladimir Davydov <vdavydov@...allels.com>
To: Michal Hocko <mhocko@...e.cz>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Greg Thelen <gthelen@...gle.com>, <linux-mm@...ck.org>,
<cgroups@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] gfp: add __GFP_NOACCOUNT
On Wed, May 06, 2015 at 03:55:20PM +0200, Michal Hocko wrote:
> On Wed 06-05-15 16:25:10, Vladimir Davydov wrote:
> > On Wed, May 06, 2015 at 02:35:41PM +0200, Michal Hocko wrote:
[...]
> > > NOACCOUNT doesn't imply kmem at all so it is not clear who is in charge
> > > of the accounting.
> >
> > IMO it is a benefit. If one day for some reason we want to bypass memcg
> > accounting for some other type of allocation somewhere, we can simply
> > reuse it.
>
> But what if somebody, say a high-level memory allocator in the kernel,
> wants to (ab)use this flag for its internal purposes as well?
We won't let him :-)
If we take your argument about future (ab)users seriously, we should
also consider what will happen if one wants to use e.g. __GFP_HARDWALL,
which BTW has a generic name too although it's cpuset-specific.
My point is that memcg is the only kernel subsystem that tries to do full
memory accounting, and there is no point in introducing another one,
because we already have it. So we have every right to reserve
__GFP_NOACCOUNT for our purposes, just like cpuset reserves
__GFP_HARDWALL and kmemcheck reserves __GFP_NOTRACK. Any newcomer must
take this into account.
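
FWIW, here is a minimal user-space model of the kind of check I have in
mind. Everything below is purely illustrative: the names, the bit values
and the kmem_charge() helper are made up for the example; they are not
the kernel's. The single accounting layer tests one bit in the gfp mask
and skips charging when the caller sets it.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_WAIT	0x01u	/* stand-in for the usual GFP_KERNEL bits */
#define __GFP_NOACCOUNT	0x02u	/* "do not charge this allocation" */

/*
 * Model of the single accounting hook: skip charging entirely when the
 * caller asked for it, otherwise charge the current cgroup.
 */
static bool kmem_charge(gfp_t gfp, size_t size)
{
	if (gfp & __GFP_NOACCOUNT)
		return true;		/* bypass accounting */
	printf("charging %zu bytes to the current cgroup\n", size);
	return true;			/* assume the charge succeeded */
}

int main(void)
{
	kmem_charge(__GFP_WAIT, 4096);				/* accounted */
	kmem_charge(__GFP_WAIT | __GFP_NOACCOUNT, 4096);	/* not accounted */
	return 0;
}

The only point is that a single, well-known bit is tested in a single
place; which subsystem owns that place is exactly what reserving the
flag settles.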
Thanks,
Vladimir