Message-ID: <20121107122813.GA4968@lizard>
Date: Wed, 7 Nov 2012 04:28:13 -0800
From: Anton Vorontsov <anton.vorontsov@...aro.org>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Mel Gorman <mgorman@...e.de>, Pekka Enberg <penberg@...nel.org>,
Leonid Moiseichuk <leonid.moiseichuk@...ia.com>,
KOSAKI Motohiro <kosaki.motohiro@...il.com>,
Minchan Kim <minchan@...nel.org>,
Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
John Stultz <john.stultz@...aro.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linaro-kernel@...ts.linaro.org,
patches@...aro.org, kernel-team@...roid.com,
linux-man@...r.kernel.org, Glauber Costa <glommer@...allels.com>
Subject: Re: [RFC v3 0/3] vmpressure_fd: Linux VM pressure notifications
On Wed, Nov 07, 2012 at 02:11:10PM +0200, Kirill A. Shutemov wrote:
[...]
> > We can have plenty of "free" memory, of which say 90% will be caches,
> > and say 10% idle. But we do want to differentiate these types of memory
> > (although not going into details about it), i.e. we want to get
> > notified when the kernel is reclaiming. And we also want to know when the
> > memory comes from swapping others' pages out (well, actually we don't
> > call it swap, it's "new allocations cost becomes high" -- it might be a
> > result of many factors (swapping, fragmentation, etc.) -- and userland
> > might analyze the situation when this happens).
> >
> > Exposing all the VM details to userland is not an option
>
> IIUC, you want MemFree + Buffers + Cached + SwapCached, right?
> It's already exposed to userspace.
How? If you mean vmstat, then no, that interface is not efficient at all:
we have to poll it from userland, which is a no-go for embedded (although,
as a workaround, it can be done via deferrable timers in userland, which I
posted a few months ago).
But even with polling vmstat via deferrable timers, it leaves us with the
ugly timers-based approach (and no way to catch pre-OOM conditions).
With vmpressure_fd() we get synchronous notifications straight from the
core (upon which, if you want to, you can analyze vmstat).
>> 2. The last time I checked, cgroups memory controller did not (and I guess
>> still does not) account kernel-owned slabs. I asked several times
>> why so, but nobody answered.
>
> Almost there. Glauber works on it.
It's good to hear, but still, the number of "used KBs" is a bad (or
irrelevant) metric for pressure. We'd still need to analyze the memory
in more detail, and "'limit - used' KBs" tells us nothing about the
cost of the available memory.
Thanks,
Anton.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/