Message-ID: <20180425164845.GA7223@castle>
Date: Wed, 25 Apr 2018 17:48:53 +0100
From: Roman Gushchin <guro@...com>
To: Vlastimil Babka <vbabka@...e.cz>
CC: Vijayanand Jitta <vjitta@...eaurora.org>,
vinayak menon <vinayakm.list@...il.com>, <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
<linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<kernel-team@...com>, Linux API <linux-api@...r.kernel.org>
Subject: Re: [PATCH 1/3] mm: introduce NR_INDIRECTLY_RECLAIMABLE_BYTES

On Wed, Apr 25, 2018 at 05:47:26PM +0200, Vlastimil Babka wrote:
> On 04/25/2018 02:52 PM, Roman Gushchin wrote:
> > On Wed, Apr 25, 2018 at 09:19:29AM +0530, Vijayanand Jitta wrote:
> >>>>>> Idk, I don't like the idea of adding a counter outside of the vm counters
> >>>>>> infrastructure, and I definitely wouldn't touch the exposed
> >>>>>> nr_slab_reclaimable and nr_slab_unreclaimable fields.
> >>>>>
> >>>>> We would be just making the reported values more precise wrt reality.
> >>>>
> >>>> It depends on whether we believe that only slab memory can be
> >>>> reclaimable. If so, this is true; otherwise it is not.
> >>>>
> >>>> My guess is that some drivers (e.g. networking) might have buffers
> >>>> that are reclaimable under memory pressure and are allocated using
> >>>> the page allocator. But I have to look closer...
> >>>>
> >>>
> >>> One such case I have encountered is that of the ION page pool. The page pool
> >>> registers a shrinker. When there is no memory pressure, the page pool can
> >>> grow large and thus cause an mmap to fail when OVERCOMMIT_GUESS is set. I can send
> >>> a patch to account ION page pool pages in NR_INDIRECTLY_RECLAIMABLE_BYTES.
>
> FYI, we have discussed this at LSF/MM and agreed to try the kmalloc
> reclaimable caches idea. The existing counter could then remain for page
> allocator users such as ION. It's a bit weird to have it in bytes and
> not pages then, IMHO. What if we hid it from /proc/vmstat now so it
> doesn't become ABI, and later converted it to page granularity and exposed
> it under a name such as "nr_other_reclaimable"?

I've nothing against hiding it from /proc/vmstat, as long as we keep
the counter in place and the main issue resolved.
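
To make the "keep the counter in place" part concrete, here is a minimal
sketch of how a page allocator user like the ION page pool quoted above
could feed it (the helper name and hook placement are made up, not an
existing ION function; it assumes <linux/mm.h> and <linux/vmstat.h>):

/*
 * Sketch only: the pool bumps the node counter when it parks a page
 * and drops it again when the page leaves the pool, either because it
 * gets reused or because the shrinker reclaims it.
 */
static void pool_account_page(struct page *page, unsigned int order,
                              bool added)
{
        long bytes = PAGE_SIZE << order;

        mod_node_page_state(page_pgdat(page),
                            NR_INDIRECTLY_RECLAIMABLE_BYTES,
                            added ? bytes : -bytes);
}

With something like that in the pool's add/remove paths the counter keeps
reflecting the pool size whether or not it is ever printed in /proc/vmstat.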

Maybe it's better to add nr_reclaimable = nr_slab_reclaimable + nr_other_reclaimable,
which would have a simpler meaning than nr_other_reclaimable (what is "other"?).
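
Just to illustrate, a sketch of what I mean (nr_reclaimable_pages() is
only the name floated here, nothing exists under it yet), assuming the
indirect counter stays byte-based internally:

/*
 * Sketch only: one "how much is reclaimable" number combining slab and
 * non-slab sources, converting the byte-based counter to pages.
 */
static unsigned long nr_reclaimable_pages(void)
{
        unsigned long pages;

        pages = global_node_page_state(NR_SLAB_RECLAIMABLE);
        pages += global_node_page_state(NR_INDIRECTLY_RECLAIMABLE_BYTES) >>
                 PAGE_SHIFT;

        return pages;
}

That is the number callers like the overcommit check actually want, and
nobody has to guess what "other" covers.
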
Thanks!