Message-ID: <20200911213602.GC1163084@carbon.dhcp.thefacebook.com>
Date: Fri, 11 Sep 2020 14:36:02 -0700
From: Roman Gushchin <guro@...com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Kernel Team <kernel-team@...com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH rfc 0/5] mm: allow mapping accounted kernel pages to
userspace
On Fri, Sep 11, 2020 at 10:34:57AM -0700, Shakeel Butt wrote:
> On Fri, Sep 11, 2020 at 10:34 AM Shakeel Butt <shakeelb@...gle.com> wrote:
> >
> > On Thu, Sep 10, 2020 at 1:27 PM Roman Gushchin <guro@...com> wrote:
> > >
> > > Currently a non-slab kernel page which has been charged to a memory
> > > cgroup can't be mapped to userspace. The underlying reason is simple:
> > > the PageKmemcg flag is defined as a page type (like buddy, offline,
> > > etc.), so it takes a bit from the page->_mapcount counter. Pages with
> > > a type set can't be mapped to userspace.
> > >
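For reference, here is a rough, compilable sketch of the page-type encoding, paraphrasing include/linux/page-flags.h; the constant values are illustrative and may differ between kernel versions:

/* sketch.c - illustration of how page types overlay _mapcount;
 * constants are approximate, check page-flags.h for your kernel */
#include <stdio.h>

#define PAGE_TYPE_BASE        0xf0000000u
#define PAGE_MAPCOUNT_RESERVE (-128)
#define PG_kmemcg             0x00000200u   /* the bit this series removes */

/* in struct page, page_type shares storage with _mapcount */
union map_word {
	int          _mapcount;   /* -1 means "not mapped to userspace" */
	unsigned int page_type;   /* starts at 0xffffffff; setting a type clears its bit */
};

static int page_has_type(union map_word w)
{
	/* a typed page still looks like a large negative mapcount */
	return w._mapcount < PAGE_MAPCOUNT_RESERVE;
}

int main(void)
{
	/* "set" PageKmemcg: clear the PG_kmemcg bit in page_type */
	union map_word w = { .page_type = 0xffffffffu & ~PG_kmemcg };

	/* PageKmemcg() is roughly: (page_type & (BASE | flag)) == BASE */
	int kmemcg = (w.page_type & (PAGE_TYPE_BASE | PG_kmemcg)) == PAGE_TYPE_BASE;

	printf("has_type=%d kmemcg=%d _mapcount=%d\n",
	       page_has_type(w), kmemcg, w._mapcount);

	/* the same word can't also hold a real mapcount, which is why
	 * pages with a type set can't be mapped to userspace */
	return 0;
}
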
> > > But in general the kmemcg flag has nothing to do with mapping to
> > > userspace. It only means that the page has been accounted by the page
> > > allocator, so it has to be properly uncharged on release.
> > >
> > > Some bpf maps map vmalloc-based memory to userspace, and their
> > > memory can't be accounted because of this implementation detail.
> > >
> > > This patchset removes this limitation by moving the PageKmemcg flag
> > > into one of the free bits of the page->mem_cgroup pointer. It also
> > > formalizes all accesses to page->mem_cgroup and page->obj_cgroups
> > > through new helpers, adds several checks, and removes a couple of
> > > obsolete functions. As a result, the code becomes more robust, with
> > > fewer open-coded bit tricks.
> > >
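A minimal sketch of the pointer-tagging idea (the struct, field, and helper names here are illustrative, not necessarily what the series introduces): mem_cgroup pointers are at least word-aligned, so a low bit is free to carry the kmem flag, and the accessors simply mask it off:

/* sketch of carrying the kmem flag in the mem_cgroup pointer's low bit;
 * names (page_sketch, set_page_memcg_kmem, ...) are illustrative only */
#include <stdbool.h>

struct mem_cgroup;			/* opaque here */

#define MEMCG_KMEM_BIT	0x1UL		/* free because the pointer is word-aligned */

struct page_sketch {
	unsigned long mem_cgroup;	/* pointer value, possibly with the flag bit set */
};

static inline struct mem_cgroup *page_memcg(const struct page_sketch *page)
{
	return (struct mem_cgroup *)(page->mem_cgroup & ~MEMCG_KMEM_BIT);
}

static inline bool PageKmemcg(const struct page_sketch *page)
{
	return page->mem_cgroup & MEMCG_KMEM_BIT;
}

static inline void set_page_memcg_kmem(struct page_sketch *page,
				       struct mem_cgroup *memcg)
{
	page->mem_cgroup = (unsigned long)memcg | MEMCG_KMEM_BIT;
}

/* page->_mapcount is no longer involved, so an accounted kernel page
 * can be mapped to userspace like any other page */

Making helpers like these the only way to touch page->mem_cgroup is also what allows the extra sanity checks mentioned above.
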
> > > The first patch in the series is a bugfix, which I already sent
> > > separately. I'm including it in the rfc so that the whole series
> > > compiles.
> > >
> > >
> >
> > This would be a really beneficial feature. I tried to fix a similar
> > issue for kvm_vcpu_mmap [1], but by using an actual page flag bit;
> > your solution would be non-controversial.
> >
> > I think this might also help with the accounting of TCP zerocopy
> > receive mmapped memory. The memory is charged in skbs, but once it is
> > mmapped, the skbs get uncharged and we can end up with a very large
> > amount of uncharged memory.
> >
> > I will take a look at the series.
>
> [1] https://lore.kernel.org/kvm/20190329012836.47013-1-shakeelb@google.com/
Cool, thank you for the link!
It's very nice that this feature is useful beyond the bpf case.
Thanks!