[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YFy+iPiL1YbjjapV@cmpxchg.org>
Date: Thu, 25 Mar 2021 12:47:04 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Michal Hocko <mhocko@...e.com>
Cc: Arjun Roy <arjunroy@...gle.com>,
Arjun Roy <arjunroy.kdev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>, Linux MM <linux-mm@...ck.org>,
Shakeel Butt <shakeelb@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Yang Shi <shy828301@...il.com>, Roman Gushchin <guro@...com>
Subject: Re: [mm, net-next v2] mm: net: memcg accounting for TCP rx zerocopy
On Thu, Mar 25, 2021 at 10:02:28AM +0100, Michal Hocko wrote:
> On Wed 24-03-21 15:49:15, Arjun Roy wrote:
> > On Wed, Mar 24, 2021 at 2:24 PM Johannes Weiner <hannes@...xchg.org> wrote:
> > >
> > > On Wed, Mar 24, 2021 at 10:12:46AM +0100, Michal Hocko wrote:
> > > > On Tue 23-03-21 11:47:54, Arjun Roy wrote:
> > > > > On Tue, Mar 23, 2021 at 7:34 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > > >
> > > > > > On Wed 17-03-21 18:12:55, Johannes Weiner wrote:
> > > > > > [...]
> > > > > > > Here is an idea of how it could work:
> > > > > > >
> > > > > > > struct page already has
> > > > > > >
> > > > > > > struct { /* page_pool used by netstack */
> > > > > > > /**
> > > > > > > * @dma_addr: might require a 64-bit value even on
> > > > > > > * 32-bit architectures.
> > > > > > > */
> > > > > > > dma_addr_t dma_addr;
> > > > > > > };
> > > > > > >
> > > > > > > and as you can see from its union neighbors, there is quite a bit more
> > > > > > > room to store private data necessary for the page pool.
> > > > > > >
> > > > > > > When a page's refcount hits zero and it's a networking page, we can
> > > > > > > feed it back to the page pool instead of the page allocator.
> > > > > > >
> > > > > > > From a first look, we should be able to use the PG_owner_priv_1 page
> > > > > > > flag for network pages (see how this flag is overloaded, we can add a
> > > > > > > PG_network alias). With this, we can identify the page in __put_page()
> > > > > > > and __release_page(). These functions are already aware of different
> > > > > > > types of pages and do their respective cleanup handling. We can
> > > > > > > similarly make network a first-class citizen and hand pages back to
> > > > > > > the network allocator from in there.
> > > > > >
> > > > > > For compound pages we have a concept of destructors. Maybe we can extend
> > > > > > that for order-0 pages as well. The struct page is heavily packed and
> > > > > > compound_dtor shares the storage without other metadata
> > > > > > int pages; /* 16 4 */
> > > > > > unsigned char compound_dtor; /* 16 1 */
> > > > > > atomic_t hpage_pinned_refcount; /* 16 4 */
> > > > > > pgtable_t pmd_huge_pte; /* 16 8 */
> > > > > > void * zone_device_data; /* 16 8 */
> > > > > >
> > > > > > But none of those should really require to be valid when a page is freed
> > > > > > unless I am missing something. It would really require to check their
> > > > > > users whether they can leave the state behind. But if we can establish a
> > > > > > contract that compound_dtor can be always valid when a page is freed
> > > > > > this would be really a nice and useful abstraction because you wouldn't
> > > > > > have to care about the specific type of page.
> > >
> > > Yeah technically nobody should leave these fields behind, but it
> > > sounds pretty awkward to manage an overloaded destructor with a
> > > refcounted object:
> > >
> > > Either every put would have to check ref==1 before to see if it will
> > > be the one to free the page, and then set up the destructor before
> > > putting the final ref. But that means we can't support lockless
> > > tryget() schemes like we have in the page cache with a destructor.
>
> I do not follow the ref==1 part. I mean to use the hugetlb model where
> the destructore is configured for the whole lifetime until the page is
> freed back to the allocator (see below).
That only works if the destructor field doesn't overlap with a member
the page type itself doesn't want to use. Page types that do want to
use it would need to keep that field exclusive.
We couldn't use it for LRU pages e.g. because it overlaps with the
lru.next pointer. But if we bother with a 'generic' destructor for
order-0 pages the LRU pages would actually be a prime candidate for a
destructor: lru removal, memcg uncharging, page waiter cleanup...
The field is also kind of wasteful. There are only a few different
dtors but it occupies an entire byte. Page types don't necessarily
have other bytes they could pair it with, so it might often take up a
whole word. This is all fine with compound pages because we have all
this space in the tail pages. It's not good in order-0 pages.
So again, yes it would be nice to have generic destructors, but I just
don't see how it's practical.
Making network pages first-class objects instead makes a lot more
sense to me for the time being. It's not like it's some small random
driver usecase - it's the network stack!
> > Ah, I think I see what you were getting at with your prior email - at
> > first I thought your suggestion was that, since the driver may have
> > its own refcount, every put would need to check ref == 1 and call into
> > the driver if need be.
> >
> > Instead, and correct me if I'm wrong, it seems like what you're advocating is:
> > 1) The (opted in) driver no longer hangs onto the ref,
> > 2) Now refcount can go all the way to 0,
> > 3) And when it does, due to the special destructor this page has, it
> > goes back to the driver, rather than the system?
Yes, correct.
> > > > If you are going to claim a page flag then it would be much better to
> > > > have it more generic. Flags are really scarce and if all you care about
> > > > is PageHasDestructor() and provide one via page->dtor then the similar
> > > > mechanism can be reused by somebody else. Or does anything prevent that?
> > >
> > > I was suggesting to alias PG_owner_priv_1, which currently isn't used
> > > on network pages. We don't need to allocate a brandnew page flag.
> > >
> >
> > Just to be certain, is there any danger of having a page, that would
> > not be a network driver page originally, being inside __put_page(),
> > such that PG_owner_priv_1 is set (but with one of its other overloaded
> > meanings)?
>
> Yeah this is a real question. And from what Johannes is saying this
> might pose some problem for this specific flags. I still need to check
> all the users to be certain. One thing is clear though. PG_owner_priv_1
> is not a part of PAGE_FLAGS_CHECK_AT_FREE so it is not checked when a
> page is freed so it is possible that the flag is left behind when
> somebody does final put_page which makes things harder.
The swapcache holds a reference, so PG_swapcache is never set on the
final put.
PG_checked refers to the data stored in file pages. It's possible not
all filesystems clear it properly on truncate right now, but they
don't need it once the data itself has become invalid. I double
checked with Chris Mason.
PG_pinned pages are Xen pagetables, they aren't actually
refcounted. PG_xen_remapped pages are dma-remaps and aren't refcounted
either. PG_foreign pages appear refcounted, but AFAICS the foreign map
state is against a table that holds refs, so can't be set on put.
I'll audit the PageChecked sites more closely and send a patch to
tighten it up and add PG_owner_priv_1 to PAGE_FLAGS_CHECK_AT_FREE.
That would be better anyway.
> But that would be the case even when the flag is used for network
> specific handling because you cannot tell it from other potential users
> who are outside of networking...
For non-refcounted pages, we'll catch it in the page allocator.
For refcounted pages a bug is a bit trickier, but I don't think worse
than the baseline risk of aliased page flags in the first place, and
likely the network destructor would crash on a non-network page
(assuming there is a driver-specific callback/ops struct in there).
Powered by blists - more mailing lists