Message-ID: <CALvZod5L1C1DV_DVs9O3xZm6CJnriunAoj89YLDdCp7ef5yBxA@mail.gmail.com>
Date: Mon, 22 Nov 2021 10:40:54 -0800
From: Shakeel Butt <shakeelb@...gle.com>
To: David Hildenbrand <david@...hat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Yang Shi <shy828301@...il.com>, Zi Yan <ziy@...dia.com>,
Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: split thp synchronously on MADV_DONTNEED
On Mon, Nov 22, 2021 at 12:32 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 20.11.21 21:12, Shakeel Butt wrote:
> > Many applications do sophisticated management of their heap memory
> > for better performance while keeping the cost low. We have a bunch of
> > such applications running in production; examples include caching and
> > data storage services. These applications keep their hot data on THPs
> > for better performance and release the cold data through
> > MADV_DONTNEED to keep the memory cost low.
> >
> > The kernel defers the split and release of THPs until there is memory
> > pressure. This complicates the memory management of these
> > sophisticated applications, which then need to look into low-level
> > kernel handling of THPs to better gauge their headroom for expansion.
>
> Can you elaborate a bit on that point? What exactly does such an
> application do? I would have assumed that it's mostly transparent for
> applications.
>
The application monitors its cgroup usage to decide whether it can
expand its memory footprint or should release some (unneeded/cold)
buffers. It calls madvise(MADV_DONTNEED) to release the memory, which
basically puts the THPs onto the deferred split list. These deferred
THPs are still charged to the cgroup, which bloats the usage the
application reads and leads it to make wrong decisions. Internally we
added a cgroup interface to trigger the split of deferred THPs for that
cgroup, but this is hacky and exposes kernel internals to users. I want
to solve this problem in a more general way for the users.
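
To make that concrete, here is a minimal userspace sketch of the
pattern (the 64MiB buffer size, the cgroup v2 memory.current path, and
the assumption that the mapping is backed by 2MiB THPs are all
illustrative, not our production code):

#include <stdio.h>
#include <sys/mman.h>

#define THP_SIZE (2UL << 20)	/* assumes 2MiB THPs on x86-64 */

/* Illustrative: read this cgroup's usage via cgroup v2 memory.current. */
static long long read_usage(void)
{
	long long usage = -1;
	FILE *f = fopen("/sys/fs/cgroup/memory.current", "r");

	if (f) {
		if (fscanf(f, "%lld", &usage) != 1)
			usage = -1;
		fclose(f);
	}
	return usage;
}

int main(void)
{
	size_t len = 64UL << 20;	/* hypothetical 64MiB cold buffer */
	size_t i;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	madvise(buf, len, MADV_HUGEPAGE);

	/* Fault the buffer in; assumes the mapping ends up THP-backed
	 * (real code would also 2MiB-align the mapping). */
	for (i = 0; i < len; i += THP_SIZE)
		buf[i] = 1;

	printf("usage before: %lld\n", read_usage());

	/*
	 * Release the cold first half of each THP. Each partially
	 * unmapped THP only lands on the deferred split list, so its
	 * full 2MiB stays charged and memory.current reads bloated
	 * until the deferred split finally runs under pressure.
	 */
	for (i = 0; i < len; i += THP_SIZE)
		madvise(buf + i, THP_SIZE / 2, MADV_DONTNEED);

	printf("usage after:  %lld\n", read_usage());
	return 0;
}
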
> > In
> > addition, these applications are very latency sensitive and would
> > prefer not to face memory reclaim, given its non-deterministic nature.
>
> That makes sense.
>
> >
> > This patch lets such applications not worry about the low-level
> > handling of THPs in the kernel and splits the THPs synchronously on
> > MADV_DONTNEED.
>
> The main user I'm concerned about is virtio-balloon, which ends up
> discarding VM memory via MADV_DONTNEED at 4k granularity when inflating
> the balloon in the guest, but also continuously during "free page
> reporting", e.g., whenever a 2MiB page becomes free in the guest. We
> want both activities to be fast and, especially during "free page
> reporting", to defer any heavy work.
Thanks for the info. What does virtio-balloon use as the source of free pages?
>
> Do we have a performance evaluation how much overhead is added e.g., for
> a single 4k MADV_DONTNEED call on a THP or on a MADV_DONTNEED call that
> covers the whole THP?
I did a simple benchmark of madvise(MADV_DONTNEED) on 10000 THPs on
x86 for both settings you suggested. I don't see any statistically
significant difference between runs with and without the patch. Let me
know if you want me to try something else.
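
For reference, the benchmark was roughly the sketch below (a
simplification of what I ran, not the exact harness; the 2MiB THP size,
the MADV_HUGEPAGE setup, and the mapping actually being THP-backed are
assumptions):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define NR_THPS  10000UL
#define THP_SIZE (2UL << 20)	/* assumes 2MiB THPs on x86-64 */

int main(int argc, char **argv)
{
	/* "4k": zap only the first 4KiB of each THP (exercises the
	 * split path); otherwise zap each THP in full. */
	size_t chunk = (argc > 1 && !strcmp(argv[1], "4k"))
			? 4096 : THP_SIZE;
	size_t len = NR_THPS * THP_SIZE;	/* needs ~20GiB of memory */
	struct timespec t0, t1;
	size_t i;

	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	madvise(buf, len, MADV_HUGEPAGE);

	/* Fault everything in, one touch per (hopefully) THP. */
	for (i = 0; i < len; i += THP_SIZE)
		buf[i] = 1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < len; i += THP_SIZE)
		madvise(buf + i, chunk, MADV_DONTNEED);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.3f ms\n",
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);
	return 0;
}
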
Thanks for the review.
Shakeel