Message-ID: <CAMZfGtX0a3am_PLy3L8pmPW1qU+zwFJqUzzXd3e0Tt3Nen6SpA@mail.gmail.com>
Date: Mon, 25 Jan 2021 11:58:27 +0800
From: Muchun Song <songmuchun@...edance.com>
To: David Rientjes <rientjes@...gle.com>
Cc: Jonathan Corbet <corbet@....net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de,
Mina Almasry <almasrymina@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...e.com>,
"Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
David Hildenbrand <david@...hat.com>,
HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>,
Xiongchun duan <duanxiongchun@...edance.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [External] Re: [PATCH v13 04/12] mm: hugetlb: defer freeing of
HugeTLB pages
On Mon, Jan 25, 2021 at 7:55 AM David Rientjes <rientjes@...gle.com> wrote:
>
>
> On Sun, 17 Jan 2021, Muchun Song wrote:
>
> > In the subsequent patch, we will need to allocate the vmemmap pages
> > when freeing HugeTLB pages. But update_and_free_page() is always
> > called while holding hugetlb_lock, so we cannot use GFP_KERNEL to
> > allocate vmemmap pages. However, we can defer the actual freeing to a
> > kworker to avoid having to allocate the vmemmap pages with GFP_ATOMIC.
> >
> > update_hpage_vmemmap_workfn() is where the call to allocate vmemmap
> > pages will be inserted.
> >
>
> I think it's reasonable to assume that userspace can release free hugetlb
> pages from the pool on oom conditions when reclaim has become too
> expensive. This approach now requires that we can allocate vmemmap pages
> in a potential oom condition as a prerequisite for freeing memory, which
> seems less than ideal.
>
> And, by doing this through a kworker, we can presumably get queued behind
> another work item that requires memory to make forward progress in this
> oom condition.
>
> Two thoughts:
>
> - We're only going to free the hugetlb page after we have allocated the
>   vmemmap pages, so why do we need to allocate with GFP_KERNEL? Can't we
> simply dip into memory reserves using GFP_ATOMIC (and thus can be
> holding hugetlb_lock) because we know we'll be freeing more memory than
> we'll be allocating?
Right.
> I think requiring a GFP_KERNEL allocation to block
> to free memory for vmemmap when we'll be freeing memory ourselves is
> dubious. This simplifies all of this.
Thanks for your thoughts. I just thought that with GFP_KERNEL we can
go into reclaim when the system is out of memory, but we cannot block
here. And we cannot handle failure of the allocation either. In the
next patch, when the allocation fails, I sleep for 100ms and then
retry.
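Just to illustrate, the retry would look roughly like this (a sketch
only; alloc_huge_page_vmemmap() is an assumed helper name returning
non-zero on failure, not necessarily what the next patch uses):

static void free_hpage_vmemmap_retry(struct hstate *h, struct page *page)
{
	/*
	 * We cannot report failure to the code freeing the HugeTLB
	 * page, so sleep for 100ms and retry until the vmemmap pages
	 * are successfully allocated. This runs in the kworker, so
	 * sleeping is allowed.
	 */
	while (alloc_huge_page_vmemmap(h, page))
		msleep(100);
}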
>
> - If the answer is that we actually have to use GFP_KERNEL for other
> reasons, what are your thoughts on pre-allocating the vmemmap as opposed
> to deferring to a kworker? In other words, preallocate the necessary
> memory with GFP_KERNEL and put it on a linked list in struct hstate
> before acquiring hugetlb_lock.
put_page() can be called from atomic context, so we cannot sleep in
__free_huge_page(). That makes pre-allocating there a little tricky.
Right?
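To make the point concrete, the pre-allocation idea would be something
like the sketch below (free_vmemmap_pages_per_hpage() and the
vmemmap_pages list in struct hstate are assumptions for illustration):

static int preallocate_vmemmap(struct hstate *h)
{
	struct page *page, *next;
	LIST_HEAD(pages);
	unsigned int i;

	/* The GFP_KERNEL allocations must happen before taking hugetlb_lock. */
	for (i = 0; i < free_vmemmap_pages_per_hpage(h); i++) {
		page = alloc_page(GFP_KERNEL);
		if (!page)
			goto out_free;
		list_add(&page->lru, &pages);
	}

	spin_lock(&hugetlb_lock);
	list_splice(&pages, &h->vmemmap_pages);	/* assumed per-hstate list */
	spin_unlock(&hugetlb_lock);
	return 0;

out_free:
	list_for_each_entry_safe(page, next, &pages, lru) {
		list_del(&page->lru);
		__free_page(page);
	}
	return -ENOMEM;
}

But there is no such convenient pre-allocation point on the
put_page() -> __free_huge_page() path, which may be atomic.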
>
> > Signed-off-by: Muchun Song <songmuchun@...edance.com>
> > Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
> > ---
> > mm/hugetlb.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> > mm/hugetlb_vmemmap.c | 12 ---------
> > mm/hugetlb_vmemmap.h | 17 ++++++++++++
> > 3 files changed, 89 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 140135fc8113..c165186ec2cf 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1292,15 +1292,85 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> > unsigned int order) { }
> > #endif
> >
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static void __free_hugepage(struct hstate *h, struct page *page);
> > +
> > +/*
> > + * Since update_and_free_page() is always called while holding hugetlb_lock,
> > + * we cannot use GFP_KERNEL to allocate vmemmap pages. However, we can defer
> > + * the actual freeing to a workqueue to avoid having to allocate the vmemmap
> > + * pages with GFP_ATOMIC.
> > + *
> > + * update_hpage_vmemmap_workfn() is where the call to allocate vmemmap
> > + * pages will be inserted.
> > + *
> > + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of pages
> > + * to be freed and frees them one-by-one. As the page->mapping pointer is going
> > + * to be cleared in update_hpage_vmemmap_workfn() anyway, it is reused as the
> > + * llist_node structure of a lockless linked list of huge pages to be freed.
> > + */
> > +static LLIST_HEAD(hpage_update_freelist);
> > +
> > +static void update_hpage_vmemmap_workfn(struct work_struct *work)
> > {
> > - int i;
> > + struct llist_node *node;
> > +
> > + node = llist_del_all(&hpage_update_freelist);
> > +
> > + while (node) {
> > + struct page *page;
> > + struct hstate *h;
> > +
> > + page = container_of((struct address_space **)node,
> > + struct page, mapping);
> > + node = node->next;
> > + page->mapping = NULL;
> > + h = page_hstate(page);
> > +
> > + spin_lock(&hugetlb_lock);
> > + __free_hugepage(h, page);
> > + spin_unlock(&hugetlb_lock);
> >
> > + cond_resched();
>
> Wouldn't it be better to hold hugetlb_lock for the iteration rather than
> constantly dropping it and reacquiring it? Use
> cond_resched_lock(&hugetlb_lock) instead?
Great. We can use it. Thanks.
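For the record, the loop would then become something like this (a
sketch of the quoted function with your suggestion applied):

static void update_hpage_vmemmap_workfn(struct work_struct *work)
{
	struct llist_node *node;

	node = llist_del_all(&hpage_update_freelist);

	/* Hold hugetlb_lock across the whole iteration ... */
	spin_lock(&hugetlb_lock);
	while (node) {
		struct page *page;
		struct hstate *h;

		page = container_of((struct address_space **)node,
				    struct page, mapping);
		node = node->next;
		page->mapping = NULL;
		h = page_hstate(page);

		__free_hugepage(h, page);

		/* ... dropping and reacquiring it only when a resched is due. */
		cond_resched_lock(&hugetlb_lock);
	}
	spin_unlock(&hugetlb_lock);
}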