Message-ID: <20191217185557.tgtsvaad24j745gf@linux-p48b>
Date: Tue, 17 Dec 2019 10:55:57 -0800
From: Davidlohr Bueso <dave@...olabs.net>
To: Waiman Long <longman@...hat.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Matthew Wilcox <willy@...radead.org>,
Andi Kleen <ak@...ux.intel.com>,
Michal Hocko <mhocko@...nel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Kirill Tkhai <ktkhai@...tuozzo.com>
Subject: Re: [PATCH v3] mm/hugetlb: Defer freeing of huge pages if in
non-task context
On Tue, 17 Dec 2019, Waiman Long wrote:
>Both the hugetlb_lock and the subpool lock can be acquired in
>free_huge_page(). One way to solve the problem is to make both locks
>irq-safe. However, Mike Kravetz had learned that the hugetlb_lock is
>held for a linear scan of ALL hugetlb pages during a cgroup reparenting
>operation. So it is just too long to have irq disabled unless we can
>break hugetlb_lock down into finer-grained locks with shorter lock
>hold times.
>
>Another alternative is to defer the freeing to a workqueue job. This
>patch implements the deferred freeing by adding a free_hpage_workfn()
>work function to do the actual freeing. The free_huge_page() call in
>a non-task context saves the page to be freed in the hpage_freelist
>linked list in a lockless manner using the llist APIs.
>
>The generic workqueue is used to process the work, but a dedicated
>workqueue can be used instead if it is desirable to have the huge page
>freed ASAP.
>
>Thanks to Kirill Tkhai <ktkhai@...tuozzo.com> for suggesting the use
>of llist APIs which simplify the code.
>
> [v2: Add more comment & remove unneeded racing check]
> [v3: Update commit log, remove pr_debug & use llist APIs]
Very creative, reusing the mapping pointer along with the llist API;
this solves the problem nicely (temporarily, at least).
Two small nits below.
Acked-by: Davidlohr Bueso <dbueso@...e.de>
>Reported-by: Aneesh Kumar K.V <aneesh.kumar@...ux.ibm.com>
>Signed-off-by: Waiman Long <longman@...hat.com>
>---
> mm/hugetlb.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 50 insertions(+), 1 deletion(-)
>
>diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>+static LLIST_HEAD(hpage_freelist);
>+
>+static void free_hpage_workfn(struct work_struct *work)
>+{
>+ struct llist_node *node;
>+ struct page *page;
>+
>+ node = llist_del_all(&hpage_freelist);
>+
>+ while (node) {
>+ page = container_of((struct address_space **)node,
>+ struct page, mapping);
>+ node = node->next;
llist_next()
>+ __free_huge_page(page);
>+ }
>+}
>+static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
>+
>+void free_huge_page(struct page *page)
>+{
>+ /*
>+ * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
>+ */
>+ if (!in_task()) {
unlikely()?
Thanks,
Davidlohr