Message-ID: <75ce83cf-8313-f2da-03f3-b407e262d1fc@oracle.com>
Date: Sun, 21 Mar 2021 12:55:25 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Michal Hocko <mhocko@...e.com>, Shakeel Butt <shakeelb@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Muchun Song <songmuchun@...edance.com>,
David Rientjes <rientjes@...gle.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Peter Zijlstra <peterz@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
HORIGUCHI NAOYA <naoya.horiguchi@....com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Waiman Long <longman@...hat.com>, Peter Xu <peterx@...hat.com>,
Mina Almasry <almasrymina@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 6/8] hugetlb: make free_huge_page irq safe
On 3/19/21 3:42 PM, Mike Kravetz wrote:
> Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in
> non-task context") was added to address the issue of free_huge_page
> being called from irq context. That commit hands off free_huge_page
> processing to a workqueue if !in_task. However, as seen in [1] this
> does not cover all cases. Instead, make the locks taken in the
> free_huge_page irq safe.
>
> This patch does the following:
> - Make hugetlb_lock irq safe. This is mostly a simple process of
> changing spin_*lock calls to spin_*lock_irq* calls.
> - Make subpool lock irq safe in a similar manner.
> - Revert the !in_task check and workqueue handoff.
>
> [1] https://lore.kernel.org/linux-mm/000000000000f1c03b05bc43aadc@google.com/
>
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> mm/hugetlb.c | 206 ++++++++++++++++++++------------------------
> mm/hugetlb_cgroup.c | 10 ++-
> 2 files changed, 100 insertions(+), 116 deletions(-)
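
For anyone reading along, the conversion described above is mostly
mechanical. A minimal sketch of the pattern (illustrative only;
my_lock and the two functions below are stand-ins, not the actual
hugetlb code):

/*
 * Since free_huge_page can run from irq context, any path that holds
 * the lock without disabling interrupts can deadlock against it.
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);	/* stands in for hugetlb_lock */

/* Caller known to have irqs enabled: the plain _irq variants suffice. */
static void free_path(void)
{
	spin_lock_irq(&my_lock);	/* was: spin_lock(&my_lock); */
	/* ... pull a page off the free list ... */
	spin_unlock_irq(&my_lock);	/* was: spin_unlock(&my_lock); */
}

/* May be entered with irqs already disabled: save and restore state. */
static void free_path_any_context(void)
{
	unsigned long flags;

	spin_lock_irqsave(&my_lock, flags);
	/* ... pull a page off the free list ... */
	spin_unlock_irqrestore(&my_lock, flags);
}
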
I missed the following changes:
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5efff5ce337f..13d77d94d185 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2803,10 +2803,10 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 			break;
 
 		/* Drop lock as free routines may sleep */
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page);
 		cond_resched();
-		spin_lock(&hugetlb_lock);
+		spin_lock_irqsave(&hugetlb_lock, flags);
 
 		/* Recompute min_count in case hugetlb_lock was dropped */
 		min_count = min_hp_count(h, count);
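
Note the hunk uses the irqsave/irqrestore variants with a saved flags
value rather than plain spin_lock_irq/spin_unlock_irq. spin_unlock_irq
unconditionally re-enables interrupts on unlock, while irqrestore puts
back whatever interrupt state the caller had; presumably the
save/restore form is the conservative choice here in case this path
can be entered with interrupts already disabled.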