Message-ID: <509ef752-cc2e-3edf-5871-87f971a7bc0f@oracle.com>
Date:   Thu, 7 Jan 2021 16:52:19 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Muchun Song <songmuchun@...edance.com>, akpm@...ux-foundation.org,
        n-horiguchi@...jp.nec.com, ak@...ux.intel.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/6] mm: hugetlb: fix a race between freeing and
 dissolving the page

On 1/7/21 12:40 AM, Michal Hocko wrote:
> On Wed 06-01-21 12:58:29, Mike Kravetz wrote:
>> On 1/6/21 8:56 AM, Michal Hocko wrote:
>>> On Wed 06-01-21 16:47:36, Muchun Song wrote:
>>>> There is a race condition between __free_huge_page()
>>>> and dissolve_free_huge_page().
>>>>
>>>> CPU0:                         CPU1:
>>>>
>>>> // page_count(page) == 1
>>>> put_page(page)
>>>>   __free_huge_page(page)
>>>>                               dissolve_free_huge_page(page)
>>>>                                 spin_lock(&hugetlb_lock)
>>>>                                 // PageHuge(page) && !page_count(page)
>>>>                                 update_and_free_page(page)
>>>>                                 // page is freed to the buddy
>>>>                                 spin_unlock(&hugetlb_lock)
>>>>     spin_lock(&hugetlb_lock)
>>>>     clear_page_huge_active(page)
>>>>     enqueue_huge_page(page)
>>>>     // It is wrong, the page is already freed
>>>>     spin_unlock(&hugetlb_lock)
>>>>
>>>> The race window is between put_page() and the spin_lock() in
>>>> __free_huge_page().
>>>
>>> The race window really is between put_page and dissolve_free_huge_page.
>>> The result is that the put_page path would clobber an unrelated page
>>> (either a free page or one that has already been reused), which is
>>> quite serious.  Fortunately, pages are dissolved very rarely, and I
>>> believe a user would need to be privileged to hit this intentionally.
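
To make the window concrete, below is a small userspace pthread model of
the same check-then-act pattern.  It is only a toy illustration, not
kernel code: the names fake_page, free_path() and dissolve_path() are
invented, a pthread mutex stands in for hugetlb_lock, and two flags stand
in for PageHuge() and the hstate free list.

/*
 * Toy userspace model of the window described above; illustration only,
 * not kernel code.
 *
 * Build: gcc -O2 -pthread race_model.c -o race_model
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	atomic_int refcount;	/* page_count()                        */
	bool is_huge;		/* PageHuge(); false once "dissolved"  */
	bool on_freelist;	/* queued on the hstate free list      */
};

static pthread_mutex_t hugetlb_lock = PTHREAD_MUTEX_INITIALIZER;
static struct fake_page page;

/* CPU0: final put_page() -> __free_huge_page() */
static void *free_path(void *arg)
{
	(void)arg;
	atomic_store(&page.refcount, 0);	/* last reference dropped */
	/* race window: hugetlb_lock is not held here */
	pthread_mutex_lock(&hugetlb_lock);
	page.on_freelist = true;		/* enqueue_huge_page()    */
	pthread_mutex_unlock(&hugetlb_lock);
	return NULL;
}

/* CPU1: dissolve_free_huge_page() */
static void *dissolve_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&hugetlb_lock);
	if (page.is_huge && atomic_load(&page.refcount) == 0) {
		page.on_freelist = false;	/* list_del() from free list */
		page.is_huge = false;		/* update_and_free_page():
						 * page returns to buddy    */
	}
	pthread_mutex_unlock(&hugetlb_lock);
	return NULL;
}

int main(void)
{
	long hits = 0;

	for (int i = 0; i < 100000; i++) {
		pthread_t t0, t1;

		page.is_huge = true;
		page.on_freelist = false;
		atomic_store(&page.refcount, 1);

		pthread_create(&t0, NULL, free_path, NULL);
		pthread_create(&t1, NULL, dissolve_path, NULL);
		pthread_join(t0, NULL);
		pthread_join(t1, NULL);

		/* a page "freed to buddy" must never end up queued */
		if (!page.is_huge && page.on_freelist)
			hits++;
	}
	printf("bad interleavings observed: %ld\n", hits);
	return 0;
}

Any run that lands in the window between the refcount drop and the
enqueue ends with a page that was already "freed to buddy" sitting on
the free list, which is the clobber described above.
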
>>>
>>>> We should make sure that the page is already on the free list
>>>> when it is dissolved.
>>>
>>> Another option would be to check for PageHuge in __free_huge_page.
>>> Have you considered that rather than adding yet another state?  The
>>> scope of the spinlock would have to be extended.  If that sounds too
>>> tricky, can we check the page->lru in the dissolve path?  If the page
>>> is still PageHuge with a reference count of 0, then there shouldn't be
>>> many places where it could be queued, right?
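
Sticking with the toy model above, the dissolve-side idea (only dissolve
a zero-ref huge page that is actually queued on the free list, roughly
what a list_empty(&page->lru) style check under hugetlb_lock would
provide) can be sketched as a drop-in replacement for dissolve_path().
Again, this is an illustration, not the kernel implementation.

/* Drop-in replacement for dissolve_path() in the model above. */
static void *dissolve_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&hugetlb_lock);
	if (page.is_huge && atomic_load(&page.refcount) == 0 &&
	    page.on_freelist) {			/* new: must already be queued */
		page.on_freelist = false;	/* list_del() from free list   */
		page.is_huge = false;		/* freed back to buddy         */
	}
	pthread_mutex_unlock(&hugetlb_lock);
	return NULL;
}

With the extra condition, the racy interleaving simply skips the page,
since the final put_page() has not enqueued it yet; a real dissolve path
would presumably need to retry or return busy rather than skip silently.
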
>>
>> The tricky part with expanding lock scope will be the potential call to
>> hugepage_subpool_put_pages as it may also try to acquire the hugetlb_lock.
> 
> Can we rearrange the code and move hugepage_subpool_put_pages after all
> this is done? Or is there any strong reason for the particular ordering?

The reservation code is so fragile that I always get nervous when
making any changes.  However, the straightforward patch below passes
some simple testing.  The only difference I can see is that global
counts are now adjusted before sub-pool counts.  This should not be an
issue, as global and sub-pool counts are adjusted independently (not
under the same lock).  Allocation code checks sub-pool counts before
global counts, so there is a SMALL potential that a racing allocation
which previously would have succeeded will now fail.  I do not think
this is an issue in practice.

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3b38ea958e95..658593840212 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1395,6 +1395,13 @@ static void __free_huge_page(struct page *page)
 		(struct hugepage_subpool *)page_private(page);
 	bool restore_reserve;
 
+	spin_lock(&hugetlb_lock);
+	/* check for race with dissolve_free_huge_page/update_and_free_page */
+	if (!PageHuge(page)) {
+		spin_unlock(&hugetlb_lock);
+		return;
+	}
+
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(page_mapcount(page), page);
 
@@ -1403,26 +1410,6 @@ static void __free_huge_page(struct page *page)
 	restore_reserve = PagePrivate(page);
 	ClearPagePrivate(page);
 
-	/*
-	 * If PagePrivate() was set on page, page allocation consumed a
-	 * reservation.  If the page was associated with a subpool, there
-	 * would have been a page reserved in the subpool before allocation
-	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
-	 * reservtion, do not call hugepage_subpool_put_pages() as this will
-	 * remove the reserved page from the subpool.
-	 */
-	if (!restore_reserve) {
-		/*
-		 * A return code of zero implies that the subpool will be
-		 * under its minimum size if the reservation is not restored
-		 * after page is free.  Therefore, force restore_reserve
-		 * operation.
-		 */
-		if (hugepage_subpool_put_pages(spool, 1) == 0)
-			restore_reserve = true;
-	}
-
-	spin_lock(&hugetlb_lock);
 	clear_page_huge_active(page);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
 				     pages_per_huge_page(h), page);
@@ -1446,6 +1433,28 @@ static void __free_huge_page(struct page *page)
 		enqueue_huge_page(h, page);
 	}
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * If PagePrivate() was set on page, page allocation consumed a
+	 * reservation.  If the page was associated with a subpool, there
+	 * would have been a page reserved in the subpool before allocation
+	 * via hugepage_subpool_get_pages().  Since we are 'restoring' the
+	 * reservation, do not call hugepage_subpool_put_pages() as this will
+	 * remove the reserved page from the subpool.
+	 */
+	if (!restore_reserve) {
+		/*
+		 * A return code of zero implies that the subpool will be
+		 * under its minimum size if the reservation is not restored
+	 * after the page is freed.  Therefore, we need to add 1 to the
+		 * global reserve count.
+		 */
+		if (hugepage_subpool_put_pages(spool, 1) == 0) {
+			spin_lock(&hugetlb_lock);
+			h->resv_huge_pages++;
+			spin_unlock(&hugetlb_lock);
+		}
+	}
 }
 
 /*
