Message-ID: <20210108082858.GV13207@dhcp22.suse.cz>
Date:   Fri, 8 Jan 2021 09:28:58 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Muchun Song <songmuchun@...edance.com>, akpm@...ux-foundation.org,
        n-horiguchi@...jp.nec.com, ak@...ux.intel.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/6] mm: hugetlb: fix a race between freeing and
 dissolving the page

On Thu 07-01-21 16:52:19, Mike Kravetz wrote:
> On 1/7/21 12:40 AM, Michal Hocko wrote:
> > On Wed 06-01-21 12:58:29, Mike Kravetz wrote:
> >> On 1/6/21 8:56 AM, Michal Hocko wrote:
> >>> On Wed 06-01-21 16:47:36, Muchun Song wrote:
> >>>> There is a race condition between __free_huge_page()
> >>>> and dissolve_free_huge_page().
> >>>>
> >>>> CPU0:                         CPU1:
> >>>>
> >>>> // page_count(page) == 1
> >>>> put_page(page)
> >>>>   __free_huge_page(page)
> >>>>                               dissolve_free_huge_page(page)
> >>>>                                 spin_lock(&hugetlb_lock)
> >>>>                                 // PageHuge(page) && !page_count(page)
> >>>>                                 update_and_free_page(page)
> >>>>                                 // page is freed to the buddy
> >>>>                                 spin_unlock(&hugetlb_lock)
> >>>>     spin_lock(&hugetlb_lock)
> >>>>     clear_page_huge_active(page)
> >>>>     enqueue_huge_page(page)
> >>>>     // It is wrong, the page is already freed
> >>>>     spin_unlock(&hugetlb_lock)
> >>>>
> >>>> The race window is between put_page() and the spin_lock() in
> >>>> __free_huge_page().
> >>>
> >>> The race window really is between put_page and dissolve_free_huge_page.
> >>> And the result is that the put_page path would clobber an unrelated
> >>> page (either a free page or one that has already been reused), which
> >>> is quite serious. Fortunately, pages are dissolved very rarely. I
> >>> believe a user would need to be privileged to trigger this
> >>> intentionally.
> >>>
> >>>> We should make sure that the page is already on the free list
> >>>> when it is dissolved.
> >>>
> >>> Another option would be to check for PageHuge in __free_huge_page. Have
> >>> you considered that rather than adding yet another state? The scope of
> >>> the spinlock would have to be extended. If that sounds too tricky, then
> >>> can we check the page->lru in the dissolve path? If the page is still
> >>> PageHuge with a reference count of 0, then there shouldn't be many
> >>> places where it can be queued, right?
> >>
> >> The tricky part with expanding the lock scope is the potential call to
> >> hugepage_subpool_put_pages(), as it may also try to acquire hugetlb_lock.
> > 
> > Can we rearrange the code and move hugepage_subpool_put_pages after all
> > this is done? Or is there any strong reason for the particular ordering?
> 
> The reservation code is so fragile that I always get nervous when making
> any changes.  However, the straightforward patch below passes some
> simple testing.  The only difference I can see is that global counts
> are adjusted before sub-pool counts.  This should not be an issue, as
> global and sub-pool counts are adjusted independently (not under the
> same lock).  Allocation code checks sub-pool counts before global
> counts.  So, there is a SMALL potential that a racing allocation which
> previously would have succeeded could now fail.  I do not think this is
> an issue in practice.
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3b38ea958e95..658593840212 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1395,6 +1395,11 @@ static void __free_huge_page(struct page *page)
>  		(struct hugepage_subpool *)page_private(page);
>  	bool restore_reserve;
>  
> +	spin_lock(&hugetlb_lock);
> +	/* check for race with dissolve_free_huge_page/update_and_free_page */
> +	if (!PageHuge(page))
> +		return;
> +

This really wants to unlock the lock, right? But this is indeed what
I had in mind.
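
For illustration, a minimal sketch of that same check with the lock
released on the early-return path (an editorial sketch against the
__free_huge_page() context quoted above, not part of the posted patch):

	spin_lock(&hugetlb_lock);
	/* check for race with dissolve_free_huge_page/update_and_free_page */
	if (!PageHuge(page)) {
		/*
		 * dissolve_free_huge_page() already released this page to
		 * the buddy allocator; drop the lock and bail out rather
		 * than touching the (now unrelated) page.
		 */
		spin_unlock(&hugetlb_lock);
		return;
	}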
-- 
Michal Hocko
SUSE Labs
