Message-ID: <a7d90d58-fa6a-7fa1-77c9-a08515746018@oracle.com>
Date:   Mon, 22 Mar 2021 16:07:29 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Shakeel Butt <shakeelb@...gle.com>,
        Oscar Salvador <osalvador@...e.de>,
        David Hildenbrand <david@...hat.com>,
        Muchun Song <songmuchun@...edance.com>,
        David Rientjes <rientjes@...gle.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Matthew Wilcox <willy@...radead.org>,
        HORIGUCHI NAOYA <naoya.horiguchi@....com>,
        "Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
        Waiman Long <longman@...hat.com>, Peter Xu <peterx@...hat.com>,
        Mina Almasry <almasrymina@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 2/8] hugetlb: recompute min_count when dropping
 hugetlb_lock

On 3/22/21 7:07 AM, Michal Hocko wrote:
> On Fri 19-03-21 15:42:03, Mike Kravetz wrote:
>> The routine set_max_huge_pages reduces the number of hugetlb pages
>> by calling free_pool_huge_page in a loop.  It does this as long as
>> persistent_huge_pages() is above a calculated min_count value.
>> However, this loop can conditionally drop hugetlb_lock, and in some
>> circumstances free_pool_huge_page itself can drop hugetlb_lock.  If
>> the lock is dropped, the counters could change and the calculated
>> min_count value may no longer be valid.
> 
> OK, this one looks like a fix for a real bug introduced by 55f67141a8927.
> Unless I am missing something, we could release pages which are already
> reserved.
>  
>> The routine try_to_free_low has the same issue.
>>
>> Recalculate min_count in each loop iteration as hugetlb_lock may have
>> been dropped.
>>
>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>> ---
>>  mm/hugetlb.c | 25 +++++++++++++++++++++----
>>  1 file changed, 21 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index d5be25f910e8..c537274c2a38 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2521,11 +2521,20 @@ static void __init report_hugepages(void)
>>  	}
>>  }
>>  
>> +static inline unsigned long min_hp_count(struct hstate *h, unsigned long count)
>> +{
>> +	unsigned long min_count;
>> +
>> +	min_count = h->resv_huge_pages + h->nr_huge_pages - h->free_huge_pages;
>> +	return max(count, min_count);
> 
> Just out of curiosity, is the compiler allowed to inline this piece of
> code and then cache the value? In other words, do we need to make these
> READ_ONCE or otherwise enforce the no-caching behavior?

I honestly do not know if the compiler is allowed to do that.  The
assembly code generated by my compiler does not cache the value, but
that does not guarantee anything.  I can add READ_ONCE to make the
function look something like:

static inline unsigned long min_hp_count(struct hstate *h, unsigned long count)
{
	unsigned long min_count;

	min_count = READ_ONCE(h->resv_huge_pages) + READ_ONCE(h->nr_huge_pages)
					- READ_ONCE(h->free_huge_pages);
	return max(count, min_count);
}
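
For context, the rest of the patch reworks the freeing loop so that
min_count is recomputed on every iteration.  Roughly (a sketch only,
assuming the existing loop structure; not the exact hunk):

	/*
	 * Sketch: recompute min_count each pass because
	 * free_pool_huge_page() and cond_resched_lock() may drop
	 * hugetlb_lock, after which the hstate counters can change.
	 */
	while (min_hp_count(h, count) < persistent_huge_pages(h)) {
		if (!free_pool_huge_page(h, nodes_allowed, 0))
			break;
		cond_resched_lock(&hugetlb_lock);
	}

If caching is a concern, the READ_ONCE annotations would at least
guarantee that each call does fresh loads of the counters, even if the
compiler inlines the helper into the loop and would otherwise be free to
reuse previously loaded values.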

-- 
Mike Kravetz
