Message-ID: <7f1f3c92-725f-0cc9-3dc8-420c4e9c96ec@oracle.com>
Date: Wed, 24 Mar 2021 09:43:51 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Peter Zijlstra <peterz@...radead.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Shakeel Butt <shakeelb@...gle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Muchun Song <songmuchun@...edance.com>,
David Rientjes <rientjes@...gle.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Matthew Wilcox <willy@...radead.org>,
HORIGUCHI NAOYA <naoya.horiguchi@....com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Waiman Long <longman@...hat.com>, Peter Xu <peterx@...hat.com>,
Mina Almasry <almasrymina@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 2/8] hugetlb: recompute min_count when dropping
hugetlb_lock
On 3/24/21 1:36 AM, Michal Hocko wrote:
> On Tue 23-03-21 16:18:08, Mike Kravetz wrote:
> [...]
>> Here is another thought.
>> In patch 5 you suggest removing all pages from hugetlb with the lock
>> held, and adding them to a list. Then, drop the lock and free all
>> pages on the list. If we do this, then the value computed here (min_count)
>> cannot change while we are looping, so this patch would be unnecessary.
>> That is another argument in favor of batching the frees.
>>
>> Unless there is something wrong in my thinking, I am going to take that
>> approach and drop this patch.
>
> Makes sense
>
I still think this is the way to go in this series.
However, Muchun's "Free some vmemmap pages of HugeTLB page" series would
likely want to drop the lock for each page, as the free operation there
may fail (restoring the vmemmap requires an allocation that can fail).
So, we may end up back with one lock cycle per page. That is something
that will be discussed in that series.
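
For reference, a rough sketch of the batching idea (function names such
as remove_pool_huge_page() and update_and_free_pages_bulk() are
illustrative placeholders, not necessarily what the final patches use):

	LIST_HEAD(page_list);

	spin_lock(&hugetlb_lock);
	/*
	 * Collect pages on a local list while holding the lock.  Nothing
	 * is actually freed in this loop, so the min_count computed
	 * before the loop remains valid throughout.
	 */
	while (min_count < persistent_huge_pages(h)) {
		page = remove_pool_huge_page(h, nodes_allowed);
		if (!page)
			break;
		list_add(&page->lru, &page_list);
	}
	spin_unlock(&hugetlb_lock);

	/* Free everything with the lock dropped. */
	update_and_free_pages_bulk(h, &page_list);

With the vmemmap series, the bulk free step could no longer
unconditionally free each page: a failed vmemmap restore would require
retaking the lock to return the page to the pool, which is the per-page
lock cycle mentioned above.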
--
Mike Kravetz