Message-ID: <29cdbd0f-dbc2-1a72-15b7-55f81000fa9e@oracle.com>
Date: Tue, 16 Feb 2021 11:44:34 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Michal Hocko <mhocko@...e.com>,
Muchun Song <songmuchun@...edance.com>
Cc: Jonathan Corbet <corbet@....net>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de,
Mina Almasry <almasrymina@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Oscar Salvador <osalvador@...e.de>,
"Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
David Hildenbrand <david@...hat.com>,
HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>,
Joao Martins <joao.m.martins@...cle.com>,
Xiongchun duan <duanxiongchun@...edance.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [External] Re: [PATCH v15 4/8] mm: hugetlb: alloc the vmemmap
pages associated with each HugeTLB page

On 2/15/21 8:27 AM, Michal Hocko wrote:
> On Mon 15-02-21 23:36:49, Muchun Song wrote:
> [...]
>>> There shouldn't be any real reason why the memory allocation for
>>> vmemmaps, or handling vmemmap in general, has to be done from within the
>>> hugetlb lock and therefore requiring a non-sleeping semantic. All that
>>> can be deferred to a more relaxed context. If you want to make a
>>
>> Yeah, you are right. We can put the freeing hugetlb routine to a
>> workqueue. Just like I do in the previous version (before v13) patch.
>> I will pick up these patches.
>
> I haven't seen your v13 and I will unlikely have time to revisit that
> version. I just wanted to point out that the actual allocation doesn't
> have to happen from under the spinlock. There are multiple ways to go
> around that. Dropping the lock would be one of them. Preallocation
> before the spin lock is taken is another. WQ is certainly an option but
> I would take it as the last resort when other paths are not feasible.
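
(For concreteness, the "preallocation before the spin lock is taken"
option above might look roughly like the sketch below.  The helpers
alloc_vmemmap_pages(), hugetlb_vmemmap_restore() and
__free_huge_page_to_buddy() are made-up names used only for
illustration, not code from the series.

static void update_and_free_page_prealloc(struct hstate *h,
                                          struct page *page)
{
        LIST_HEAD(vmemmap_pages);

        /*
         * Still in a sleepable context, so the expensive allocation
         * can use GFP_KERNEL before hugetlb_lock is taken.
         */
        if (alloc_vmemmap_pages(h, &vmemmap_pages, GFP_KERNEL))
                return;         /* allocation failed, keep the huge page */

        spin_lock(&hugetlb_lock);
        /* Only non-sleeping work while the lock is held. */
        hugetlb_vmemmap_restore(h, page, &vmemmap_pages);
        __free_huge_page_to_buddy(h, page);
        spin_unlock(&hugetlb_lock);
}
)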

Sorry for jumping in late; Monday was a US holiday ...

IIRC, the point of moving the vmemmap allocations under the hugetlb_lock
was just for simplicity. The idea was to modify the allocations to be
non-blocking so that allocating pages and restoring vmemmap could be done
as part of normal huge page freeing where we are holding the lock. Perhaps
that is too simplistic an approach.
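
To make "non-blocking" concrete: while hugetlb_lock is held, the
per-page allocations would have to look something like the sketch
below.  vmemmap_pages_per_hpage() and free_vmemmap_page_list() are
hypothetical helpers used only for illustration.

static int alloc_vmemmap_nowait(struct hstate *h, struct list_head *list)
{
        unsigned long i, nr = vmemmap_pages_per_hpage(h);

        for (i = 0; i < nr; i++) {
                /* Must not sleep: hugetlb_lock would be held here. */
                struct page *page = alloc_pages(GFP_NOWAIT | __GFP_NOWARN, 0);

                if (!page) {
                        free_vmemmap_page_list(list);
                        return -ENOMEM;
                }
                list_add(&page->lru, list);
        }
        return 0;
}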

IMO, using the workqueue approach as done in previous patches introduces
too much complexity.

Michal did bring up the question "Do we really want to do all the vmemmap
allocation (even non-blocking) and manipulation under the hugetlb lock?"
I'm thinking the answer may be no. For 1G pages, this will require 4094
calls to alloc_pages. Even with non-blocking calls this seems like a long
time.
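
(For reference, that 4094 comes from roughly the following arithmetic,
assuming 4K base pages, a 64 byte struct page and 2 vmemmap pages kept
resident per 1G huge page:

        1G / 4K              = 262144 struct pages
        262144 * 64 bytes    = 16M of vmemmap
        16M / 4K             = 4096 vmemmap pages
        4096 - 2 resident    = 4094 calls to alloc_pages)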

If we are not going to do the allocations under the lock, then we will need
to either preallocate or take the workqueue approach. One complication with
preallocation is that we do not know for sure that we will be freeing the
huge page to buddy until we take the hugetlb_lock. This is because the
decision to free or not is based on counters protected by the lock. We
could of course check the counters without the lock to guess whether we
will be freeing the page, and then check again after acquiring the lock.
This may not be too bad in the case of freeing a single page, but would
become more complex when doing bulk freeing. After a little thought, the
workqueue approach may even end up simpler. However, I would suggest a very
simple workqueue implementation with non-blocking allocations. If we cannot
quickly get vmemmap pages, put the page back on the hugetlb free list and
treat it as a surplus page.
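
To be clear, the shape I have in mind for the worker is roughly the
following.  The queueing details are omitted, and alloc_vmemmap_nowait(),
hugetlb_vmemmap_restore() and __free_huge_page_to_buddy() are hypothetical
helpers meant only to show the idea; the surplus accounting just mimics
what hugetlb already does for surplus pages.

static void free_one_deferred_hpage(struct hstate *h, struct page *page)
{
        LIST_HEAD(vmemmap_pages);

        if (alloc_vmemmap_nowait(h, &vmemmap_pages)) {
                /*
                 * Could not get the vmemmap pages quickly: put the page
                 * back on the free list and account it as surplus.
                 */
                spin_lock(&hugetlb_lock);
                h->surplus_huge_pages++;
                h->surplus_huge_pages_node[page_to_nid(page)]++;
                enqueue_huge_page(h, page);
                spin_unlock(&hugetlb_lock);
                return;
        }

        spin_lock(&hugetlb_lock);
        hugetlb_vmemmap_restore(h, page, &vmemmap_pages);
        __free_huge_page_to_buddy(h, page);
        spin_unlock(&hugetlb_lock);
}
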
--
Mike Kravetz