Message-Id: <d3189584-4ddd-53b8-f412-57e378dbf7ca@linux.vnet.ibm.com>
Date: Wed, 19 Apr 2017 12:12:17 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: n-horiguchi@...jp.nec.com, akpm@...ux-foundation.org
Subject: Re: [RFC] mm/madvise: Enable (soft|hard) offline of HugeTLB pages at
PGD level
On 04/19/2017 11:50 AM, Aneesh Kumar K.V wrote:
> Anshuman Khandual <khandual@...ux.vnet.ibm.com> writes:
>
>> Though migrating gigantic HugeTLB pages does not sound like much of a
>> real world use case, they can be affected by memory errors. Hence
>> migration of PGD level HugeTLB pages should be supported just to enable
>> the soft and hard offline use cases.
>
> In that case do we want to isolate the entire 16GB range? Should we
> just dequeue the page from the hugepage pool, convert it to regular 64K
> pages and then isolate only the 64K page that had the memory error?
That would be the better thing to do, assuming we can actually dequeue
the huge page and push it back to the buddy allocator as normal 64K
pages. I need to check on this, as the original allocation happened from
memblock instead of the buddy allocator, but it should be possible given
that we do similar stuff during memory hot plug. In that case, should we
also consider the same for PMD based HugeTLB pages, or should it be done
only for these gigantic huge pages?
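
Just to make the alternative concrete, something along the lines of the
rough sketch below is what I understand from the suggestion. It is only
an illustration, not code from the patch: dissolve_free_huge_pages() and
soft_offline_page() are the existing helpers, hwpoison_dissolve_and_isolate()
is a made-up wrapper name, and it assumes the gigantic page is still free
in the pool and that dissolving a memblock-allocated gigantic page back
into buddy actually works (the open question above).

#include <linux/mm.h>
#include <linux/hugetlb.h>

/*
 * Illustrative only: dissolve the free gigantic page back into the
 * buddy allocator as base pages, then soft offline just the base page
 * that saw the memory error instead of the whole 16GB range.
 */
static int hwpoison_dissolve_and_isolate(unsigned long bad_pfn)
{
	struct page *page = pfn_to_page(bad_pfn);
	struct page *head = compound_head(page);
	unsigned long head_pfn = page_to_pfn(head);
	unsigned long nr_pages = 1UL << compound_order(head);
	int ret;

	if (!PageHuge(page))
		return -EINVAL;

	/* Only works while the hugepage sits free in the pool. */
	ret = dissolve_free_huge_pages(head_pfn, head_pfn + nr_pages);
	if (ret)
		return ret;

	/* Isolate only the affected base page, not the whole range. */
	return soft_offline_page(pfn_to_page(bad_pfn), 0);
}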