Message-Id: <c495d3d1-fd72-6f27-4397-ffbed7b2b2d3@linux.vnet.ibm.com>
Date: Fri, 28 Jul 2017 11:23:34 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: Mike Kravetz <mike.kravetz@...cle.com>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: akpm@...ux-foundation.org
Subject: Re: [PATCH V3] mm/madvise: Enable (soft|hard) offline of HugeTLB
pages at PGD level
On 07/28/2017 06:19 AM, Mike Kravetz wrote:
> On 05/16/2017 03:05 AM, Anshuman Khandual wrote:
>> Though migrating gigantic HugeTLB pages does not sound like a common
>> real-world use case, such pages can still be affected by memory errors.
>> Hence migration of PGD-level HugeTLB pages should be supported, if only
>> to enable the soft and hard offline use cases.
>
> Hi Anshuman,
>
> Sorry for the late question, but I just stumbled on this code when
> looking at something else.
>
> It appears the primary motivation for these changes is to handle
> memory errors in gigantic pages.

Right.

> In this case, you migrate to another gigantic page.

Right.

> However, doesn't this assume that there is a pre-allocated gigantic
> page sitting unused that will be the target of the migration?
> alloc_huge_page_node will not allocate a gigantic page. Or, am I
> missing something?
Yes, it's in the context of 16GB pages on POWER8 systems, where all the
gigantic pages are pre-allocated by the platform and passed on to the
kernel through the device tree. We don't allocate these gigantic pages
at runtime.