Message-Id: <877f2ghqaf.fsf@skywalker.in.ibm.com>
Date: Wed, 19 Apr 2017 11:50:24 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
To: Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: n-horiguchi@...jp.nec.com, akpm@...ux-foundation.org
Subject: Re: [RFC] mm/madvise: Enable (soft|hard) offline of HugeTLB pages at PGD level
Anshuman Khandual <khandual@...ux.vnet.ibm.com> writes:
> Though migrating gigantic HugeTLB pages does not sound much like a real
> world use case, they can be affected by memory errors. Hence migration
> of PGD level HugeTLB pages should be supported just to enable the soft
> and hard offline use cases.
In that case do we want to isolate the entire 16GB range? Or should we
just dequeue the page from the hugepage pool, convert it to regular 64K
pages, and then isolate only the 64K page that had the memory error?
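
A rough sketch of what I have in mind (dissolve_free_huge_page() and
soft_offline_page() do exist in the kernel, but their exact signatures
vary across versions, so treat this only as a sketch, not patch code):

	#include <linux/mm.h>
	#include <linux/hugetlb.h>

	/*
	 * Instead of migrating the whole 16GB gigantic page, break
	 * it back into regular base pages and offline only the one
	 * base page that saw the error.
	 */
	static int offline_bad_base_page(struct page *gigantic_head,
					 struct page *bad_page)
	{
		int ret;

		/* Dequeue the free gigantic page from the hugepage
		 * pool and convert it to regular base pages. */
		ret = dissolve_free_huge_page(gigantic_head);
		if (ret)
			return ret;

		/* Isolate just the faulty base page. */
		return soft_offline_page(bad_page, 0);
	}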
>
> While allocating the new gigantic HugeTLB page, it should not matter
> whether the new page comes from the same node or not. There will be
> very few gigantic pages on the system after all, so we should not be
> bothered about node locality when trying to save a big page from
> crashing.
>
> This introduces a new HugeTLB allocator called alloc_gigantic_page()
> which will scan over all online nodes on the system and allocate a
> single HugeTLB page.
>
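For illustration, the proposed scan could look roughly like the
following (alloc_gigantic_page_node() here is a hypothetical per-node
helper standing in for whatever the patch actually uses):

	#include <linux/hugetlb.h>
	#include <linux/nodemask.h>

	/*
	 * Sketch of the allocator described above: walk every online
	 * node and return the first gigantic page that can be
	 * allocated, ignoring node locality.
	 */
	static struct page *alloc_gigantic_page(struct hstate *h)
	{
		struct page *page;
		int nid;

		for_each_online_node(nid) {
			page = alloc_gigantic_page_node(h, nid);
			if (page)
				return page;
		}
		return NULL;
	}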
-aneesh