Message-ID: <5d334638-2139-07a1-c999-36a1729173fb@intel.com>
Date: Wed, 28 Mar 2018 10:15:02 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>, x86@...nel.org,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
Tom Lendacky <thomas.lendacky@....com>
Cc: Kai Huang <kai.huang@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCHv2 06/14] mm/page_alloc: Propagate encryption KeyID through
page allocator
On 03/28/2018 09:55 AM, Kirill A. Shutemov wrote:
> @@ -51,7 +51,7 @@ static inline struct page *new_page_nodemask(struct page *page,
> if (PageHighMem(page) || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
> gfp_mask |= __GFP_HIGHMEM;
>
> - new_page = __alloc_pages_nodemask(gfp_mask, order,
> + new_page = __alloc_pages_nodemask(gfp_mask, order, page_keyid(page),
> preferred_nid, nodemask);
You're not going to like this suggestion.
Am I looking at this too superficially, or does every single site into
which you pass keyid also take a node and gfpmask and often an order? I
think you need to run this by the keepers of page_alloc.c and see if
they'd rather do something more drastic.
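[Editorial illustration, not part of the original mail: a minimal userspace sketch of the API-shape trade-off raised above, i.e. threading an extra keyid parameter through every allocation call versus bundling the per-allocation inputs so new fields do not touch every call-site signature. This is not kernel code; alloc_pages_flat, alloc_pages_ctx, and struct alloc_ctx are hypothetical names, not the page allocator's actual interfaces.]

/*
 * Sketch only: contrasts a "flat" allocator signature, where each new
 * input (such as a keyid) must be added to every caller, with a
 * "bundled" one, where callers fill a context struct and new fields
 * can be added without changing call-site signatures.
 */
#include <stdio.h>
#include <stdlib.h>

/* Flat style: every caller passes flags, order, keyid, nid, ... */
static void *alloc_pages_flat(unsigned int flags, unsigned int order,
			      int keyid, int preferred_nid)
{
	(void)flags; (void)keyid; (void)preferred_nid;
	return malloc((size_t)4096 << order);
}

/* Bundled style: one struct carries the allocation inputs. */
struct alloc_ctx {
	unsigned int flags;
	unsigned int order;
	int keyid;
	int preferred_nid;
};

static void *alloc_pages_ctx(const struct alloc_ctx *ctx)
{
	return malloc((size_t)4096 << ctx->order);
}

int main(void)
{
	void *a = alloc_pages_flat(0, 0, /* keyid */ 1, /* nid */ -1);

	struct alloc_ctx ctx = { .order = 0, .keyid = 1, .preferred_nid = -1 };
	void *b = alloc_pages_ctx(&ctx);

	printf("flat=%p ctx=%p\n", a, b);
	free(a);
	free(b);
	return 0;
}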