Message-ID: <875y09d5d8.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 04 Jan 2024 13:39:31 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Gregory Price <gregory.price@...verge.com>
Cc: Gregory Price <gourry.memverge@...il.com>, <linux-mm@...ck.org>,
<linux-doc@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linux-api@...r.kernel.org>,
<x86@...nel.org>, <akpm@...ux-foundation.org>, <arnd@...db.de>,
<tglx@...utronix.de>, <luto@...nel.org>, <mingo@...hat.com>,
<bp@...en8.de>, <dave.hansen@...ux.intel.com>, <hpa@...or.com>,
<mhocko@...nel.org>, <tj@...nel.org>, <corbet@....net>,
<rakie.kim@...com>, <hyeongtak.ji@...com>, <honggyu.kim@...com>,
<vtavarespetr@...ron.com>, <peterz@...radead.org>,
<jgroves@...ron.com>, <ravis.opensrc@...ron.com>,
<sthanneeru@...ron.com>, <emirakhur@...ron.com>, <Hasan.Maruf@....com>,
<seungjun.ha@...sung.com>, Srinivasulu Thanneeru
<sthanneeru.opensrc@...ron.com>
Subject: Re: [PATCH v5 02/11] mm/mempolicy: introduce
MPOL_WEIGHTED_INTERLEAVE for weighted interleaving
Gregory Price <gregory.price@...verge.com> writes:
> On Wed, Jan 03, 2024 at 01:46:56PM +0800, Huang, Ying wrote:
>> Gregory Price <gregory.price@...verge.com> writes:
>> > I'm specifically concerned about:
>> > weighted_interleave_nid
>> > alloc_pages_bulk_array_weighted_interleave
>> >
>> > I'm unsure whether kmalloc/kfree is safe (and non-offensive) in those
>> > contexts. If kmalloc/kfree is safe, fine; this problem is trivial.
>> >
>> > If not, there is no good solution to this without pre-allocating a
>> > scratch area per-task.
>>
>> You need to audit whether it's safe for all callers. I guess that
>> pages are allocated after these calls, so you can use the same GFP
>> flags here.
>>
>
> After picking away at it, I realized that this code is usually going
> to get called during page fault handling - duh. So kmalloc is almost
> never safe (or can fail), and it's nasty to try to handle those errors.
Why not just OOM for allocation failure?
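
For illustration, something like the following (a sketch only, reusing
the function name from the patch; the body is elided and the details
are illustrative, not a final implementation):

/*
 * Sketch: allocate the temporary weight array with the caller's GFP
 * flags, and fail the bulk allocation the same way a failed page
 * allocation would.
 */
static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
		struct mempolicy *pol, unsigned long nr_pages,
		struct page **page_array)
{
	u8 *weights;

	/* Temporary scratch, sized by the possible node count. */
	weights = kmalloc(nr_node_ids, gfp);
	if (!weights)
		return 0;	/* caller sees an ordinary allocation failure */

	/* ... snapshot the weights and distribute nr_pages ... */

	kfree(weights);
	return nr_pages;
}
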
> Instead of doing that, I simply chose to implement the scratch space
> in the mempolicy structure
>
> mempolicy->wil.scratch_weights[MAX_NUMNODES].
>
> We eat an extra 1kb of memory in the mempolicy, but it gives us a safe
> scratch space we can use any time the task is allocating memory, and
> prevents the need for any fancy error handling. That seems like a
> perfectly reasonable tradeoff.
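
If I understand correctly, that amounts to something like this (a
sketch; the struct layout is inferred from your description above, not
taken from the patch):

struct weighted_interleave_state {
	/*
	 * Persistent scratch: roughly 1KB when MAX_NUMNODES is 1024,
	 * carried for the lifetime of every mempolicy whether or not
	 * weighted interleave is in use.
	 */
	u8 scratch_weights[MAX_NUMNODES];
};

struct mempolicy {
	/* ... existing fields ... */
	struct weighted_interleave_state wil;
};
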
I don't think that this is a good idea. The weight array is temporary;
it doesn't need to occupy space in every mempolicy for the policy's
whole lifetime.
--
Best Regards,
Huang, Ying