Message-ID: <d45de441-ff6f-463b-97d1-49341852dd14@linux.alibaba.com>
Date: Wed, 6 Mar 2024 16:58:23 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: akpm@...ux-foundation.org, muchun.song@...ux.dev, david@...hat.com,
linmiaohe@...wei.com, naoya.horiguchi@....com, mhocko@...nel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] mm: hugetlb: make the hugetlb migration strategy
consistent
On 2024/3/6 16:46, Oscar Salvador wrote:
> On Wed, Mar 06, 2024 at 04:35:26PM +0800, Baolin Wang wrote:
>> On 2024/2/28 16:41, Oscar Salvador wrote:
>
>>>  	if (folio_test_hugetlb(src)) {
>>>  		struct hstate *h = folio_hstate(src);
>>> +		bool allow_fallback = false;
>>> +
>>> +		if ((1UL << reason) & HTLB_ALLOW_FALLBACK)
>>> +			allow_fallback = true;
>>
>> IMHO, users should also not need to be aware of this hugetlb logic.
>
> Note that what I wrote there was ugly, because it was just a PoC.
>
> It could be a helper, e.g.:
>
> if (hugetlb_reason_allow_alloc_fallback(reason)) (or whatever)
> 	allow_fallback_alloc = true;
>
>>>
>>>  		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
>>>  		return alloc_hugetlb_folio_nodemask(h, nid,
>>> -						mtc->nmask, gfp_mask);
>>> +						mtc->nmask, gfp_mask,
>>> +						allow_fallback);
>>
>> 'allow_fallback' can be confusing: it means "allow fallback" for a new
>> temporary hugetlb allocation, not "allow fallback" for an allocation from
>> the existing hugetlb pool in alloc_hugetlb_folio_nodemask().
>
> Well, you could pick "alloc_fallback_on_alloc", which is more descriptive, I
> guess.
Seems better.
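For reference, a minimal sketch of what such a helper could look like (the
name, its placement and the exact set of migrate reasons below are only
illustrative assumptions, not the final patch):

	/*
	 * Sketch only: which MR_* reasons are allowed to break the per-node
	 * hugetlb pool is exactly the policy question under discussion here.
	 */
	static inline bool htlb_allow_alloc_fallback(int reason)
	{
		switch (reason) {
		case MR_MEMORY_HOTPLUG:		/* node going away */
		case MR_MEMORY_FAILURE:		/* poisoned page must move */
		case MR_SYSCALL:
		case MR_MEMPOLICY_MBIND:
			/* allow allocating a new huge page on another node */
			return true;
		default:
			/* stick to the preferred node for new allocations */
			return false;
		}
	}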
>
> Bottom line is that I do not think that choosing whether to allow
> falling back here spreads more logic around than having
> htlb_modify_alloc_mask() here and not directly in
> alloc_hugetlb_folio_nodemask().
>
> As I said, code-wise it looks fine; it is just that having to pass
> the 'reason' all the way down and making the decision there makes
> me go "meh..".
Well, fair enough :) Let me respin it. Thanks.
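For context, the corresponding call site in alloc_migration_target() would
then look roughly like the following sketch; the extra parameter and a
'reason' field in struct migration_target_control are assumptions pending the
respin:

	if (folio_test_hugetlb(src)) {
		struct hstate *h = folio_hstate(src);

		/* decision stays in hugetlb code, caller just forwards it */
		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
		return alloc_hugetlb_folio_nodemask(h, nid, mtc->nmask,
				gfp_mask,
				htlb_allow_alloc_fallback(mtc->reason));
	}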