Message-ID: <963ba9b4-6ddf-39bc-85cf-2feef542029d@nvidia.com>
Date: Fri, 16 Dec 2022 14:56:47 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Sidhartha Kumar <sidhartha.kumar@...cle.com>
CC: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<songmuchun@...edance.com>, <mike.kravetz@...cle.com>,
<willy@...radead.org>
Subject: Re: [PATCH mm-unstable] mm: move folio_set_compound_order() to
mm/internal.h
On 12/16/22 14:27, Andrew Morton wrote:
> On Tue, 13 Dec 2022 13:20:53 -0800 Sidhartha Kumar <sidhartha.kumar@...cle.com> wrote:
>
>> folio_set_compound_order() is moved to an mm-internal location so external
>> folio users cannot misuse this function. Change the name of the function
>> to folio_set_order() and use WARN_ON_ONCE() rather than BUG_ON. Also,
>> handle the case if a non-large folio is passed and add clarifying comments
>> to the function.
>>
>
> This differs from the version I previously merged:
>
> --- a/mm/internal.h~mm-move-folio_set_compound_order-to-mm-internalh-update
> +++ a/mm/internal.h
> @@ -384,8 +384,10 @@ int split_free_page(struct page *free_pa
> */
> static inline void folio_set_order(struct folio *folio, unsigned int order)
> {
> - if (WARN_ON_ONCE(!folio_test_large(folio)))
> + if (!folio_test_large(folio)) {
> + WARN_ON_ONCE(order);
> return;
> + }
I think that's out of date?
We eventually settled on the version that is (as of a few minutes ago)
already in mm-unstable (commit fdea060a130d: "mm: move
folio_set_compound_order() to mm/internal.h"), which has it like this:
if (WARN_ON_ONCE(!folio_test_large(folio)))
return;
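For completeness, the full helper in that commit looks roughly like this
(quoting from memory, so the _folio_nr_pages line in particular may not be
byte-for-byte exact):

    static inline void folio_set_order(struct folio *folio, unsigned int order)
    {
    	if (WARN_ON_ONCE(!folio_test_large(folio)))
    		return;

    	folio->_folio_order = order;
    #ifdef CONFIG_64BIT
    	folio->_folio_nr_pages = order ? 1U << order : 0;
    #endif
    }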
>
> folio->_folio_order = order;
> #ifdef CONFIG_64BIT
>
> Makes sense. But wouldn't
>
> if (WARN_ON_ONCE(order && !folio_test_large(folio)))
>
> be clearer?
That's a slightly narrower check, but maybe that's desirable. Could
someone (Mike, Muchun, Sidhartha) comment on which behavior is
preferable, please? I think I'm a little dizzy at this point. :)
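In case it helps anyone reason about the difference, here is a quick
standalone sketch (a userspace mock for illustration only, not the real
mm/internal.h code; struct folio, folio_test_large() and WARN_ON_ONCE()
below are stand-ins) of how the two checks behave for a non-large folio:

    /* Userspace mock, for illustration only. */
    #include <stdbool.h>
    #include <stdio.h>

    struct folio { bool large; unsigned int order; };

    static bool folio_test_large(const struct folio *folio)
    {
    	return folio->large;
    }

    /* Rough stand-in for the kernel's WARN_ON_ONCE(): warn once per call site. */
    #define WARN_ON_ONCE(cond) ({						\
    	static bool __warned;						\
    	bool __c = (cond);						\
    	if (__c && !__warned) {						\
    		__warned = true;					\
    		fprintf(stderr, "WARN at %s:%d\n", __FILE__, __LINE__);	\
    	}								\
    	__c;								\
    })

    /* Check as it is in mm-unstable now: warn on any non-large folio. */
    static void set_order_current(struct folio *folio, unsigned int order)
    {
    	if (WARN_ON_ONCE(!folio_test_large(folio)))
    		return;
    	folio->order = order;
    }

    /* Andrew's narrower check: only warn when a non-zero order is requested. */
    static void set_order_narrower(struct folio *folio, unsigned int order)
    {
    	if (WARN_ON_ONCE(order && !folio_test_large(folio)))
    		return;
    	folio->order = order;
    }

    int main(void)
    {
    	struct folio small = { .large = false };

    	set_order_current(&small, 0);	/* warns and bails out */
    	set_order_narrower(&small, 0);	/* silent, falls through to the store */
    	return 0;
    }

So with the narrower condition, calling folio_set_order() with order == 0
on a non-large folio neither warns nor returns early, which is the
behavioral difference I'd like the folks above to weigh in on.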
thanks,
--
John Hubbard
NVIDIA