Message-ID: <255facf6-1d44-44eb-9d7e-5abf13f54499@huawei.com>
Date: Tue, 19 Sep 2023 07:59:18 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Matthew Wilcox <willy@...radead.org>
CC: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <ying.huang@...el.com>,
<david@...hat.com>, Zi Yan <ziy@...dia.com>,
Mike Kravetz <mike.kravetz@...cle.com>, <hughd@...gle.com>
Subject: Re: [PATCH 0/6] mm: convert numa balancing functions to use a folio
On 2023/9/18 20:57, Matthew Wilcox wrote:
> On Mon, Sep 18, 2023 at 06:32:07PM +0800, Kefeng Wang wrote:
>> do_numa_page() only handles non-compound pages, and only PMD-mapped THP
>> is handled in do_huge_pmd_numa_page(), but large, PTE-mapped folios will
>> be supported, so let's convert more numa balancing functions to use/take
>> a folio in preparation for that; no functional change intended for now.
>>
>> Kefeng Wang (6):
>> sched/numa, mm: make numa migrate functions to take a folio
>> mm: mempolicy: make mpol_misplaced() to take a folio
>> mm: memory: make numa_migrate_prep() to take a folio
>> mm: memory: use a folio in do_numa_page()
>> mm: memory: add vm_normal_pmd_folio()
>> mm: huge_memory: use a folio in do_huge_pmd_numa_page()
>
> This all seems OK. It's kind of hard to review though because you change
> the same line multiple times. I think it works out better to go top-down
> instead of bottom-up. That is, start with do_numa_page() and pass
> &folio->page to numa_migrate_prep. Then do vm_normal_pmd_folio() followed
> by do_huge_pmd_numa_page(). Fourth would have been numa_migrate_prep(),
> etc. I don't want to ask you to redo the entire series, but please keep
> this in mind for future patch series.
>
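For reference, a minimal sketch (not the actual patch) of what the
vm_normal_pmd_folio() helper mentioned above could look like, assuming it
is simply a folio-returning wrapper around the existing
vm_normal_page_pmd():

struct folio *vm_normal_pmd_folio(struct vm_area_struct *vma,
				  unsigned long addr, pmd_t pmd)
{
	/* Reuse the existing page lookup, then convert to a folio. */
	struct page *page = vm_normal_page_pmd(vma, addr, pmd);

	if (page)
		return page_folio(page);
	return NULL;
}

do_huge_pmd_numa_page() could then call this helper instead of
vm_normal_page_pmd() and work on the folio directly.
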
> Also, it's nice to do things like remove the unnecessary 'extern' from
> function declarations when you change them from page to folio. And
> please try to stick to 80 columns; I know it's not always easy/possible.
>
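For example (the prototypes below are approximate, not copied from the
tree), the mpol_misplaced() conversion in patch 2 would drop the 'extern'
while switching the argument to a folio:

-extern int mpol_misplaced(struct page *page, struct vm_area_struct *vma,
-			   unsigned long addr);
+int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
+		   unsigned long addr);
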
Thanks for your review and suggestions, I will keep them in mind when
sending the new patch series. Thanks.