Message-ID: <f880cf51-8703-444c-ac7e-b89cc5816931@redhat.com>
Date: Mon, 17 Feb 2025 20:52:31 +0100
From: David Hildenbrand <david@...hat.com>
To: Shivank Garg <shivankg@....com>, akpm@...ux-foundation.org,
willy@...radead.org, pbonzini@...hat.com
Cc: linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-coco@...ts.linux.dev, chao.gao@...el.com, seanjc@...gle.com,
ackerleytng@...gle.com, vbabka@...e.cz, bharata@....com, nikunj@....com,
michael.day@....com, Neeraj.Upadhyay@....com, thomas.lendacky@....com,
michael.roth@....com, Shivansh Dhiman <shivansh.dhiman@....com>,
baolin.wang@...ux.alibaba.com
Subject: Re: [RFC PATCH v4 1/3] mm/filemap: add mempolicy support to the
filemap layer
>
> (1) As you noted later, shmem has unique requirements due to handling swapin.
> It does considerable open-coding.
> Initially, I was considering simplifying shmem, but that was not possible due
> to the above constraints.
> One option would be to add shmem's special cases in the filemap and check for
> them using shmem_mapping()?
> But, I don't understand the shmem internals well enough to determine if it is
> feasible.
>
Okay, thanks for looking into this.
> (2) I considered handling it manually in guest_memfd like shmem does, but this
> would lead to code duplication and more open-coding in guest_memfd. The current
> approach seems cleaner.
Okay, thanks.
>
>> Two tabs indent on second parameter line, please.
>>
> ..
>>
>> This should go below the variable declaration. (and indentation on second parameter line should align with the first parameter)
>>
> ..
>> "The mempolicy to apply when allocating a new folio." ?
>>
>
> I'll address all the formatting and documentation issues in next posting.
>
>>
>> For guest_memfd, where pages are unmovable and unswappable, the memory policy will never change later.
>>
>> shmem seems to handle the swap-in case, because it takes care of allocating pages in that case itself.
>>
>> For ordinary pagecache pages (movable), page migration would likely not be aware of the specified mpol; I assume the same applies to shmem?
>>
>> alloc_migration_target() seems to prefer the current nid (nid = folio_nid(src)), but apart from that, does not lookup any mempolicy.
>
> Page migration does handle the NUMA mempolicy using mtc (struct migration_target_control *)
> which takes node ID input and allocates on the "preferred" node id.
> The target node in migrate_misplaced_folio() is obtained using get_vma_policy(), so the
> per-VMA policy handles proper node placement for mapped pages.
> It uses the current nid (folio_nid(src)) only if NUMA_NO_NODE is passed.
>
> mempolicy.c provides alloc_migration_target_by_mpol(), which allocates according to
> the NUMA mempolicy and is used by do_mbind().
>
>>
>> compaction likely handles this by compacting within a node/zone.
>>
>> Maybe migration to the right target node on misplacement is handled at a higher level later (numa hinting faults -> migrate_misplaced_folio). Likely at least for anon memory, not sure about unmapped shmem.
>
> Yes.
Thanks, LGTM.
--
Cheers,
David / dhildenb