Message-ID: <10ffac79-0dba-4c30-991e-f3ca2b5ff639@redhat.com>
Date: Fri, 8 Nov 2024 18:31:47 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Matthew Wilcox <willy@...radead.org>, Shivank Garg <shivankg@....com>
Cc: x86@...nel.org, viro@...iv.linux.org.uk, brauner@...nel.org,
jack@...e.cz, akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
linux-api@...r.kernel.org, linux-arch@...r.kernel.org, kvm@...r.kernel.org,
chao.gao@...el.com, pgonda@...gle.com, thomas.lendacky@....com,
seanjc@...gle.com, luto@...nel.org, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, dave.hansen@...ux.intel.com, arnd@...db.de, kees@...nel.org,
bharata@....com, nikunj@....com, michael.day@....com,
Neeraj.Upadhyay@....com, linux-coco@...ts.linux.dev,
Linux API <linux-api@...r.kernel.org>
Subject: Re: [RFC PATCH 0/4] Add fbind() and NUMA mempolicy support for KVM
guest_memfd

On 11/7/24 16:10, Matthew Wilcox wrote:
> On Thu, Nov 07, 2024 at 02:24:20PM +0530, Shivank Garg wrote:
>> The folio allocation path from guest_memfd typically looks like this...
>>
>> kvm_gmem_get_folio
>> filemap_grab_folio
>> __filemap_get_folio
>> filemap_alloc_folio
>> __folio_alloc_node_noprof
>> -> goes to the buddy allocator
>>
>> Hence, I am trying to have a version of filemap_alloc_folio() that takes an mpol.
>
> It only takes that path if cpuset_do_page_mem_spread() is true. Is the
> real problem that you're trying to solve that cpusets are being used
> incorrectly?

If it's false, the path is not very different: the allocation goes to
alloc_pages_noprof() instead. That does respect the process's policy,
but the policy is not customizable without mucking with state that is
global to the process.
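
To make the request concrete, the variant being discussed would look
roughly like this (a hypothetical, untested sketch; the name and the
fallback are invented for illustration, while folio_alloc_mpol() is the
existing mempolicy-aware allocator):

	/* Hypothetical: like filemap_alloc_folio(), but honor an explicit
	 * mempolicy instead of the process-wide one when one is given. */
	static struct folio *filemap_alloc_folio_mpol(gfp_t gfp,
						      unsigned int order,
						      struct mempolicy *mpol,
						      pgoff_t ilx)
	{
		if (mpol)
			return folio_alloc_mpol(gfp, order, mpol, ilx,
						numa_node_id());
		return filemap_alloc_folio(gfp, order);
	}

guest_memfd would pass its policy down from kvm_gmem_get_folio(), and
every other caller would keep the current behavior.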

Taking a step back: the problem is that a VM can be configured with
multiple guest-side NUMA nodes, each of which must pick its memory from
the right NUMA node on the host. Without a per-file operation this is
not possible for guest_memfd. The discussion was whether to use an
ioctl() or a new system call, and it ended with the idea of posting a
*proposal* asking for *comments* as to whether the system call would be
useful in general beyond KVM.
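
For reference, the interface proposed in the RFC mirrors mbind(2) but
takes a file descriptor instead of an address range; roughly (prototype
paraphrased from the series, details may differ):

	long fbind(unsigned int fd, unsigned long mode,
		   const unsigned long *nmask, unsigned long maxnode,
		   unsigned int flags);

	/* e.g. bind all future guest_memfd allocations to host node 1: */
	unsigned long nodemask = 1UL << 1;
	fbind(gmem_fd, MPOL_BIND, &nodemask, 2, 0);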

Commenting on the system call itself, I am not sure I like the
file_operations entry, though I understand that it's the simplest way to
implement this in an RFC series. It's a bit surprising that fbind() is
a total no-op for everything except KVM's guest_memfd.

Maybe whatever you pass to fbind() could be stored in the struct file *
and used as the default when creating VMAs: as if every mmap() were
followed by an mbind(), except that it would also do the right thing
with MAP_POPULATE, for example. Or maybe that's a horrible idea?
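
A rough sketch of what I mean, with the field and the mmap-time hookup
invented for illustration:

	/* fbind() stores the policy on the file... */
	int do_fbind(struct file *file, struct mempolicy *new)
	{
		struct mempolicy *old;

		old = xchg(&file->f_mempolicy, new);	/* hypothetical field */
		mpol_put(old);
		return 0;
	}

	/* ...and mmap() picks it up as the VMA's default policy
	 * (error handling omitted): */
	if (file->f_mempolicy)
		vma->vm_policy = mpol_dup(file->f_mempolicy);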

Adding linux-api to get input; original thread is at
https://lore.kernel.org/kvm/20241105164549.154700-1-shivankg@amd.com/.

Paolo

> Backing up, it seems like you want to make a change to the page cache,
> you've had a long discussion with people who aren't the page cache
> maintainer, and you all understand the pros and cons of everything,
> and here you are dumping a solution on me without talking to me, even
> though I was at Plumbers, you didn't find me to tell me I needed to go
> to your talk.
>
> So you haven't explained a damned thing to me, and I'm annoyed at you.
> Do better. Starting with your cover letter.
>