Message-ID: <d9dc54e8-080f-4dc3-b13e-b65248c25a56@amd.com>
Date: Fri, 8 Nov 2024 14:51:12 +0530
From: Shivank Garg <shivankg@....com>
To: Matthew Wilcox <willy@...radead.org>
Cc: x86@...nel.org, viro@...iv.linux.org.uk, brauner@...nel.org,
 jack@...e.cz, akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
 linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
 linux-api@...r.kernel.org, linux-arch@...r.kernel.org, kvm@...r.kernel.org,
 chao.gao@...el.com, pgonda@...gle.com, thomas.lendacky@....com,
 seanjc@...gle.com, luto@...nel.org, tglx@...utronix.de, mingo@...hat.com,
 bp@...en8.de, dave.hansen@...ux.intel.com, arnd@...db.de,
 pbonzini@...hat.com, kees@...nel.org, bharata@....com, nikunj@....com,
 michael.day@....com, Neeraj.Upadhyay@....com, linux-coco@...ts.linux.dev
Subject: Re: [RFC PATCH 0/4] Add fbind() and NUMA mempolicy support for KVM
 guest_memfd



On 11/7/2024 8:40 PM, Matthew Wilcox wrote:
> On Thu, Nov 07, 2024 at 02:24:20PM +0530, Shivank Garg wrote:
>> The folio allocation path from guest_memfd typically looks like this...
>>
>> kvm_gmem_get_folio
>>   filemap_grab_folio
>>     __filemap_get_folio
>>       filemap_alloc_folio
>>         __folio_alloc_node_noprof
>>           -> goes to the buddy allocator
>>
>> Hence, I am trying to add a version of filemap_alloc_folio() that takes an mpol.
> 
> It only takes that path if cpuset_do_page_mem_spread() is true.  Is the
> real problem that you're trying to solve that cpusets are being used
> incorrectly?
> 
> Backing up, it seems like you want to make a change to the page cache;
> you've had a long discussion with people who aren't the page cache
> maintainer; you all understand the pros and cons of everything; and
> here you are dumping a solution on me without talking to me.  Even
> though I was at Plumbers, you didn't find me to tell me I needed to go
> to your talk.
> 
> So you haven't explained a damned thing to me, and I'm annoyed at you.
> Do better.  Starting with your cover letter.

Hi Matthew,

I apologize for the misunderstanding and for not providing adequate context.

To clarify:
- You may recall this work from its earlier iteration, an IOCTL-based
  approach, on which you provided valuable review comments [1].
- I was not physically present at LPC. The discussion happened on the
  mailing list [2] and in hallway conversations with my colleagues who
  attended in Vienna.
- Based on that feedback, particularly the suggestion to consider
  fbind() as a more generic solution, we shifted to the current approach
  (a rough sketch of the intended interface is below).
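
For concreteness, here is a rough sketch of the interface shape we have
in mind. The name, argument order, and types are illustrative
assumptions for this RFC, not a settled ABI:

        /*
         * Hypothetical fbind() syscall: like mbind(2), but applies a NUMA
         * memory policy to a byte range of an open file (here, a
         * guest_memfd) rather than to a virtual address range.
         * Everything below is illustrative, not final.
         */
        long fbind(int fd, unsigned long mode,  /* MPOL_BIND, MPOL_INTERLEAVE, ... */
                   const unsigned long __user *nodemask,
                   unsigned long maxnode,
                   loff_t offset, size_t len,   /* file range to bind */
                   unsigned long flags);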

I posted this as an *RFC* specifically to gather feedback on the
feasibility of the approach and to make sure I'm heading in the right
direction.
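
To make the quoted point above concrete, the direction is roughly a
variant like the one below. The helper name and the plumbing are
assumptions of this RFC (simplified; CONFIG_NUMA guards and error
handling omitted), assuming folio_alloc_mpol() as the underlying
allocator entry point:

        /*
         * Sketch: a filemap_alloc_folio() variant that honours a
         * caller-supplied mempolicy (e.g. one attached to a guest_memfd)
         * instead of only the current task's policy. Illustrative only.
         */
        static struct folio *filemap_alloc_folio_mpol(gfp_t gfp,
                                                      unsigned int order,
                                                      struct mempolicy *mpol,
                                                      pgoff_t index)
        {
                if (mpol)
                        return folio_alloc_mpol(gfp, order, mpol,
                                                index, numa_node_id());
                return filemap_alloc_folio(gfp, order);
        }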

Would you be willing to help me understand:
1. What additional information would be helpful to you and other reviewers?
2. How can cpusets be used correctly to fix this? Your point about
   cpuset_do_page_mem_spread() is interesting, and I'll investigate it
   more thoroughly; my current reading of that path is sketched below.
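
For reference, this is my simplified reading of the allocation path you
are pointing at (paraphrased from mm/filemap.c and trimmed, so it may
not match the exact tree; in particular the mems_allowed retry loop is
omitted):

        /* Paraphrased/simplified -- not verbatim kernel code. */
        struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
        {
                if (cpuset_do_page_mem_spread()) {
                        /*
                         * Page-cache spreading: pick a node round-robin
                         * from the cpuset's allowed set and allocate there.
                         */
                        int n = cpuset_mem_spread_node();

                        return __folio_alloc_node(gfp, order, n);
                }
                /* Otherwise the task's own mempolicy applies. */
                return folio_alloc(gfp, order);
        }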

I'll work on improving the cover letter to better explain the problem space
and proposed solution.

Thank you for the valuable feedback.

[1] https://lore.kernel.org/linux-mm/ZuimLtrpv1dXczf5@casper.infradead.org
[2] https://lore.kernel.org/linux-mm/ZvEga7srKhympQBt@intel.com

Best regards,
Shivank
