Date: Fri, 21 Jun 2024 11:16:31 +0100
From: Fuad Tabba <tabba@...gle.com>
To: David Hildenbrand <david@...hat.com>
Cc: David Rientjes <rientjes@...gle.com>, Sean Christopherson <seanjc@...gle.com>, 
	Jason Gunthorpe <jgg@...dia.com>, John Hubbard <jhubbard@...dia.com>, 
	Elliot Berman <quic_eberman@...cinc.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Shuah Khan <shuah@...nel.org>, Matthew Wilcox <willy@...radead.org>, maz@...nel.org, 
	kvm@...r.kernel.org, linux-arm-msm@...r.kernel.org, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org, 
	pbonzini@...hat.com
Subject: Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning

Hi David,

On Fri, Jun 21, 2024 at 10:10 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 21.06.24 10:54, Fuad Tabba wrote:
> > Hi David,
> >
> > On Fri, Jun 21, 2024 at 9:44 AM David Hildenbrand <david@...hat.com> wrote:
> >>
> >>>> Again from that thread, one of the most important aspects of guest_memfd is that VMAs
> >>>> are not required.  Stating the obvious, lack of VMAs makes it really hard to drive
> >>>> swap, reclaim, migration, etc. from code that fundamentally operates on VMAs.
> >>>>
> >>>>    : More broadly, no VMAs are required.  The lack of stage-1 page tables is nice to
> >>>>    : have; the lack of VMAs means that guest_memfd isn't playing second fiddle, e.g.
> >>>>    : it's not subject to VMA protections, isn't restricted to host mapping size, etc.
> >>>>
> >>>> [1] https://lore.kernel.org/all/Zfmpby6i3PfBEcCV@google.com
> >>>> [2] https://lore.kernel.org/all/Zg3xF7dTtx6hbmZj@google.com
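
To illustrate the quoted point for anyone following along: the conventional
host-side reclaim/migration/GUP paths reach a page by first resolving the VMA
covering a user virtual address, so memory that is never mmap'ed has no entry
point into them. A rough sketch of that conventional path (illustrative only,
not code from this series; vma_lookup()/follow_page() are just the usual
core-mm helpers):

	#include <linux/mm.h>

	/*
	 * Illustrative only: host-side page handling typically enters via
	 * the VMA covering a user address.  guest_memfd memory with no VMA
	 * gives VMA-driven swap/reclaim/migration code nothing to walk.
	 */
	static struct page *sketch_lookup_user_page(struct mm_struct *mm,
						    unsigned long addr)
	{
		struct vm_area_struct *vma;
		struct page *page = NULL;

		mmap_read_lock(mm);
		vma = vma_lookup(mm, addr);	/* no VMA -> no entry point */
		if (vma)
			page = follow_page(vma, addr, FOLL_GET);
		mmap_read_unlock(mm);

		return page;
	}
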
> >>>
> >>> I wonder if it might be more productive to also discuss this in one of
> >>> the PUCKs, ahead of LPC, in addition to trying to go over this in LPC.
> >>
> >> I don't know in which context you usually discuss that, but I could
> >> propose that as a topic in the bi-weekly MM meeting.
> >>
> >> This would, of course, be focused on the bigger MM picture: how to mmap,
> >> how to support huge pages, interaction with page pinning, ... So
> >> obviously more MM focused once we are in agreement that we want to
> >> support shared memory in guest_memfd and how to make that work with core-mm.
> >>
> >> Discussing if we want shared memory in guest_memfd might be better
> >> suited for a different, more CC/KVM specific meeting (likely the "PUCKs"
> >> mentioned here?).
> >
> > Sorry, I should have given more context on what a PUCK* is :) It's a
> > periodic (almost weekly) upstream call for KVM.
> >
> > [*] https://lore.kernel.org/all/20230512231026.799267-1-seanjc@google.com/
> >
> > But yes, having a discussion in one of the mm meetings ahead of LPC
> > would also be great. When do these meetings usually take place, so we
> > can try to coordinate across timezones?
>
> It's Wednesday, 9:00 - 10:00am PDT (GMT-7) every second week.
>
> If we're in agreement, we could (assuming there are no other planned
> topics) either use the slot next week (June 26) or the following one
> (July 10).
>
> Selfish as I am, I would prefer July 10, because I'll be on vacation
> next week and there would be little time to prepare.
>
> @David R., heads up that this might become a topic ("shared and private
> memory in guest_memfd: mmap, pinning and huge pages"), if people here
> agree that this is a direction worth heading.

Thanks for the invite! Tentatively July 10th works for me, but I'd
like to talk to the others who might be interested (pKVM, Gunyah, and
others) to see if that works for them. I'll get back to you shortly.

Cheers,
/fuad

> --
> Cheers,
>
> David / dhildenb
>
