Message-ID: <ZqwKuzfAs7pvdHAN@x1n>
Date: Thu, 1 Aug 2024 18:22:51 -0400
From: Peter Xu <peterx@...hat.com>
To: James Houghton <jthoughton@...gle.com>
Cc: kalyazin@...zon.com, Marc Zyngier <maz@...nel.org>,
Oliver Upton <oliver.upton@...ux.dev>,
James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Zenghui Yu <yuzenghui@...wei.com>,
Sean Christopherson <seanjc@...gle.com>,
Shuah Khan <shuah@...nel.org>,
Axel Rasmussen <axelrasmussen@...gle.com>,
David Matlack <dmatlack@...gle.com>, kvm@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
roypat@...zon.co.uk, Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH 14/18] KVM: Add asynchronous userfaults,
KVM_READ_USERFAULT
On Mon, Jul 29, 2024 at 02:09:16PM -0700, James Houghton wrote:
> > A more general question is, it looks like Userfaultfd's main purpose was
> > to support the postcopy use case [2], yet it fails to do that
> > efficiently for large VMs. Would it be ideologically better to try to
> > improve Userfaultfd's performance (similar to how it was attempted in
> > [3]) or is that something you have already looked into and reached a
> > dead end as a part of [4]?
>
> My end goal with [4] was to take contention out of the vCPU +
> userfault path completely (so, if we are taking a lock exclusively, we
> are the only one taking it). I came to the conclusion that the way to
> do this that made the most sense was Anish's memory fault exits idea.
> I think it's possible to make userfaults scale better themselves, but
> it's much more challenging than the memory fault exits approach for
> KVM (and I don't have a good way to do it in mind).
>
> > [1] https://lore.kernel.org/lkml/4AEFB823.4040607@redhat.com/T/
> > [2] https://lwn.net/Articles/636226/
> > [3] https://lore.kernel.org/lkml/20230905214235.320571-1-peterx@redhat.com/
> > [4]
> > https://lore.kernel.org/linux-mm/CADrL8HVDB3u2EOhXHCrAgJNLwHkj2Lka1B_kkNb0dNwiWiAN_Q@mail.gmail.com/
Thanks for the link to [3]. Just to mention, I still remember having more
ideas at the time for generic userfaultfd optimizations on top of that
one, such as using more than one fault queue rather than a single one.
Maybe that could also help, maybe not.
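
To illustrate what I mean by ">1 queues" (purely a conceptual userspace
sketch with hypothetical names, not what the kernel side would actually
look like), the idea is to hash fault events across several queues so
that producers and consumers only contend on one queue's lock:

    /* Hypothetical sketch: N fault queues instead of one. */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define NR_FAULT_QUEUES 8   /* assumption: small power of two */

    struct fault_event {
            uint64_t addr;
            struct fault_event *next;
    };

    struct fault_queue {
            pthread_mutex_t lock;
            struct fault_event *head;
    };

    static struct fault_queue queues[NR_FAULT_QUEUES] = {
            [0 ... NR_FAULT_QUEUES - 1] = {
                    .lock = PTHREAD_MUTEX_INITIALIZER,
            },
    };

    /* Hash by faulting page so contention spreads across queues. */
    static struct fault_queue *queue_for(uint64_t addr)
    {
            return &queues[(addr >> 12) % NR_FAULT_QUEUES];
    }

    static void enqueue_fault(uint64_t addr)
    {
            struct fault_event *ev = malloc(sizeof(*ev));
            struct fault_queue *q = queue_for(addr);

            if (!ev)
                    return;
            ev->addr = addr;
            /* Only faults hashing to this queue contend here. */
            pthread_mutex_lock(&q->lock);
            ev->next = q->head;
            q->head = ev;
            pthread_mutex_unlock(&q->lock);
    }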
Even with that, I think it will be less scalable than vCPU exits for
sure.. but still, I am not yet convinced that this kind of speed is
strictly necessary, because the dominant postcopy overhead should be the
page movement itself, IMHO. Maybe there are lock scalability issues with
userfaultfd right now, but maybe those are fixable?
I'm not sure whether I'm right, but IMHO the performance here isn't the
critical part. Rather, it's that guest_memfd does not fit how userfault
is defined (requiring a mapping first, in the absence of an fd-based
extension), so it can indeed make sense, or at least be a valid choice,
to implement this in KVM if that's easier. So maybe things other than
the performance point matter more here.
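
For context on the "mapping first" point, classic userfaultfd
registration takes a virtual address range, so a VMA must exist before
faults on it can be tracked (a rough userspace sketch, error handling
trimmed, names from the uapi):

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int register_uffd(size_t len)
    {
            int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
            struct uffdio_api api = { .api = UFFD_API };
            struct uffdio_register reg;
            void *area;

            if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
                    return -1;

            /* The mapping must exist before it can be registered. */
            area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (area == MAP_FAILED)
                    return -1;

            reg.range.start = (unsigned long)area;
            reg.range.len = len;
            reg.mode = UFFDIO_REGISTER_MODE_MISSING;
            return ioctl(uffd, UFFDIO_REGISTER, &reg);
    }

Guest_memfd memory that is never mapped into the host has no such range
to hand to UFFDIO_REGISTER, which, as I understand it, is what makes a
KVM-side mechanism attractive there.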
Thanks,
--
Peter Xu