Message-ID: <CADrL8HVZxoO33F2UJBoTjF_SXpxyZmH=RTM5G3stgo_kRPjazA@mail.gmail.com>
Date: Wed, 29 May 2024 20:27:45 -0700
From: James Houghton <jthoughton@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Paolo Bonzini <pbonzini@...hat.com>,
Albert Ou <aou@...s.berkeley.edu>, Ankit Agrawal <ankita@...dia.com>,
Anup Patel <anup@...infault.org>, Atish Patra <atishp@...shpatra.org>,
Axel Rasmussen <axelrasmussen@...gle.com>, Bibo Mao <maobibo@...ngson.cn>,
Catalin Marinas <catalin.marinas@....com>, David Matlack <dmatlack@...gle.com>,
David Rientjes <rientjes@...gle.com>, Huacai Chen <chenhuacai@...nel.org>,
James Morse <james.morse@....com>, Jonathan Corbet <corbet@....net>, Marc Zyngier <maz@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>, Nicholas Piggin <npiggin@...il.com>,
Oliver Upton <oliver.upton@...ux.dev>, Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>, Raghavendra Rao Ananta <rananta@...gle.com>,
Ryan Roberts <ryan.roberts@....com>, Shaoqin Huang <shahuang@...hat.com>,
Shuah Khan <shuah@...nel.org>, Suzuki K Poulose <suzuki.poulose@....com>,
Tianrui Zhao <zhaotianrui@...ngson.cn>, Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Zenghui Yu <yuzenghui@...wei.com>, kvm-riscv@...ts.infradead.org, kvm@...r.kernel.org,
kvmarm@...ts.linux.dev, linux-arm-kernel@...ts.infradead.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-mips@...r.kernel.org,
linux-mm@...ck.org, linux-riscv@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, loongarch@...ts.linux.dev
Subject: Re: [PATCH v4 4/7] KVM: Move MMU lock acquisition for
test/clear_young to architecture

On Wed, May 29, 2024 at 2:55 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Wed, May 29, 2024, James Houghton wrote:
> > For the implementation of mmu_notifier_{test,clear}_young, the KVM
> > memslot walker used to take the MMU lock for us. Now make the
> > architectures take it themselves.
>
> Hmm, *forcing* architectures to take mmu_lock is a step backwards. Rather than
> add all of this churn, what about adding CONFIG_KVM_MMU_NOTIFIER_LOCKLESS, e.g.
>
> static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn,
> 							  unsigned long start,
> 							  unsigned long end,
> 							  gfn_handler_t handler)
> {
> 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
> 	const struct kvm_mmu_notifier_range range = {
> 		.start		= start,
> 		.end		= end,
> 		.handler	= handler,
> 		.on_lock	= (void *)kvm_null_fn,
> 		.flush_on_ret	= false,
> 		.may_block	= false,
> 		.lockless	= IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_LOCKLESS),
> 	};
>
> 	return __kvm_handle_hva_range(kvm, &range).ret;
> }

Thanks Sean, yes, this is a lot better. I will do this for v5.
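
For reference, here is a rough sketch of what the walker side could look
like with such a flag. This is illustrative only and not from any posted
patch: it assumes __kvm_handle_hva_range() keeps its current "take
mmu_lock lazily on the first real handler call" structure, simplifies the
return type to a plain bool, and uses a hypothetical
for_each_memslot_gfn_range() helper in place of the real memslot/gfn
iteration:

/*
 * Illustrative sketch, not a posted patch: skip mmu_lock entirely when
 * the caller marked the range as lockless.  for_each_memslot_gfn_range()
 * is a hypothetical stand-in for the real memslot/gfn walk, and the
 * return type is simplified to bool.
 */
static __always_inline bool __kvm_handle_hva_range(struct kvm *kvm,
				const struct kvm_mmu_notifier_range *range)
{
	struct kvm_gfn_range gfn_range;
	bool ret = false, locked = false;

	for_each_memslot_gfn_range(kvm, range, &gfn_range) {
		/*
		 * Take mmu_lock lazily, on the first call into a real
		 * handler, unless the range was flagged as lockless.
		 */
		if (!range->lockless && !locked &&
		    !IS_KVM_NULL_FN(range->handler)) {
			locked = true;
			KVM_MMU_LOCK(kvm);
			if (!IS_KVM_NULL_FN(range->on_lock))
				range->on_lock(kvm);
		}

		ret |= range->handler(kvm, &gfn_range);
	}

	if (range->flush_on_ret && ret)
		kvm_flush_remote_tlbs(kvm);

	if (locked)
		KVM_MMU_UNLOCK(kvm);

	return ret;
}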