Message-ID: <ef688903-ff49-ffeb-1f95-ef995942d5dc@redhat.com>
Date: Fri, 2 Aug 2019 11:19:26 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Anup Patel <Anup.Patel@....com>,
Palmer Dabbelt <palmer@...ive.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Radim K <rkrcmar@...hat.com>
Cc: Daniel Lezcano <daniel.lezcano@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Atish Patra <Atish.Patra@....com>,
Alistair Francis <Alistair.Francis@....com>,
Damien Le Moal <Damien.LeMoal@....com>,
Christoph Hellwig <hch@...radead.org>,
Anup Patel <anup@...infault.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2 11/19] RISC-V: KVM: Implement VMID allocator
On 02/08/19 09:48, Anup Patel wrote:
> +struct kvm_vmid {
> + unsigned long vmid_version;
> + unsigned long vmid;
> +};
> +
Please document that both fields are written under vmid_lock, and read
outside it.
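Concretely, the comment being asked for might read something like this (a sketch; the wording is mine, not from the patch, and the locking claim is exactly what the review asks the author to confirm):

```c
struct kvm_vmid {
	/*
	 * Both fields are written only while holding vmid_lock, but
	 * may be read without it, hence the READ_ONCE/WRITE_ONCE at
	 * the lockless access sites.
	 */
	unsigned long vmid_version;
	unsigned long vmid;
};
```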
> + /*
> + * On SMP we know no other CPUs can use this CPU's or
> + * each other's VMID after forced exit returns since the
> + * vmid_lock blocks them from re-entry to the guest.
> + */
> + force_exit_and_guest_tlb_flush(cpu_all_mask);
Please use kvm_flush_remote_tlbs(kvm) instead. All you need to do to
support it is check for KVM_REQ_TLB_FLUSH and handle it by calling
__kvm_riscv_hfence_gvma_all. Also, since your spinlock is global you
probably should release it around the call to kvm_flush_remote_tlbs.
(Think of an implementation that has a very small number of VMID bits).
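A rough, non-compilable sketch of the suggestion (kernel-internal APIs, so this only makes sense inside the kernel tree; the surrounding names are taken from the patch under review):

```c
	if (unlikely(vmid_next == 0)) {
		WRITE_ONCE(vmid_version, READ_ONCE(vmid_version) + 1);
		vmid_next = 1;

		/*
		 * Drop the global lock while every vCPU is kicked;
		 * kvm_flush_remote_tlbs raises KVM_REQ_TLB_FLUSH on
		 * all vCPUs and waits for them to exit guest mode.
		 */
		spin_unlock(&vmid_lock);
		kvm_flush_remote_tlbs(kvm);
		spin_lock(&vmid_lock);
	}

	/* ... and in the vcpu entry path, before re-entering the guest: */
	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
		__kvm_riscv_hfence_gvma_all();
```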
> + if (unlikely(vmid_next == 0)) {
> + WRITE_ONCE(vmid_version, READ_ONCE(vmid_version) + 1);
> + vmid_next = 1;
> + /*
> + * On SMP we know no other CPUs can use this CPU's or
> + * each other's VMID after forced exit returns since the
> + * vmid_lock blocks them from re-entry to the guest.
> + */
> + force_exit_and_guest_tlb_flush(cpu_all_mask);
> + }
> +
> + vmid->vmid = vmid_next;
> + vmid_next++;
> + vmid_next &= (1 << vmid_bits) - 1;
> +
> + /* Ensure VMID next update is completed */
> + smp_wmb();
This barrier is not necessary. Writes to vmid->vmid need not be ordered
with writes to vmid->vmid_version, because the accesses happen in
completely different places.
(As a rule of thumb, each smp_wmb() should have a matching smp_rmb()
somewhere, and this one doesn't).
Paolo
> + WRITE_ONCE(vmid->vmid_version, READ_ONCE(vmid_version));
> +