Message-ID: <CAAhSdy2y+DfV0b7dG_V43AL_MVz2R+LzEsE0y8YuiJY_EBeabg@mail.gmail.com>
Date: Mon, 5 Aug 2019 15:37:12 +0530
From: Anup Patel <anup@...infault.org>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Anup Patel <Anup.Patel@....com>,
Palmer Dabbelt <palmer@...ive.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Radim K <rkrcmar@...hat.com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Thomas Gleixner <tglx@...utronix.de>,
Atish Patra <Atish.Patra@....com>,
Alistair Francis <Alistair.Francis@....com>,
Damien Le Moal <Damien.LeMoal@....com>,
Christoph Hellwig <hch@...radead.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v2 11/19] RISC-V: KVM: Implement VMID allocator
On Fri, Aug 2, 2019 at 2:49 PM Paolo Bonzini <pbonzini@...hat.com> wrote:
>
> On 02/08/19 09:48, Anup Patel wrote:
> > +struct kvm_vmid {
> > + unsigned long vmid_version;
> > + unsigned long vmid;
> > +};
> > +
>
> Please document that both fields are written under vmid_lock, and read
> outside it.
Sure, I will add comments in asm/kvm_host.h.
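Something along these lines, perhaps (just a sketch, exact wording
to be finalized in the next revision):

    struct kvm_vmid {
        /*
         * Both vmid_version and vmid are written only with
         * vmid_lock held, but they are read outside the lock,
         * so readers must use READ_ONCE()/WRITE_ONCE().
         */
        unsigned long vmid_version;
        unsigned long vmid;
    };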
>
> > + /*
> > + * On SMP we know no other CPUs can use this CPU's or
> > + * each other's VMID after forced exit returns since the
> > + * vmid_lock blocks them from re-entry to the guest.
> > + */
> > + force_exit_and_guest_tlb_flush(cpu_all_mask);
>
> Please use kvm_flush_remote_tlbs(kvm) instead. All you need to do to
> support it is check for KVM_REQ_TLB_FLUSH and handle it by calling
> __kvm_riscv_hfence_gvma_all. Also, since your spinlock is global you
> probably should release it around the call to kvm_flush_remote_tlbs.
> (Think of an implementation that has a very small number of VMID bits).
Sure, I will use kvm_flush_remote_tlbs() here.
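Roughly like this, I think (a sketch of your suggestion; I am assuming
the request check sits in the VCPU run loop next to the other request
handling):

    if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
        __kvm_riscv_hfence_gvma_all();

and in the allocator I will drop the global vmid_lock around the
remote flush, rechecking the VMID state once the lock is reacquired:

    spin_unlock(&vmid_lock);
    kvm_flush_remote_tlbs(kvm);
    spin_lock(&vmid_lock);
    /* Another CPU may have bumped vmid_version meanwhile; recheck. */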
>
> > + if (unlikely(vmid_next == 0)) {
> > + WRITE_ONCE(vmid_version, READ_ONCE(vmid_version) + 1);
> > + vmid_next = 1;
> > + /*
> > + * On SMP we know no other CPUs can use this CPU's or
> > + * each other's VMID after forced exit returns since the
> > + * vmid_lock blocks them from re-entry to the guest.
> > + */
> > + force_exit_and_guest_tlb_flush(cpu_all_mask);
> > + }
> > +
> > + vmid->vmid = vmid_next;
> > + vmid_next++;
> > + vmid_next &= (1 << vmid_bits) - 1;
> > +
> > + /* Ensure VMID next update is completed */
> > + smp_wmb();
>
> This barrier is not necessary. Writes to vmid->vmid need not be ordered
> with writes to vmid->vmid_version, because the accesses happen in
> completely different places.
Yes, you're right. There is already a WRITE_ONCE() after it.
>
> (As a rule of thumb, each smp_wmb() should have a matching smp_rmb()
> somewhere, and this one doesn't).
Sure, thanks for the hint.
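For the archive, the canonical pairing would look something like this
(illustrative only, not the actual patch code):

    CPU 0 (writer):
        WRITE_ONCE(data, val);
        smp_wmb();      /* order the data store before the flag store */
        WRITE_ONCE(flag, 1);

    CPU 1 (reader):
        if (READ_ONCE(flag)) {
            smp_rmb();  /* pairs with the writer's smp_wmb() */
            val = READ_ONCE(data);
        }

Since there is no such reader-side barrier here, the smp_wmb() can go.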
>
> Paolo
>
> > + WRITE_ONCE(vmid->vmid_version, READ_ONCE(vmid_version));
> > +
Regards,
Anup