Message-Id: <20130316111330.6e077a421aa580d80dc08641@gmail.com>
Date: Sat, 16 Mar 2013 11:13:30 +0900
From: Takuya Yoshikawa <takuya.yoshikawa@...il.com>
To: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@...hat.com>,
Gleb Natapov <gleb@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH 5/5] KVM: MMU: fast invalid all mmio sptes
On Fri, 15 Mar 2013 23:29:53 +0800
Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com> wrote:
> +/*
> + * The caller should protect concurrent access on
> + * kvm->arch.mmio_invalid_gen. Currently, it is used by
> + * kvm_arch_commit_memory_region and protected by kvm->slots_lock.
> + */
> +void kvm_mmu_invalid_mmio_spte(struct kvm *kvm)
kvm_mmu_invalidate_mmio_sptes() may be a better name.
Thanks,
Takuya
> +{
> + /* Ensure update memslot has been completed. */
> + smp_mb();
> +
> + trace_kvm_mmu_invalid_mmio_spte(kvm);
> +
> + /*
> + * The very rare case: if the generation-number is round,
> + * zap all shadow pages.
> + */
> + if (unlikely(kvm->arch.mmio_invalid_gen++ == MAX_GEN)) {
> + kvm->arch.mmio_invalid_gen = 0;
> + return kvm_mmu_zap_all(kvm);
> + }
> +}
> +
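By the way, the trick itself is easy to demonstrate outside of KVM. The
stand-alone sketch below is not from the patch; mmio_cache_entry,
entry_is_valid() and invalidate_all() are made-up names, and MAX_GEN is an
arbitrary small constant. It only illustrates the idea: tag every cached
entry with the generation it was created under, so that bumping the
generation invalidates all of them at once, and fall back to an explicit
flush (the role kvm_mmu_zap_all() plays in the patch) only when the counter
wraps around.

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-ins; the real patch keeps the counter in
	 * kvm->arch.mmio_invalid_gen. */
	#define MAX_GEN 7	/* arbitrary small value for the sketch */

	struct mmio_cache_entry {
		unsigned int gen;	/* generation the entry was created under */
		bool present;
	};

	static unsigned int current_gen;

	/* An entry is usable only if it was created in the current generation. */
	static bool entry_is_valid(const struct mmio_cache_entry *e)
	{
		return e->present && e->gen == current_gen;
	}

	/* Bumping the generation invalidates every cached entry at once,
	 * without touching any of them.  On wrap-around, stale entries could
	 * alias a reused generation number, so everything is dropped
	 * explicitly instead. */
	static void invalidate_all(struct mmio_cache_entry *cache, int n)
	{
		if (current_gen++ == MAX_GEN) {
			current_gen = 0;
			for (int i = 0; i < n; i++)
				cache[i].present = false;
		}
	}

	int main(void)
	{
		struct mmio_cache_entry cache[2] = {
			{ .gen = current_gen, .present = true },
			{ .gen = current_gen, .present = true },
		};

		printf("before: %d %d\n", entry_is_valid(&cache[0]),
		       entry_is_valid(&cache[1]));
		invalidate_all(cache, 2);
		printf("after:  %d %d\n", entry_is_valid(&cache[0]),
		       entry_is_valid(&cache[1]));
		return 0;
	}

Built with plain gcc this prints "before: 1 1" then "after:  0 0": a single
increment is enough to invalidate every entry without walking any of them.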