Message-ID: <4e0b6e81-b987-487e-b582-4d61aec9252d@redhat.com>
Date:   Fri, 1 Dec 2017 16:15:37 +0100
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Radim Krčmář <rkrcmar@...hat.com>,
        Fabian Grünbichler <f.gruenbichler@...xmox.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        kvm@...r.kernel.org, Andrea Arcangeli <aarcange@...hat.com>,
        Jérôme Glisse <jglisse@...hat.com>
Subject: Re: [PATCH 2/2] TESTING! KVM: x86: add invalidate_range mmu notifier

On 30/11/2017 19:05, Radim Krčmář wrote:
> Does roughly what kvm_mmu_notifier_invalidate_page did before.
> 
> I am not certain why this would be needed.  It might mean that we have
> another bug with start/end or just that I missed something.

I don't think this is needed, because we don't have shared page tables:
KVM builds its own page tables for the guest rather than walking the
host's.  My understanding is that without shared page tables you can
assume that all page modifications are covered by
invalidate_range_start/end.  With shared page tables there are
additional hardware TLB flushes to take care of, and those are what
require invalidate_range.
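
For context, the calling convention on the mm side looks roughly like
this (my sketch based on the mmu_notifier_* wrappers in
include/linux/mmu_notifier.h, not code taken from the patch):

	/* the primary MM is about to change PTEs in [start, end) */
	mmu_notifier_invalidate_range_start(mm, start, end);

	/* ... PTEs are cleared or changed under the page table lock ... */

	/*
	 * Only a secondary MMU that walks the CPU page tables directly,
	 * and therefore has its own hardware TLB of those entries,
	 * needs this extra flush while the PTEs are in flux:
	 */
	mmu_notifier_invalidate_range(mm, start, end);

	mmu_notifier_invalidate_range_end(mm, start, end);

Note also that invalidate_range_end ends up invoking the
invalidate_range callback internally, so the standalone call matters
only when the flush has to happen before the primary TLB flush.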

Thanks,

Paolo

> Please try just [1/2] first and apply this one only if [1/2] still
> hits the bug, thanks!
> ---
>  virt/kvm/kvm_main.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index b7f4689e373f..0825ea624f16 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -342,6 +342,29 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>  	srcu_read_unlock(&kvm->srcu, idx);
>  }
>  
> +static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
> +						    struct mm_struct *mm,
> +						    unsigned long start,
> +						    unsigned long end)
> +{
> +	struct kvm *kvm = mmu_notifier_to_kvm(mn);
> +	int need_tlb_flush = 0, idx;
> +
> +	idx = srcu_read_lock(&kvm->srcu);
> +	spin_lock(&kvm->mmu_lock);
> +	kvm->mmu_notifier_seq++;
> +	need_tlb_flush = kvm_unmap_hva_range(kvm, start, end);
> +	need_tlb_flush |= kvm->tlbs_dirty;
> +	if (need_tlb_flush)
> +		kvm_flush_remote_tlbs(kvm);
> +
> +	spin_unlock(&kvm->mmu_lock);
> +
> +	kvm_arch_mmu_notifier_invalidate_range(kvm, start, end);
> +
> +	srcu_read_unlock(&kvm->srcu, idx);
> +}
> +
>  static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  						    struct mm_struct *mm,
>  						    unsigned long start,
> @@ -476,6 +499,7 @@ static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
>  }
>  
>  static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
> +	.invalidate_range	= kvm_mmu_notifier_invalidate_range,
>  	.invalidate_range_start	= kvm_mmu_notifier_invalidate_range_start,
>  	.invalidate_range_end	= kvm_mmu_notifier_invalidate_range_end,
>  	.clear_flush_young	= kvm_mmu_notifier_clear_flush_young,
> 
