Message-ID: <40a793f9-c58f-7b0e-5835-d83eed9f6ba0@redhat.com>
Date:   Wed, 30 Sep 2020 08:28:26 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>,
        Ben Gardon <bgardon@...gle.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Cannon Matthews <cannonmatthews@...gle.com>,
        Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
        Peter Feiner <pfeiner@...gle.com>,
        Junaid Shahid <junaids@...gle.com>,
        Jim Mattson <jmattson@...gle.com>,
        Yulei Zhang <yulei.kernel@...il.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 07/22] kvm: mmu: Support zapping SPTEs in the TDP MMU

On 30/09/20 08:15, Sean Christopherson wrote:
>>  	kvm_zap_obsolete_pages(kvm);
>> +
>> +	if (kvm->arch.tdp_mmu_enabled)
>> +		kvm_tdp_mmu_zap_all(kvm);
> 
> Haven't looked into how this works; is kvm_tdp_mmu_zap_all() additive to
> what is done by the legacy zapping, or is it a replacement?

It's additive because the shadow MMU is still used for nesting.
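
Concretely, per the hunk above, the zap path does both back to back
(sketch with comments mine, surrounding locking elided):

	/* Shadow MMU: still needed for nested guests. */
	kvm_zap_obsolete_pages(kvm);

	/* TDP MMU: zaps the direct, non-nested mappings on top of that. */
	if (kvm->arch.tdp_mmu_enabled)
		kvm_tdp_mmu_zap_all(kvm);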

>> +
>>  	spin_unlock(&kvm->mmu_lock);
>>  }
>> @@ -57,8 +58,13 @@ bool is_tdp_mmu_root(struct kvm *kvm, hpa_t hpa)
>>  	return root->tdp_mmu_page;
>>  }
>>  
>> +static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
>> +			  gfn_t start, gfn_t end);
>> +
>>  static void free_tdp_mmu_root(struct kvm *kvm, struct kvm_mmu_page *root)
>>  {
>> +	gfn_t max_gfn = 1ULL << (boot_cpu_data.x86_phys_bits - PAGE_SHIFT);
> 
> BIT_ULL(...)

Not sure about that.  Here the point is not to have a single bit set,
but to compute a power of two.  Same for the version below.
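
For illustration (not actual patch code): the two spellings are
numerically identical, since BIT_ULL(n) is (1ULL << n); the difference
is what they communicate to the reader:

	/* reads as "a mask with one bit set": */
	u64 mask = BIT_ULL(boot_cpu_data.x86_phys_bits - PAGE_SHIFT);

	/* reads as "2^N, i.e. the number of guest frame numbers": */
	gfn_t max_gfn = 1ULL << (boot_cpu_data.x86_phys_bits - PAGE_SHIFT);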

>> + * If the MMU lock is contended or this thread needs to yield, flushes
>> + * the TLBs, releases the MMU lock, yields, reacquires the MMU lock,
>> + * restarts the tdp_iter's walk from the root, and returns true.
>> + * If no yield is needed, returns false.
>> + */
>> +static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
>> +{
>> +	if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>> +		kvm_flush_remote_tlbs(kvm);
>> +		cond_resched_lock(&kvm->mmu_lock);
>> +		tdp_iter_refresh_walk(iter);
>> +		return true;
>> +	} else {
>> +		return false;
>> +	}
> 
> Kernel style is to not bother with an "else" if the "if" returns.

I have rewritten all of this in my version anyway. :)
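
For what it's worth, the style fix alone would look something like this
(a sketch only, not my actual rewrite):

	static bool tdp_mmu_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
	{
		/* Nothing to do: no scheduler request, no lock contention. */
		if (!need_resched() && !spin_needbreak(&kvm->mmu_lock))
			return false;

		kvm_flush_remote_tlbs(kvm);
		cond_resched_lock(&kvm->mmu_lock);
		tdp_iter_refresh_walk(iter);
		return true;
	}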

Paolo
