Message-ID: <87h76z7twi.fsf@redhat.com>
Date:   Mon, 11 Apr 2022 13:15:41 +0200
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Michael Kelley <mikelley@...rosoft.com>,
        Siddharth Chandrasekaran <sidcha@...zon.de>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 03/31] KVM: x86: hyper-v: Handle
 HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST{,EX} calls gently

Sean Christopherson <seanjc@...gle.com> writes:

> On Thu, Apr 07, 2022, Vitaly Kuznetsov wrote:

...

Thanks a lot for the review! I'll incorporate your feedback into v3.

>>  
>>  static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>> @@ -1857,12 +1940,13 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
>>  	struct hv_tlb_flush_ex flush_ex;
>>  	struct hv_tlb_flush flush;
>>  	DECLARE_BITMAP(vcpu_mask, KVM_MAX_VCPUS);
>> +	u64 entries[KVM_HV_TLB_FLUSH_RING_SIZE - 2];
>
> What's up with the -2?

(This should probably be a define or at least a comment somewhere)

Normally, we can only put 'KVM_HV_TLB_FLUSH_RING_SIZE - 1' entries on
the ring, as read_idx == write_idx is perceived as 'ring is empty' and
not as 'ring is full'. For the TLB flush ring we must additionally leave
one free entry for a "flush all" request, so the writer is never blocked
when we run out of free space. I.e. when a request flies in, we check
whether there is enough space on the ring for all of its entries and,
if not, we just put 'flush all' there instead. In case 'flush all' is
already on the ring, ignoring the request is safe.

So, long story short, there's no point in fetching more than
'KVM_HV_TLB_FLUSH_RING_SIZE - 2' entries from the guest as we can't
possibly put them all on the ring.
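
To make the arithmetic concrete, here is a minimal sketch of the idea
(not the actual patch code; the helper name and the read_idx/write_idx
fields are made up for illustration):

	/* Free slots, given that read_idx == write_idx means 'empty'. */
	static int hv_tlb_flush_ring_free(int read_idx, int write_idx)
	{
		if (write_idx >= read_idx)
			return KVM_HV_TLB_FLUSH_RING_SIZE -
			       (write_idx - read_idx) - 1;

		return read_idx - write_idx - 1;
	}

	/* In the (hypothetical) enqueue path: */
	int free = hv_tlb_flush_ring_free(ring->read_idx, ring->write_idx);

	if (count < free) {
		/* Enqueue 'count' entries; one slot always stays spare. */
	} else {
		/*
		 * Not enough room: put a single 'flush all' entry instead
		 * (it fits thanks to the reserved slot), or do nothing if
		 * 'flush all' is already on the ring.
		 */
	}

With this reservation the largest batch that can ever fit is
(KVM_HV_TLB_FLUSH_RING_SIZE - 1) - 1 = KVM_HV_TLB_FLUSH_RING_SIZE - 2
entries, hence the size of 'entries[]' above.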

[snip]

-- 
Vitaly
