Message-Id: <8960b809-0faa-58e5-4839-b28a09f161d6@de.ibm.com>
Date: Wed, 22 Feb 2017 20:57:09 +0100
From: Christian Borntraeger <borntraeger@...ibm.com>
To: Radim Krčmář <rkrcmar@...hat.com>,
David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Andrew Jones <drjones@...hat.com>,
Marc Zyngier <marc.zyngier@....com>,
Cornelia Huck <cornelia.huck@...ibm.com>,
James Hogan <james.hogan@...tec.com>,
Paul Mackerras <paulus@...abs.org>,
Christoffer Dall <christoffer.dall@...aro.org>
Subject: Re: [PATCH 4/5] KVM: add __kvm_request_needs_mb
On 02/22/2017 04:17 PM, Radim Krčmář wrote:
>
[...]
> while (vcpu->arch.sie_block->prog0c & PROG_IN_SIE)
> cpu_relax();
> }
> And out of curiosity -- how many cycles does this loop usually take?
A quick hack indicates something between 3ns and 700ns.
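In case anyone wants to reproduce that: a timestamp around the wait loop in
exit_sie() is enough. Roughly like the sketch below -- the get_tod_clock_fast()
usage and the ns conversion are only illustrative, not necessarily the exact
hack used:

        /* illustrative only: time the busy-wait in exit_sie() */
        unsigned long start, end;

        atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
        start = get_tod_clock_fast();
        while (vcpu->arch.sie_block->prog0c & PROG_IN_SIE)
                cpu_relax();
        end = get_tod_clock_fast();
        /* one TOD clock unit is 1/4096 us, convert the delta to ns */
        trace_printk("exit_sie wait: %lu ns\n", ((end - start) * 1000) >> 12);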
>> 2. Remote requests that don't need a sync
>>
>> E.g. KVM_REQ_ENABLE_IBS doesn't strictly need it, while
>> KVM_REQ_DISABLE_IBS does.
>
> A usual KVM request would kick the VCPU out of nested virt as well.
> Shouldn't it be done for these, too?
A common code function probably should. For some of the cases (again,
prefix page handling) we do not need it. For example, if we unmap the
guest prefix page while guest^2 is running, this causes no trouble as
long as we handle the request before reentering guest^1. So there is
no easy answer.
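To make the asymmetry more concrete, in pseudo-code (glossing over
PROG_REQUEST and the exact helpers we use today; exit_sie() is the function
with the wait loop quoted above):

        /* no sync needed (e.g. ENABLE_IBS): the VCPU only has to see the
         * request before the next guest entry, so set it and kick */
        kvm_make_request(KVM_REQ_ENABLE_IBS, vcpu);
        atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);

        /* sync needed (e.g. DISABLE_IBS, prefix page unmap): additionally
         * wait until SIE has really been left before returning to the
         * caller, so the guest can no longer run with the old state */
        kvm_make_request(KVM_REQ_DISABLE_IBS, vcpu);
        exit_sie(vcpu);         /* kicks and busy-waits on PROG_IN_SIE */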
>
>> 3. local requests
>>
>> E.g. KVM_REQ_TLB_FLUSH from kvm_s390_set_prefix()
>>
>>
>> Of course, having a unified interface would be better.
>>
>> /* set the request and kick the CPU out of guest mode */
>> kvm_set_request(req, vcpu);
>>
>> /* set the request, kick the CPU out of guest mode, wait until guest
>> mode has been left and make sure the request will be handled before
>> reentering guest mode */
>> kvm_set_sync_request(req, vcpu);
>
> Sounds good, I'll also add
>
> kvm_set_self_request(req, vcpu);
>
>> Same maybe even for multiple VCPUs (as there are then ways to speed it
>> up, e.g. first kick all, then wait for all)
>>
>> This would require arch specific callbacks to
>> 1. pre announce the request (e.g. set PROG_REQUEST on s390x)
>> 2. kick the cpu (e.g. CPUSTAT_STOP_INT and later
>> kvm_s390_vsie_kick(vcpu) on s390x)
>> 3. check if still executing the guest (e.g. PROG_IN_SIE on s390x)
>>
>> This would only make sense if there are other use cases for sync
>> requests. At least I remember that Power also has a faster way for
>> kicking VCPUs, not involving SMP rescheds. I can't judge if this is a
>> s390x-only thing that is better left as is :)
>>
>> At least vcpu_kick() could be quite easily made to work on s390x.
>>
>> Radim, are there also other users that need something like sync requests?
>
> I think that ARM has a similar need when updating vgic, but relies on an
> assumption that VCPUs are going to be out after kicking them with
> kvm_make_all_cpus_request().
> (vgic_change_active_prepare in virt/kvm/arm/vgic/vgic-mmio.c)
>
> Having synchronous requests in a common API should probably wait for the
> completion of the request, not just for the kick, which would make race
> handling simpler.
This would be problematic for our prefix page handling due to locking.
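For completeness, the unified interface proposed above could look roughly
like the strawman below. The kvm_arch_vcpu_* callbacks and
kvm_set_sync_request_all() are made-up names, the comments only indicate
what s390x could plug in; note that this waits for the VCPU to leave guest
mode, not for the request to be handled:

        /* strawman only -- the three arch callbacks are invented names */
        void kvm_set_request(int req, struct kvm_vcpu *vcpu)
        {
                kvm_make_request(req, vcpu);
                kvm_arch_vcpu_kick_request(vcpu);       /* s390x: CPUSTAT_STOP_INT +
                                                         * kvm_s390_vsie_kick() */
        }

        void kvm_set_sync_request(int req, struct kvm_vcpu *vcpu)
        {
                kvm_arch_vcpu_pre_request(vcpu);        /* s390x: set PROG_REQUEST */
                kvm_make_request(req, vcpu);
                kvm_arch_vcpu_kick_request(vcpu);
                while (kvm_arch_vcpu_in_guest(vcpu))    /* s390x: PROG_IN_SIE */
                        cpu_relax();
        }

        /* multiple VCPUs: first kick all, then wait for all */
        void kvm_set_sync_request_all(int req, struct kvm *kvm)
        {
                struct kvm_vcpu *vcpu;
                int i;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        kvm_arch_vcpu_pre_request(vcpu);
                        kvm_make_request(req, vcpu);
                        kvm_arch_vcpu_kick_request(vcpu);
                }
                kvm_for_each_vcpu(i, vcpu, kvm)
                        while (kvm_arch_vcpu_in_guest(vcpu))
                                cpu_relax();
        }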