[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <56A82670.50506@gmail.com>
Date: Wed, 27 Jan 2016 10:07:44 +0800
From: Yang Zhang <yang.zhang.wz@...il.com>
To: "rkrcmar@...hat.com" <rkrcmar@...hat.com>
Cc: "Wu, Feng" <feng.wu@...el.com>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>
Subject: Re: [PATCH v3 1/4] KVM: Recover IRTE to remapped mode if the
interrupt is not single-destination
On 2016/1/27 2:22, rkrcmar@...hat.com wrote:
> 2016-01-26 09:44+0800, Yang Zhang:
>> On 2016/1/25 21:59, rkrcmar@...hat.com wrote:
>>> 2016-01-25 09:49+0800, Yang Zhang:
>>>> On 2016/1/22 21:31, rkrcmar@...hat.com wrote:
>>>>> 2016-01-22 10:03+0800, Yang Zhang:
>>>>>> Not so complicated. We can reuse the wake up vector and check whether the
>>>>>> interrupt is multicast when one of destination vcpu handles it.
>>>>>
>>>>> I'm not sure what you mean now ... I guess it is:
>>>>> - Deliver the interrupt to a guest VCPU and relay the multicast to other
>>>>> VCPUs. No, it's strictly worse than intercepting it in the host.
>>>>
>>>> It is still handled in host context not guest context. The wakeup event
>>>> cannot be consumed like posted event.
>>>
>>> Ok. ("when one of destination vcpu handles it" confused me into
>>> thinking that you'd like to handle it with the notification vector.)
>>
>> Sorry for my poor english. :(
>
> It's good. Ambiguity is hard to avoid if a reader doesn't want to
> assume only the most likely meaning.
>
>>>>> Also, if wakeup vector were used for wakeup and multicast, we'd be
>>>>> uselessly doing work, because we can't tell which reason triggered the
>>>>> interrupt before finishing one part -- using separate vectors for that
>>>>> would be a bit nicer.
>>>
>>> (imprecise -- we would always have to check for ON bit of all PIDs from
>>> blocked VCPUs, for the original meaning of wakeup vector, and always
>>
>> This is what KVM does currently.
>
> Yep.
>
>>> either read the PIRR or check for ON bit of all PIDs that encode
>>> multicast interrupts; then we have to clear ON bits for multicasts.)
>>
>> Also, most part of work is covered by current logic except checking the
>> multicast.
>
> We could reuse the setup that gets us to wakeup_handler, but there is
> nothing to share in the handler itself. Sharing a handler means that we
> always have to execute both parts.
I don't quite understand it. There is nothing need to be modified for
wakeup logic. The only thing we need to do is add the checking before
the vcpu pick up the pending interrupt(This is happened in VCPU context,
not in handler).
>
> We must create new PID anyway and compared to the extra work needed for
> multicast handling, a new vector + handler is a relatively small code
> investment that adds clarity to the design (and performance).
No new PID is needed. If the target vcpu is running, no additional work
is required in wakeup handler. If target vcpu is not running, the
current logic will wake up the vcpu, then let vcpu itself to check
whether pending interrupt is a multicast and handle it in vcpu's context.
>
> (Taking the vector splitting to the extreme, we'd improve performance if
> we added a vector per assigned device. That is practically the same as
> non-posted mode, just more complicated.)
>
>>> ---
>>> There might be a benefit of using posted interrupts for host interrupts
>>> when we run out of free interrupt vectors: we could start using vectors
>>> by multiple sources through posted interrupts, if using posted
>>
>> Do you mean per vcpu posted interrupts?
>
> I mean using posting for host device interrupts (no virt involved).
>
> Let's say we have 300 devices for one CPU and CPU has 200 useable
> vectors. We have 100 device interrupts that need to be shared in some
> vectors and using posting might be faster than directly checking
> multiple devices.
Yes, this is an good point.
--
best regards
yang
Powered by blists - more mailing lists