Message-ID: <Ym1EztjkJIHrg4Qz@google.com>
Date: Sat, 30 Apr 2022 14:16:46 +0000
From: Oliver Upton <oupton@...gle.com>
To: Gavin Shan <gshan@...hat.com>
Cc: kvmarm@...ts.cs.columbia.edu, linux-kernel@...r.kernel.org,
eauger@...hat.com, Jonathan.Cameron@...wei.com,
vkuznets@...hat.com, will@...nel.org, shannon.zhaosl@...il.com,
james.morse@....com, mark.rutland@....com, maz@...nel.org,
pbonzini@...hat.com, shan.gavin@...il.com
Subject: Re: [PATCH v6 03/18] KVM: arm64: Add SDEI virtualization
infrastructure

Hi Gavin,

On Sat, Apr 30, 2022 at 07:38:29PM +0800, Gavin Shan wrote:
> Thank you for the comments and details. It should work by using bitmaps
> to represent the events' states. I will adopt your proposed structs in
> the next respin. However, more states are needed, so I would adjust
> "struct kvm_sdei_vcpu" as below in the next respin.
>
> struct kvm_sdei_vcpu {
> unsigned long registered; /* the event is registered or not */
> unsigned long enabled; /* the event is enabled or not */
> unsigned long unregistering; /* the event is pending for unregistration */
I'm not following why we need to keep track of the 'pending unregister'
state directly. Is it not possible to infer from (active && !registered)?
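If so, a trivial helper would do. Rough sketch against the bitmaps
above (the helper name is just illustrative):

  static inline bool
  kvm_sdei_unregister_pending(struct kvm_sdei_vcpu *vsdei, unsigned int event)
  {
          /* 'pending unregister' falls out of the other two bitmaps */
          return test_bit(event, &vsdei->active) &&
                 !test_bit(event, &vsdei->registered);
  }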
> unsigned long pending; /* the event is pending for delivery and handling */
> unsigned long active; /* the event is currently being handled */
>
> :
> <this part is just like what you suggested>
> };
>
> I renamed @pending to @unregistering. Besides, two states are added:
>
> @pending: Indicates an event has been injected. The next step
>           for the event is to deliver it for handling. For one particular
>           event, we allow at most one pending instance.
Right, if an event retriggers when it is pending we still dispatch a
single event to the guest. And since we're only doing normal priority
events, it is entirely implementation defined which gets dispatched
first.
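On the delivery side that means something like the below is sufficient
(a sketch; kvm_sdei_deliver() is a hypothetical helper):

  /* At most one dispatch, no matter how many times it was signaled */
  if (test_and_clear_bit(event, &vsdei->pending))
          kvm_sdei_deliver(vcpu, event);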
> @active: Indicates the event is currently being handled. The information
>          stored in the 'struct kvm_sdei_event_context' instance can be
>          correlated with the event.
Does this need to be a bitmap though? We can't ever have more than one
SDEI event active at a time since this is private to a vCPU.
> Furthermore, it's fair enough to put the (vCPU) mask state into the
> 'flags' field of struct kvm_vcpu_arch :)
I think you can get away with putting active in there too, I don't see
why we need more than a single bit for this info.
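i.e. something along these lines next to the existing flag definitions
(the bit positions here are purely illustrative):

  /* Sketch: single-bit SDEI state in vcpu->arch.flags */
  #define KVM_ARM64_SDEI_MASKED         (1 << 9)  /* events masked by the guest */
  #define KVM_ARM64_SDEI_ACTIVE         (1 << 10) /* a handler is running */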
> > > > > > Do we need this if we disallow nesting events?
> > > > > >
> > > > >
> > > > > Yes, we need this. "event == NULL" is used as an indication of an
> > > > > invalid context. @event is the associated SDEI event when the
> > > > > context is valid.
> > > >
> > > > What if we use some other plumbing to indicate the state of the vCPU? MP
> > > > state comes to mind, for example.
> > > >
> > >
> > > Even if the indication is done by another state, kvm_sdei_vcpu_context
> > > still needs to be linked (associated) with the event. Once the vCPU
> > > context becomes valid after the event is delivered, we still need to
> > > know the associated event when some of the hypercalls are triggered.
> > > SDEI_1_0_FN_SDEI_EVENT_COMPLETE is one example: we need to decrease
> > > struct kvm_sdei_event::event_count for that hypercall.
> >
> > Why do we need to keep track of how many times an event has been
> > signaled? Nothing in SDEI seems to suggest that the number of event
> > signals corresponds to the number of times the handler is invoked. In
> > fact, the documentation on SDEI_EVENT_SIGNAL corroborates this:
> >
> > """
> > The event has edge-triggered semantics and the number of event signals
> > may not correspond to the number of times the handler is invoked in the
> > target PE.
> > """
> >
> > DEN0054C 5.1.16.1
> >
> > So perhaps we queue at most 1 pending event for the guest.
> >
> > I'd also like to see if anyone else has thoughts on the topic, as I'd
> > hate for you to go back to the whiteboard again in the next spin.
> >
>
> Agreed. In the next respin, we will have at most one pending event. An
> error can be returned if the user attempts to inject an event whose
> pending state (struct kvm_sdei_vcpu::pending) has already been set.
I don't believe we can do that. The SDEI_EVENT_SIGNAL call should succeed,
even if the event was already pending.
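Something like this on the signal path (a sketch, using the 'pending'
bitmap from your proposed struct):

  /*
   * Signaling an already-pending event is absorbed rather than
   * rejected; set_bit() on a set bit is a no-op, and we return
   * success either way.
   */
  set_bit(event, &vsdei->pending);
  return SDEI_SUCCESS;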
> Indeed, the hardest part is to determine the data structures and
> functions we need. Oliver, your valuable comments are helping to bring
> this series onto the right track. However, I do think it would be
> helpful if somebody else could confirm the outcomes of the previous
> discussions. I'm not sure if Marc has time for a quick scan and some
> comments.
>
> I would summarize the outcomes of our discussions, to help Marc or
> others confirm them:
Going to take a look at some of your later patches as well, just a heads
up.
> - Drop support for the shared event.
> - Drop support for the critical event.
> - The events in the implementation are all private and can be signaled
>   (raised) by software.
> - Drop migration support for now; we will consider it using
>   pseudo-firmware registers, so add-on patches are expected to support
>   migration in the future.
Migration will be supported in a future spin of this series, not a
follow-up series, right? :) I had just made the suggestion because there
were a lot of renovations we were discussing.
> - Drop the locking mechanism. All the functions are executed in vCPU context.
Well, not entirely. Just need to make sure atomics are used to post
events to another vCPU in the case of SDEI_EVENT_SIGNAL.
set_bit() fits the bill here, as we've discussed.
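For the cross-vCPU case that would look something like this (a sketch;
kvm_mpidr_to_vcpu() and kvm_vcpu_kick() are the existing KVM helpers,
the 'sdei' field is assumed from the discussion above):

  /* Post the software-signaled event (0) to the target vCPU */
  target = kvm_mpidr_to_vcpu(vcpu->kvm, target_mpidr);
  if (!target)
          return SDEI_INVALID_PARAMETERS;

  set_bit(0, &target->arch.sdei->pending);  /* atomic w.r.t. the target */
  kvm_vcpu_kick(target);                    /* make it notice the event */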
> - Use the data structs as you suggested. Besides, the vCPU's mask
>   state is put into struct kvm_vcpu_arch::flags.
> enum kvm_sdei_event
> struct kvm_sdei_event_handler
> struct kvm_sdei_event_context
> struct kvm_sdei_vcpu
>
> Thanks,
> Gavin
>
--
Thanks,
Oliver