Date:   Sat, 30 Apr 2022 19:38:29 +0800
From:   Gavin Shan <gshan@...hat.com>
To:     Oliver Upton <oupton@...gle.com>
Cc:     kvmarm@...ts.cs.columbia.edu, linux-kernel@...r.kernel.org,
        eauger@...hat.com, Jonathan.Cameron@...wei.com,
        vkuznets@...hat.com, will@...nel.org, shannon.zhaosl@...il.com,
        james.morse@....com, mark.rutland@....com, maz@...nel.org,
        pbonzini@...hat.com, shan.gavin@...il.com
Subject: Re: [PATCH v6 03/18] KVM: arm64: Add SDEI virtualization
 infrastructure

Hi Oliver,

On 4/29/22 4:28 AM, Oliver Upton wrote:
> On Sun, Apr 24, 2022 at 11:00:56AM +0800, Gavin Shan wrote:
> 
> [...]
> 
>> Yes, the assumption that all events are always signaled by software should
>> hold. So this field (@signaled) can be dropped as well. I plan to change
>> the data structures as below, according to your suggestions. Please double
>> check whether anything is missing.
>>
>> (1) Those fields of struct kvm_sdei_exposed_event are dropped or merged
>>      to struct kvm_sdei_event.
>>
>>      struct kvm_sdei_event {
>>             unsigned int          num;
>>             unsigned long         ep_addr;
>>             unsigned long         ep_arg;
>> #define KVM_SDEI_EVENT_STATE_REGISTERED         0
>> #define KVM_SDEI_EVENT_STATE_ENABLED            1
>> #define KVM_SDEI_EVENT_STATE_UNREGISTER_PENDING 2
>>             unsigned long         state;                 /* accessed by {test,set,clear}_bit() */
>>             unsigned long         event_count;
>>      };
>>
>> (2) In arch/arm64/kvm/sdei.c
>>
>>      static struct kvm_sdei_event exposed_events[] = {
>>             { .num = SDEI_SW_SIGNALED_EVENT },
>>      };
>>
>> (3) In arch/arm64/kvm/sdei.c::kvm_sdei_create_vcpu(), the SDEI events
>>      are instantiated based on @exposed_events[]. It's just what we're
>>      doing and nothing is changed.
> 
> The part I find troubling is the fact that we are treating SDEI events
> as a list-like thing. If we want to behave more like hardware, why can't
> we track the state of an event in bitmaps? There are three bits of
> relevant state for any given event in the context of a vCPU: registered,
> enabled, and pending.
> 
> I'm having some second thoughts about the suggestion to use MP state for
> this, given that we need to represent a few bits of state for the vCPU
> as well. Seems we need to track the mask state of a vCPU and a bit to
> indicate whether an SDEI handler is active. You could put these bits in
> kvm_vcpu_arch::flags, actually.
> 
> So maybe it could be organized like so:
> 
>    /* bits for the bitmaps below */
>    enum kvm_sdei_event {
>    	KVM_SDEI_EVENT_SW_SIGNALED = 0,
> 	KVM_SDEI_EVENT_ASYNC_PF,
> 	...
> 	NR_KVM_SDEI_EVENTS,
>    };
> 
>    struct kvm_sdei_event_handler {
>    	unsigned long ep_addr;
> 	unsigned long ep_arg;
>    };
> 
>    struct kvm_sdei_event_context {
>    	unsigned long pc;
> 	unsigned long pstate;
> 	unsigned long regs[18];
>    };
> 
>    struct kvm_sdei_vcpu {
>    	unsigned long registered;
> 	unsigned long enabled;
> 	unsigned long pending;
> 
> 	struct kvm_sdei_event_handler handlers[NR_KVM_SDEI_EVENTS];
> 	struct kvm_sdei_event_context ctxt;
>    };
> 
> But it is hard to really talk about these data structures w/o a feel for
> the mechanics of working the series around it.
> 

Thank you for the comments and details. Using bitmaps to represent the
event states should work. I will adopt your proposed structs in the next
respin. However, more states are needed, so I would adjust
"struct kvm_sdei_vcpu" as below.

     struct kvm_sdei_vcpu {
         unsigned long registered;    /* the event is registered or not                 */
         unsigned long enabled;       /* the event is enabled or not                    */
         unsigned long unregistering; /* the event is pending for unregistration        */
         unsigned long pending;       /* the event is pending for delivery and handling */
         unsigned long active;        /* the event is currently being handled           */

         :
         <this part is just like what you suggested>
     };

I renamed @pending to @unregistering. Besides, two states are added:

    @pending: Indicates an event has been injected. The next step for the
              event is to deliver it for handling. For any particular
              event, we allow at most one pending instance.
    @active:  Indicates the event is currently being handled. The
              information stored in the 'struct kvm_sdei_event_context'
              instance can be correlated with the event.

Furthermore, it's reasonable to put the vCPU mask state into the 'flags'
field of struct kvm_vcpu_arch :)
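To make the intended lifecycle concrete, here is a minimal userspace sketch of the bitmap-based state machine, under the assumptions above. This is not kernel code: the struct and function names are stand-ins for the proposed kvm_sdei_vcpu / kvm_sdei_event_context, plain bitwise operations replace the kernel's {test,set,clear}_bit() helpers, and nesting is disallowed as discussed.

```c
/* Hypothetical event numbers, mirroring the enum proposed in the thread. */
enum { EV_SW_SIGNALED = 0, NR_EVENTS };

struct event_context {            /* stands in for kvm_sdei_event_context */
	unsigned long pc, pstate;
	unsigned long regs[18];
};

struct sdei_vcpu {                /* stands in for kvm_sdei_vcpu */
	unsigned long registered, enabled, pending, active;
	struct event_context ctxt;
};

/*
 * Deliver a pending event: move it from @pending to @active and save
 * the interrupted context. Since nesting is disallowed, delivery is
 * refused while any event is active.
 */
static int sdei_deliver(struct sdei_vcpu *v, const struct event_context *cur)
{
	int ev;

	if (v->active)
		return -1;                 /* a handler is already running */

	for (ev = 0; ev < NR_EVENTS; ev++) {
		unsigned long bit = 1UL << ev;

		if (v->pending & bit) {
			v->pending &= ~bit;
			v->active |= bit;
			v->ctxt = *cur;    /* saved until EVENT_COMPLETE */
			return ev;
		}
	}
	return -1;                         /* nothing pending */
}

/* SDEI_EVENT_COMPLETE: clear @active and hand back the saved context. */
static void sdei_complete(struct sdei_vcpu *v, struct event_context *out)
{
	*out = v->ctxt;
	v->active = 0;
}
```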

>>>>> Do we need this if we disallow nesting events?
>>>>>
>>>>
>>>> Yes, we need this. "event == NULL" is used as an indication of an
>>>> invalid context. @event is the associated SDEI event when the context
>>>> is valid.
>>>
>>> What if we use some other plumbing to indicate the state of the vCPU? MP
>>> state comes to mind, for example.
>>>
>>
>> Even if the indication is done by another state, kvm_sdei_vcpu_context
>> still needs to be linked (associated) with the event. Once the vCPU context
>> becomes valid after the event is delivered, we still need to know the
>> associated event when some hypercalls are triggered. SDEI_1_0_FN_SDEI_EVENT_COMPLETE
>> is one example: we need to decrease struct kvm_sdei_event::event_count
>> for that hypercall.
> 
> Why do we need to keep track of how many times an event has been
> signaled? Nothing in SDEI seems to suggest that the number of event
> signals corresponds to the number of times the handler is invoked. In
> fact, the documentation on SDEI_EVENT_SIGNAL corroborates this:
> 
> """
> The event has edge-triggered semantics and the number of event signals
> may not correspond to the number of times the handler is invoked in the
> target PE.
> """
> 
> DEN0054C 5.1.16.1
> 
> So perhaps we queue at most 1 pending event for the guest.
> 
> I'd also like to see if anyone else has thoughts on the topic, as I'd
> hate for you to go back to the whiteboard again in the next spin.
> 

Agreed. In the next respin, we will have at most one pending event. An
error will be returned if the user attempts to inject an event whose
pending state (struct kvm_sdei_vcpu::pending) has already been set.
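The at-most-one-pending rule could look roughly like the following. This is a userspace model only, not actual KVM code; the struct layout, function name, and the -EBUSY return value are assumptions for illustration:

```c
#include <errno.h>

/* Hypothetical per-vCPU pending bitmap; one bit per SDEI event. */
struct sdei_vcpu {
	unsigned long pending;
};

/*
 * Sketch of event injection: a second signal for an event whose pending
 * bit is already set is rejected, matching the "at most one pending
 * event" outcome from the discussion.
 */
static int sdei_event_signal(struct sdei_vcpu *v, int event)
{
	unsigned long bit = 1UL << event;

	if (v->pending & bit)
		return -EBUSY;   /* assumed error code; the real one may differ */

	v->pending |= bit;
	return 0;
}
```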

Indeed, the hardest part is determining the data structures and
functions we need. Oliver, your valuable comments are helping to
bring this series onto the right track. However, I do think it would
be helpful if somebody else could confirm the outcomes of the previous
discussions. I'm not sure whether Marc has time for a quick scan and
some comments.

Let me summarize the outcomes of our discussions, to help Marc
or others confirm them:

- Drop support for shared events.
- Drop support for critical events.
- All events in the implementation are private and can be signaled
   (raised) by software.
- Drop migration support for now; we will consider it using
   pseudo firmware registers, so add-on patches are expected to support
   migration in the future.
- Drop the locking mechanism. All functions are executed in vCPU context.
- Use the data structures you suggested. Besides, the vCPU's mask
   state is put into struct kvm_vcpu_arch::flags.
   enum kvm_sdei_event
   struct kvm_sdei_event_handler
   struct kvm_sdei_event_context
   struct kvm_sdei_vcpu

Thanks,
Gavin
