Message-ID: <052867ae1c997487d85c21e995feb5647ac6c458.camel@infradead.org>
Date:   Wed, 02 Dec 2020 19:02:33 +0000
From:   David Woodhouse <dwmw2@...radead.org>
To:     Joao Martins <joao.m.martins@...cle.com>,
        Ankur Arora <ankur.a.arora@...cle.com>
Cc:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 10/39] KVM: x86/xen: support upcall vector

On Wed, 2020-12-02 at 18:34 +0000, Joao Martins wrote:
> On 12/2/20 4:47 PM, David Woodhouse wrote:
> > On Wed, 2020-12-02 at 13:12 +0000, Joao Martins wrote:
> > > On 12/2/20 11:17 AM, David Woodhouse wrote:
> > > > I might be more inclined to go for a model where the kernel handles the
> > > > evtchn_pending/evtchn_mask for us. What would go into the irq routing
> > > > table is { vcpu, port# } which get passed to kvm_xen_evtchn_send().
> > > 
> > > But wouldn't passing the port to the routing table and handling the
> > > sending of events lead to unnecessary handling of event channels
> > > which aren't handled by the kernel, compared to just caring about
> > > injecting the upcall?
> > 
> > Well, I'm generally in favour of *not* doing things in the kernel that
> > don't need to be there.
> > 
> > But if the kernel is going to short-circuit the IPIs and VIRQs, then
> > it's already going to have to handle the evtchn_pending/evtchn_mask
> > bitmaps, and actually injecting interrupts.
> > 
> 
> Right. I was trying to point that out in the discussion we had
> in the next patch. But truth be told, it was more about floating the
> idea of the kernel knowing whether a given event channel is registered
> for userspace handling, rather than fully handling the event channel.
> 
> I suppose we are able to provide both options to the VMM anyway,
> i.e. 1) letting it handle everything entirely in userspace by
> intercepting EVTCHNOP_send, or 2) going through the irq route if we
> want the kernel to offload it.

Right. The kernel takes what it knows about and anything else goes up
to userspace.

I do like the way you've handled the vcpu binding in userspace, and the
kernel just knows that a given port goes to a given target CPU.
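
Roughly what I'm imagining, for the sake of concreteness (a completely
untested, hand-wavy sketch -- apart from kvm_xen_evtchn_send() itself,
every struct field and helper name here is invented):

struct kvm_xen_evtchn {
	u32 port;
	u32 vcpu;	/* target vCPU, from the irq routing entry */
};

int kvm_xen_evtchn_send(struct kvm *kvm, struct kvm_xen_evtchn *e)
{
	struct shared_info *shinfo = kvm->arch.xen.shinfo;

	/* Ports the kernel doesn't know about bounce to userspace. */
	if (!test_bit(e->port, kvm->arch.xen.kernel_ports))
		return -ENOENT;

	/* Set evtchn_pending; deliver the upcall unless masked. */
	if (!test_and_set_bit(e->port,
			      (unsigned long *)shinfo->evtchn_pending) &&
	    !test_bit(e->port, (unsigned long *)shinfo->evtchn_mask))
		kvm_xen_inject_vcpu_vector(kvm, e->vcpu);

	return 0;
}

(The evtchn_pending_sel / evtchn_upcall_pending dance in the vcpu_info
is elided; that would live in the inject helper.)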

> 
> > For the VMM
> > API I think we should follow the Xen model, mixing the domain-wide and
> > per-vCPU configuration. It's the best way to faithfully model the
> > behaviour a true Xen guest would experience.
> > 
> > So KVM_XEN_ATTR_TYPE_CALLBACK_VIA can be used to set one of
> >  • HVMIRQ_callback_vector, taking a vector#
> >  • HVMIRQ_callback_gsi for the in-kernel irqchip, taking a GSI#
> > 
> > And *maybe* in a later patch it could also handle
> >  • HVMIRQ_callback_gsi for split-irqchip, taking an eventfd
> >  • HVMIRQ_callback_pci_intx, taking an eventfd (or a pair, for EOI?)
> > 
> 
> Most of the Xen versions we cared about had callback_vector and the
> per-vCPU callback vector (despite Linux not using the latter). But if
> you're going back to 3.2 and 4.1 (or certain Windows drivers), I
> suppose gsi and pci-intx are must-haves.

Not sure about GSI, but PCI-INTX is definitely something I've seen in
active use by customers recently. I think SLES10 will use that.

> I feel we could just accommodate it as a subtype in
> KVM_XEN_ATTR_TYPE_CALLBACK_VIA. I don't see the advantage in having
> another xen attr type.

Yeah, fair enough.
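
On the ABI side that could look something like this (just a strawman;
none of these field names exist today):

struct kvm_xen_hvm_attr {
	__u16 type;	/* KVM_XEN_ATTR_TYPE_CALLBACK_VIA */
	union {
		struct {
			__u8 via;	/* vector / gsi / pci_intx subtype */
			union {
				__u8 vector;	/* HVMIRQ_callback_vector */
				__u32 gsi;	/* HVMIRQ_callback_gsi */
				struct {
					__s32 trigger_fd; /* raise INTx */
					__s32 unmask_fd;  /* EOI/unmask */
				} pci_intx;
			} u;
		} callback_via;
	} u;
};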

> But I have mixed feelings about the kernel handling the entire event
> channel ABI, as opposed to only the channels userspace asked it to
> offload. It looks a tad unnecessary beyond the gain to VMMs that don't
> need to care about the internals of event channels, and
> performance-wise it wouldn't bring anything better. But maybe the
> former is reason enough to consider it.

Yeah, we'll see. Especially when it comes to implementing FIFO event
channels, I'd rather just do it in one place — and if the kernel does
it anyway then it's hardly difficult to hook into that.

But I've been about as coherent as I can be in email, and I think we're
generally aligned on the direction. I'll do some more experiments and
see what I can get working, and what it looks like.

I'm focusing on making the shinfo stuff all use kvm_map_gfn() first.
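
Something along these lines, if I'm remembering the kvm_map_gfn()
signature right (untested; the xen.shinfo/shinfo_cache fields are
placeholders):

static int kvm_xen_map_shinfo(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_host_map map;
	int ret;

	/* Map (and pin) the guest's shared_info page into the kernel. */
	ret = kvm_map_gfn(vcpu, gfn, &map,
			  &vcpu->kvm->arch.xen.shinfo_cache, false);
	if (ret)
		return ret;

	vcpu->kvm->arch.xen.shinfo = map.hva;
	return 0;
}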
