Message-ID: <59751932-92c3-5397-0706-13c4e1b38aa5@oracle.com>
Date:   Wed, 2 Dec 2020 20:12:50 +0000
From:   Joao Martins <joao.m.martins@...cle.com>
To:     David Woodhouse <dwmw2@...radead.org>,
        Ankur Arora <ankur.a.arora@...cle.com>
Cc:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 10/39] KVM: x86/xen: support upcall vector

On 12/2/20 7:02 PM, David Woodhouse wrote:
> On Wed, 2020-12-02 at 18:34 +0000, Joao Martins wrote:
>> On 12/2/20 4:47 PM, David Woodhouse wrote:
>>> On Wed, 2020-12-02 at 13:12 +0000, Joao Martins wrote:
>>>> On 12/2/20 11:17 AM, David Woodhouse wrote:
>>> For the VMM
>>> API I think we should follow the Xen model, mixing the domain-wide and
>>> per-vCPU configuration. It's the best way to faithfully model the
>>> behaviour a true Xen guest would experience.
>>>
>>> So KVM_XEN_ATTR_TYPE_CALLBACK_VIA can be used to set one of
>>>  • HVMIRQ_callback_vector, taking a vector#
>>>  • HVMIRQ_callback_gsi for the in-kernel irqchip, taking a GSI#
>>>
>>> And *maybe* in a later patch it could also handle
>>>  • HVMIRQ_callback_gsi for split-irqchip, taking an eventfd
>>>  • HVMIRQ_callback_pci_intx, taking an eventfd (or a pair, for EOI?)
>>>
>>
>> Most of the Xen versions we cared about had callback_vector and the per-vCPU
>> callback vector (despite Linux not using the latter). But if you're going back
>> to 3.2 and 4.1 (or supporting certain Windows drivers), I suppose
>> gsi and pci-intx are must-haves.
> 
> Not sure about GSI, but PCI-INTX is definitely something I've seen in
> active use by customers recently. I think SLES10 will use that.
> 

Some of the Windows drivers we used were relying on GSI.
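
(For reference, and going from memory so worth double-checking against Xen's
public/hvm/params.h: all three callback types are encoded in a single 64-bit
HVM_PARAM_CALLBACK_IRQ value, with the delivery type in the top byte.
Something like the helpers below; the macro names are mine, not Xen's.)

/*
 * Sketch of the HVM_PARAM_CALLBACK_IRQ ("callback_via") encoding,
 * from memory:
 *
 *   via[63:56] == 0: via[55:0] is a GSI number
 *   via[63:56] == 1: via[55:0] is a PCI INTx line
 *                    (Domain = via[47:32], Bus = via[31:16],
 *                     DevFn = via[15:8], IntX = via[1:0])
 *   via[63:56] == 2: via[7:0] is a vector number
 *                    (needs XENFEAT_hvm_callback_vector)
 */
#include <stdint.h>

#define CB_TYPE_GSI       0ULL
#define CB_TYPE_PCI_INTX  1ULL
#define CB_TYPE_VECTOR    2ULL

static inline uint64_t callback_via_gsi(uint32_t gsi)
{
	return (CB_TYPE_GSI << 56) | gsi;
}

static inline uint64_t callback_via_vector(uint8_t vector)
{
	return (CB_TYPE_VECTOR << 56) | vector;
}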

I don't know what kernel SLES10 ships, but Linux has been aware of
XENFEAT_hvm_callback_vector since 2.6.35, i.e. for about 10 years.
Unless some out-of-tree patch opts it out, I suppose.
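
(Roughly what the guest-side registration looks like, going from memory of
arch/x86/xen/enlighten_hvm.c; names and error handling abbreviated, so treat
this as a sketch rather than the exact code:)

/*
 * Sketch: a vector-type callback_via is advertised to the hypervisor
 * with an HVMOP_set_param hypercall on HVM_PARAM_CALLBACK_IRQ.
 */
static int xen_set_callback_via(uint64_t via)
{
	struct xen_hvm_param a = {
		.domid = DOMID_SELF,
		.index = HVM_PARAM_CALLBACK_IRQ,
		.value = via,
	};

	return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
}

static void xen_setup_callback_vector(void)
{
	/* type 2 (vector) in the top byte, IDT vector in the low byte */
	uint64_t via = (2ULL << 56) | HYPERVISOR_CALLBACK_VECTOR;

	if (xen_feature(XENFEAT_hvm_callback_vector) &&
	    xen_set_callback_via(via))
		pr_err("Xen: failed to set HVM callback vector\n");
}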

> 
>> But I kinda have mixed feelings about having the kernel handle the whole event-channel
>> ABI, as opposed to only the channels userspace asked to offload. It looks a tad unnecessary,
>> beyond the gain for VMMs that don't want to care about the internals of event channels.
>> Performance-wise it wouldn't bring anything better. But maybe the former is reason
>> enough to consider it.
> 
> Yeah, we'll see. Especially when it comes to implementing FIFO event
> channels, I'd rather just do it in one place — and if the kernel does
> it anyway then it's hardly difficult to hook into that.
> 
Fortunately that's Xen 4.3 and up, *I think* :) (the FIFO event channels)

And Linux is the only user I'm aware of, IIRC.

> But I've been about as coherent as I can be in email, and I think we're
> generally aligned on the direction. 

Yes, definitely.

> I'll do some more experiments and
> see what I can get working, and what it looks like.
> 
> I'm focusing on making the shinfo stuff all use kvm_map_gfn() first.
> 
I was chatting with Ankur, and we can't quite remember why we dropped using
kvm_vcpu_map()/kvm_map_gfn(). We were using kvm_vcpu_map(), but at the time the new guest
mapping series was still under discussion, so we dropped those until it settled in.

One "side effect" on mapping shared_info with kvm_vcpu_map, is that we have to loop all
vcpus unless we move shared_info elsewhere IIRC. But switching vcpu_info, vcpu_time_info
(and steal clock) to kvm_vcpu_map is trivial.. at least based on our old wip branches here.
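
Something along these lines, going from memory of those branches
(kvm_vcpu_map()/kvm_vcpu_unmap() signatures as in the current tree, untested,
so just a sketch):

#include <linux/kvm_host.h>

/*
 * Sketch: map the guest's vcpu_info page through kvm_vcpu_map() instead
 * of pinning it by hand. Error handling trimmed.
 */
static int kvm_xen_map_vcpu_info(struct kvm_vcpu *vcpu, gpa_t gpa,
				 struct kvm_host_map *map)
{
	int ret = kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), map);

	if (ret)
		return ret;

	/* map->hva now points at the guest's vcpu_info. */
	return 0;
}

static void kvm_xen_unmap_vcpu_info(struct kvm_vcpu *vcpu,
				    struct kvm_host_map *map)
{
	/* 'true': mark the page dirty, the kernel writes event state into it. */
	kvm_vcpu_unmap(vcpu, map, true);
}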

	Joao
