[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <abd2c34a-fdd9-4ef8-0679-7b4156da689d@oracle.com>
Date: Thu, 27 Oct 2016 12:23:20 -0400
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Andrew Cooper <andrew.cooper3@...rix.com>, david.vrabel@...rix.com,
JGross@...e.com
Cc: xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
roger.pau@...rix.com, Jan Beulich <jbeulich@...e.com>
Subject: Re: [Xen-devel] [PATCH 8/8] xen/pvh: Enable CPU hotplug
On 10/27/2016 11:00 AM, Andrew Cooper wrote:
> On 27/10/16 15:25, Boris Ostrovsky wrote:
>>
>>
>> On 10/14/2016 03:01 PM, Boris Ostrovsky wrote:
>>> On 10/14/2016 02:41 PM, Andrew Cooper wrote:
>>>> On 14/10/16 19:05, Boris Ostrovsky wrote:
>>>>> PVH guests don't receive ACPI hotplug interrupts and therefore
>>>>> need to monitor xenstore for CPU hotplug event.
>>>> Why not? If they don't, they should. As we are providing ACPI anyway,
>>>> we should provide all bits of it.
>>>
>>> We don't have IOAPIC, which is how these interrupts are typically
>>> delivered. I suppose we might be able to specify it as something else.
>>>
>>> I'll look into this.
>>
>>
>> (+Jan)
>>
>> Yes, we can do this. The main issue is how to deal with event
>> registers (i.e FADT.x_pm1a_evt_blk) and AML's PRST region (which
>> specifies online CPU map).
>>
>> Currently these are accessed via IO space and are handled by qemu.
>>
>> There are a couple of ways to deal with this that I can see.
>>
>> 1. We can implement ioreq handling in the hypervisor, there are only a
>> few addresses that need handling.
>>
>> 2. We can implement those registers in memory space and have libxl
>> update them them on a hotplug command. This appears to be possible
>> because these registers mostly just consume writes without side
>> effects so they can be simple memory locations. The one exception is
>> updating status bits (they are cleared by writing 1s) but I think we
>> can do this from the AML.
>>
>> Other than that the only other thing is setting up an event channel
>> between the toolstack and the guest (either via xenstore or perhaps by
>> having a reserved port for SCI).
>>
>> I have a prototype with (2) (except for the bit clearing part) but I
>> want to hear comments on this approach before I write proper patches.
>
> Xen already deals with 1 for HVM guests. We should do the same for PVH
> guests as well.
OK. The reason I was a little hesitant to go that route was because I
didn't want to add more code to hypervisor.
But this is quite simpler than (2) --- we can keep all ACPI data common
with HVM.
>
> -1 to anything involving looping a PVH dom0 back around to some entity
> running inside dom0.
This hotplug path would only be used by an unprivileged domain (i.e. the
one whose ACPI data is created by the toolstack). But TBH I haven't
thought about dom0 at all. And for that (1) is probably the only option.
-boris
Powered by blists - more mailing lists