Message-ID: <17e66ddd-cd08-9749-a27b-ac81bf0d3c5d@suse.com>
Date: Fri, 15 Sep 2017 15:00:37 +0200
From: Juergen Gross <jgross@...e.com>
To: Andrew Cooper <andrew.cooper3@...rix.com>,
linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org
Cc: boris.ostrovsky@...cle.com, Jan Beulich <JBeulich@...e.com>
Subject: Re: [Xen-devel] [PATCH 4/4] xen: select grant interface version
On 13/09/17 11:23, Juergen Gross wrote:
> On 12/09/17 20:54, Andrew Cooper wrote:
>> On 08/09/17 15:48, Juergen Gross wrote:
>>> static void gnttab_request_version(void)
>>> {
>>> - int rc;
>>> + long rc;
>>> struct gnttab_set_version gsv;
>>>
>>> - gsv.version = 1;
>>> + rc = HYPERVISOR_memory_op(XENMEM_maximum_ram_page, NULL);
>>
>> This hypercall is an information leak and a layering violation. Please
>> can we avoid adding more dependence on its presence? (I've got a
>> proto-series which strips various corners off the hypervisor for attack
>> surface reduction purposes, and this hypercall is one victim which is
>> restricted to privileged domains only.)
>>
>> For translated guests, it is definitely not the right number to check.
>> What matters is the maximum frame inside the translated guest, not on
>> the host.
>
> Oh, right.
>
>> For PV guests, I'm not sure what to suggest, but the result of
>> XENMEM_maximum_ram_page isn't applicable. Xen's max_page can increase
>> at runtime through memory hotplug, after which ballooning operations can
>> leave Linux with a frame it wishes to grant which exceeds the limit
>> calculated here.
>
> We need a way to decide whether V2 is to be selected.
>
> Is there a way to determine the highest physical address available for
> memory hotplug on a system? Something in the ACPI tables perhaps?

So I've found the data I was searching for in the hypervisor: the maximum
frame number to expect can be calculated from max_page, mem_hotplug and
the maximum physical address width from CPUID leaf 0x80000008. If
CONFIG_BIGMEM isn't defined in Xen the limit is 16TB.
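
Just to sketch the guest side (rough sketch only; max_possible_mfn is
purely illustrative and not an existing interface):

/* Largest frame number a PV guest could conceivably be handed, derived
 * from the physical address width Linux already knows about (CPUID leaf
 * 0x80000008 EAX[7:0]); a real implementation would use whatever limit
 * the hypervisor ends up exposing instead. */
u64 max_possible_mfn;

max_possible_mfn =
	((u64)1 << (boot_cpu_data.x86_phys_bits - PAGE_SHIFT)) - 1;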

The question is how to present this value to a guest. IMHO something
like the maximum address width, similar to CPUID leaf 0x80000008,
would be fine. It could be the above width for PV guests and the
maximum memory address of the guest for HVM guests (adding a cap for
those wouldn't be the worst idea, I guess).

What about a new subop of the xen_version hypercall?
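
Whatever mechanism we end up with, the consumer in
gnttab_request_version() could then be as simple as the following rough
sketch (max_possible_mfn again stands in for whatever the new interface
returns; names are illustrative, and the v1/v2 split just reflects the
32-bit frame field of v1 grant entries):

static void gnttab_request_version(void)
{
	struct gnttab_set_version gsv;

	/* Frames beyond what a 32-bit v1 entry can express require the
	 * v2 interface, otherwise stay with v1. */
	gsv.version = (max_possible_mfn < (1ULL << 32)) ? 1 : 2;

	if (HYPERVISOR_grant_table_op(GNTTABOP_set_version, &gsv, 1) == 0)
		grant_table_version = gsv.version;
	else
		grant_table_version = 1;
}
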
Juergen