Date:	Fri, 18 Jul 2014 17:05:20 +0800
From:	Tang Chen <>
To:	Gleb Natapov <>
CC:	Jan Kiszka <>, <>,
	<>, <>,
	<>, <>,
	<>, <>
Subject: Re: [PATCH v2 5/5] kvm, mem-hotplug: Do not pin apic access page
 in memory.

Hi Gleb,

On 07/17/2014 09:57 PM, Gleb Natapov wrote:
> On Thu, Jul 17, 2014 at 09:34:20PM +0800, Tang Chen wrote:
>> Hi Gleb,
>> On 07/15/2014 08:40 PM, Gleb Natapov wrote:
>> ......
>>>> And yes, we have the problem you said here. We can migrate the page while L2
>>>> vm is running.
>>>> So I think we should enforce L2 vm to exit to L1. Right ?
>>> We can request APIC_ACCESS_ADDR reload during L2->L1 vmexit emulation, so
>>> if APIC_ACCESS_ADDR changes while L2 is running it will be reloaded for L1 too.
>> Sorry, I think I don't quite understand the procedure you are talking about
>> here.
>> Referring to the code, I think we have three machines: L0(host), L1 and L2.
>> And we have two types of vmexit: L2->L1 and L2->L0.  Right ?
>> We are now talking about this case: L2 and L1 shares the apic page.
>> Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify
>> L1,
>> and update L1's VMCS. At this time, we are in L0, not L2. Why cannot we
> Using patch 5/5, when apic page is migrated on L0, mmu_notifier will notify
> L1 or L2 VMCS depending on which one happens to be running right now.
> If it is L1 then L2's VMCS will be updated during vmentry emulation,

OK, this is easy to understand.

>if it is
> L2 we need to request reload during vmexit emulation to make sure L1's VMCS is
> updated.

I'm a little confused here. In patch 5/5, I called make_all_cpus_request()
to force all vcpus to exit to the host. If we are in L2, where will the
vcpus exit to? L1 or L0?

