Message-ID: <50A42A5E.5070905@linux.vnet.ibm.com>
Date: Thu, 15 Nov 2012 07:33:50 +0800
From: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
To: Marcelo Tosatti <mtosatti@...hat.com>
CC: Takuya Yoshikawa <takuya.yoshikawa@...il.com>, avi@...hat.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
qemu-devel@...gnu.org, owasserm@...hat.com, quintela@...hat.com,
pbonzini@...hat.com, chegu_vinod@...com, yamahata@...inux.co.jp
Subject: Re: [PATCH] KVM: MMU: lazily drop large spte
On 11/14/2012 10:44 PM, Marcelo Tosatti wrote:
> On Wed, Nov 14, 2012 at 12:33:50AM +0900, Takuya Yoshikawa wrote:
>> Ccing live migration developers who should be interested in this work,
>>
>> On Mon, 12 Nov 2012 21:10:32 -0200
>> Marcelo Tosatti <mtosatti@...hat.com> wrote:
>>
>>> On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote:
>>>> Do not drop a large spte until it can be replaced by small pages, so that
>>>> the guest can happily read memory through it.
>>>>
>>>> The idea is from Avi:
>>>> | As I mentioned before, write-protecting a large spte is a good idea,
>>>> | since it moves some work from protect-time to fault-time, so it reduces
>>>> | jitter. This removes the need for the return value.
>>>>
>>>> Signed-off-by: Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
>>>> ---
>>>> arch/x86/kvm/mmu.c | 34 +++++++++-------------------------
>>>> 1 files changed, 9 insertions(+), 25 deletions(-)
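(A rough sketch of the idea, for readers following along -- this is not the
actual patch; the helpers below only approximate the arch/x86/kvm/mmu.c
internals of this era, and their names and exact signatures are assumptions:)

/*
 * Sketch only, not the real patch.
 *
 * Protect-time: keep the large spte but make it read-only, so the guest
 * can still read through the 2MB mapping; no shadow pages are zapped here.
 */
static void write_protect_large_spte(u64 *sptep)
{
        u64 spte = *sptep;

        WARN_ON(!is_large_pte(spte));

        if (is_writable_pte(spte))
                mmu_spte_update(sptep, spte & ~PT_WRITABLE_MASK);
}

/*
 * Fault-time: only when the guest actually writes into the range is the
 * read-only large spte dropped; the fault path then maps the faulting 4K
 * page in its place, so the splitting cost is paid lazily per write fault.
 */
static void drop_large_spte_on_write_fault(struct kvm_vcpu *vcpu, u64 *sptep)
{
        if (is_large_pte(*sptep))
                drop_spte(vcpu->kvm, sptep);
        /* ... the normal fault path then installs the small spte ... */
}
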
>>>
>>> It's likely that other 4k pages are mapped read-write in the 2mb range
>>> covered by a read-only 2mb map. Therefore it's not entirely useful to
>>> map it read-only.
>>>
>>> Can you measure an improvement with this change?
>>
>> What we discussed at KVM Forum last week was about the jitter we could
>> measure right after starting live migration: both Isaku and Chegu reported
>> such jitter.
>>
>> So if this patch reduces such jitter for some real workloads, by lazily
>> dropping largepage mappings and avoiding read faults until then, that
>> would be very nice!
>>
>> But sadly, what they measured included interactions outside of the guest,
>> and they guessed that the main cause was the big QEMU lock problem.  The
>> scale is so different that an improvement from a kernel-side effort may
>> not be easy to see.
>>
>> FWIW: I am now changing the initial write protection done by
>> kvm_mmu_slot_remove_write_access() to be rmap-based, as I proposed at KVM
>> Forum.  ftrace showed that the change improves it from 1ms to 250-350us
>> for a 10GB guest.  My code still drops largepage mappings, so the initial
>> write protection time itself may not be such a big issue here, I think.
>>
>> Again, if we can eliminate read faults to such an extent that guests see a
>> measurable improvement, that would be very nice!
>>
>> Any thoughts?
>>
>> Thanks,
>> Takuya
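
Regarding the rmap-based kvm_mmu_slot_remove_write_access() rewrite Takuya
mentions above, the shape would be roughly as below.  This is only an
illustrative sketch, not Takuya's patch; the helper names and signatures
approximate mmu.c of this era, and the per-gfn loop is a simplification:

/*
 * Sketch: write-protect a whole memslot by walking its rmaps instead of
 * walking every active shadow page.  Simplified: the real code would
 * iterate each level's rmap array directly rather than per gfn.
 */
void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
{
        struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
        gfn_t gfn, last_gfn = memslot->base_gfn + memslot->npages - 1;
        int level;

        spin_lock(&kvm->mmu_lock);

        for (gfn = memslot->base_gfn; gfn <= last_gfn; gfn++)
                for (level = PT_PAGE_TABLE_LEVEL;
                     level < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; level++)
                        /* write-protect every spte on this gfn's rmap */
                        __rmap_write_protect(kvm,
                                             __gfn_to_rmap(gfn, level, memslot),
                                             false);

        spin_unlock(&kvm->mmu_lock);
        kvm_flush_remote_tlbs(kvm);
}
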
>
> OK, makes sense. I'm worried about shadow / oos interactions
> with large read-only mappings (trying to remember what the case
> was exactly; it might be non-existent now).
Marcelo, I guess commit 38187c830cab84daecb41169948467f1f19317e3 is what you
mentioned, but I do not know how it "simplifies out of sync shadow" :(