Message-ID: <4E3917DC.9040902@cn.fujitsu.com>
Date: Wed, 03 Aug 2011 17:41:48 +0800
From: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
To: Avi Kivity <avi@...hat.com>
CC: Marcelo Tosatti <mtosatti@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, KVM <kvm@...r.kernel.org>
Subject: Re: [PATCH v2 02/12] KVM: x86: tag the instructions which are used
to write page table
On 08/03/2011 05:25 PM, Avi Kivity wrote:
> On 08/03/2011 12:24 PM, Xiao Guangrong wrote:
>> > Maybe it's better to emulate if we can't find a fix for that.
>> >
>> > One way would be to emulate every 20 instructions; this breaks us out of the loop but reduces costly emulations to 5%.
>> >
>>
>> After much thought about this, maybe this optimization is not good, since:
>> - it is a little complex
>> - this optimization is only applied to the instruction emulation caused by #PF
>> - it does not improve things much:
>> if we emulate the instruction, we need to do:
>> - decode instruction
>> - emulate it
>> - zap shadow pages
>> After doing this, we can return to the guest, and the guest can run the next instruction.
>>
>> if we retry the instruction, we need to do:
>> - decode instruction
>> - zap shadow pages
>> then return to the guest and retry the instruction; however, we will get a page fault
>> again (since the mapping is still read-only), so we will take another VM-exit and need
>> to:
>> # trigger page fault
>> - handle the page fault and change the mapping to writable
>> - retry the instruction
>> Only then can the guest run the next instruction.
>>
>> So, I do not think the new way is better. What is your opinion?
>
> We can change the mapping to writable and zap in the same exit, no? Basically call page_fault() again after zapping.
>
OK. :-)
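
For reference, a minimal sketch of that idea (names are illustrative and the
mmu.page_fault callback signature is an assumption from the current code, so
the real patch may differ): zap the shadow pages and re-run the fault handler
in the same exit, which avoids the second VM-exit in the retry path compared
above.

/*
 * Sketch only, not the final patch: retry the page-table write without
 * emulating it.  Zap the shadow page(s) that keep the mapping read-only,
 * then re-run the page fault handler in the same VM-exit, so the guest
 * does not take a second #PF when it retries the instruction.
 */
static bool retry_page_table_write(struct kvm_vcpu *vcpu, gva_t cr2,
				   gpa_t gpa, u32 error_code)
{
	/* drop the shadow pages covering the written page table */
	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));

	/* re-fault now, so the mapping is writable before we re-enter */
	vcpu->arch.mmu.page_fault(vcpu, cr2, error_code, false);

	return true;	/* skip emulation and just re-enter the guest */
}

Compared with the plain retry, this trades the extra VM-exit for one more
walk of the fault path in the current exit.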
Um, how about caching the last eip when we do this optimization? If we see the same eip
again, we can break out of the potential infinite loop.
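
A minimal sketch of that, assuming new last_retry_eip/last_retry_addr fields
in struct kvm_vcpu_arch (caching the faulting address as well, so the same
eip writing a different page table still takes the fast path):

/*
 * Sketch: return true if it is worth retrying instead of emulating.
 * If the same instruction faults on the same address twice in a row,
 * the retry made no progress, so fall back to emulation and break the
 * potential infinite loop.
 */
static bool can_retry_instruction(struct kvm_vcpu *vcpu,
				  unsigned long eip, gva_t cr2)
{
	if (vcpu->arch.last_retry_eip == eip &&
	    vcpu->arch.last_retry_addr == cr2) {
		vcpu->arch.last_retry_eip = 0;
		vcpu->arch.last_retry_addr = 0;
		return false;		/* loop detected, emulate instead */
	}

	vcpu->arch.last_retry_eip = eip;
	vcpu->arch.last_retry_addr = cr2;
	return true;
}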