Message-ID: <46D30E25.1090607@qumranet.com>
Date: Mon, 27 Aug 2007 20:47:17 +0300
From: Avi Kivity <avi@...ranet.com>
To: Anthony Liguori <aliguori@...ibm.com>
CC: kvm-devel@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] Implement emulator_write_phys()

Anthony Liguori wrote:
> On Mon, 2007-08-27 at 20:26 +0300, Avi Kivity wrote:
>
>> Anthony Liguori wrote:
>>
>>> On Mon, 2007-08-27 at 18:45 +0300, Avi Kivity wrote:
>>>
>>>
>>>> Anthony Liguori wrote:
>>>>
>>>>
>>>>> Since a hypercall may span two pages and is a gva, we need a function to write
>>>>> to a gva that may span multiple pages. emulator_write_phys() seems like the
>>>>> logical choice for this.
>>>>>
>>>>> @@ -962,8 +962,35 @@ static int emulator_write_std(unsigned long addr,
>>>>> unsigned int bytes,
>>>>> struct kvm_vcpu *vcpu
>>>>>
>>>>>
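(For reference: the page-spanning write is just a per-page
translate-and-copy loop. A minimal sketch, where kvm_write_guest_page()
is a hypothetical per-page helper and the gva_to_gpa hook mirrors the
existing emulator_read_std():

static int write_guest_virt(struct kvm_vcpu *vcpu, unsigned long addr,
                            const void *val, unsigned int bytes)
{
        const char *data = val;

        while (bytes) {
                gpa_t gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);
                unsigned offset = addr & (PAGE_SIZE - 1);
                unsigned now = min_t(unsigned, bytes, PAGE_SIZE - offset);

                if (gpa == UNMAPPED_GVA)
                        return X86EMUL_PROPAGATE_FAULT;
                /* hypothetical per-page writer; name is illustrative */
                if (kvm_write_guest_page(vcpu->kvm, gpa >> PAGE_SHIFT,
                                         data, offset, now))
                        return X86EMUL_UNHANDLEABLE;
                addr += now;
                data += now;
                bytes -= now;
        }
        return X86EMUL_CONTINUE;
}
)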
>>>> I think that emulator_write_emulated(), except for being awkwardly
>>>> named, should do the job. We have enough APIs.
>>>>
>>>> But! We may not overwrite the hypercall instruction while a vcpu may be
>>>> executing, since there's no atomicity guarantee for code fetch. We have
>>>> to be out of guest mode while writing that insn.
>>>>
>>>>
>>> Hrm, good catch.
>>>
>>> How can we get out of guest mode given SMP guest support?
>>>
>>>
>>>
>> kvm_flush_remote_tlbs() is something that can be generalized.
>> Basically, you set a bit in each vcpu and send an IPI to take them out.
>>
>> But that's deadlock prone and complex. Maybe you can just take
>> kvm->lock, zap the mmu, flush the tlbs, and patch the instruction at
>> your leisure, as no vcpu will be able to map memory until the lock is
>> released.
>>
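
To make that concrete, a minimal sketch of the lock-and-zap sequence
(patch_hypercall() is illustrative, not an existing API, and the exact
zap/flush entry points may differ):

static void kvm_patch_hypercall_safely(struct kvm_vcpu *vcpu, u8 *insn)
{
        struct kvm *kvm = vcpu->kvm;

        mutex_lock(&kvm->lock);
        kvm_mmu_zap_all(kvm);           /* drop all shadow mappings */
        kvm_flush_remote_tlbs(kvm);     /* kick every vcpu out of guest mode */
        /*
         * No vcpu can fault a mapping back in until kvm->lock is
         * released, so this write cannot race with a code fetch.
         */
        patch_hypercall(vcpu, insn);    /* illustrative writer */
        mutex_unlock(&kvm->lock);
}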
>
> This works for shadow paging but not necessarily with NPT.
NPT will have TLB and second-level pagetable flushing (to support memory
unplug, ballooning, guest swapping, and the like), so it's safe to assume
that it will continue to work.
> Do code
> fetches really not respect atomic writes? We could switch to a 32-bit
> atomic operation and that should result in no worse than the code being
> patched twice.
>
I think they don't respect it (the data cache, the instruction cache, and
the fetch pipeline are kept quite separate for performance reasons, so
that's not unexpected).
I think the Intel manual has some guidelines on self-modifying code.
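Roughly, the protocol it describes for cross-modifying code is: the
writer stores the new bytes and then sets a flag; every other cpu spins
on the flag and executes a serializing instruction (e.g. cpuid) before
running the patched code. A sketch (the flag and helper names are
illustrative):

static int code_ready;

/* modifying cpu */
static void cmc_writer(void *code, const void *new_insn, int len)
{
        memcpy(code, new_insn, len);    /* store the new bytes first */
        smp_wmb();
        code_ready = 1;                 /* then publish the flag */
}

/* executing cpus, before touching the patched location */
static void cmc_executor(void)
{
        while (!code_ready)
                cpu_relax();
        sync_core();    /* serializing insn (cpuid) kills stale prefetch */
        /* now safe to execute the patched code */
}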
--
Any sufficiently difficult bug is indistinguishable from a feature.