Message-Id: <1188236391.6364.14.camel@squirrel>
Date:	Mon, 27 Aug 2007 12:39:51 -0500
From:	Anthony Liguori <aliguori@...ibm.com>
To:	Avi Kivity <avi@...ranet.com>
Cc:	kvm-devel@...ts.sourceforge.net, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] Implement emulator_write_phys()


On Mon, 2007-08-27 at 20:26 +0300, Avi Kivity wrote:
> Anthony Liguori wrote:
> > On Mon, 2007-08-27 at 18:45 +0300, Avi Kivity wrote:
> >   
> >> Anthony Liguori wrote:
> >>     
> >>> Since a hypercall may span two pages and its address is a gva, we need
> >>> a function to write to a gva that may span multiple pages.
> >>> emulator_write_phys() seems like the logical choice for this.
> >>>
> >>> @@ -962,8 +962,35 @@ static int emulator_write_std(unsigned long addr,
> >>>  			      unsigned int bytes,
> >>>  			      struct kvm_vcpu *vcpu
> >>>       
> >> I think that emulator_write_emulated(), except for being awkwardly 
> >> named, should do the job.  We have enough APIs.
> >>
> >> But!  We may not overwrite the hypercall instruction while a vcpu may be 
> >> executing, since there's no atomicity guarantee for code fetch.  We have 
> >> to be out of guest mode while writing that insn.
> >>     
> >
> >
> > Hrm, good catch.
> >
> > How can we get out of guest mode given SMP guest support?
> >
> >   
> 
> kvm_flush_remote_tlbs() is something that can be generalized.  
> Basically, you set a bit in each vcpu and send an IPI to take them out
> of guest mode.
> 
> But that's deadlock prone and complex.  Maybe you can just take 
> kvm->lock, zap the mmu and flush the tlbs, and patch the instruction at 
> your leisure, as no vcpu will be able to map memory until the lock is 
> released.
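
If I'm reading that right, the shape of it would be something like the
sketch below (helper names are approximate and error handling is
omitted, so don't read it as being against the actual tree):

static int patch_hypercall_locked(struct kvm_vcpu *vcpu, unsigned long addr,
                                  const u8 *insn, unsigned int bytes)
{
        struct kvm *kvm = vcpu->kvm;
        int r;

        mutex_lock(&kvm->lock);
        /*
         * Throw away the shadow pages and force the other vcpus out of
         * guest mode with a remote TLB flush; they cannot map the page
         * again until we drop kvm->lock.
         */
        kvm_mmu_zap_all(kvm);
        kvm_flush_remote_tlbs(kvm);
        r = emulator_write_emulated(addr, insn, bytes, vcpu);
        mutex_unlock(&kvm->lock);
        return r;
}

That only holds as long as every path that rebuilds shadow mappings
takes kvm->lock, which is the property you're relying on above.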

This works for shadow paging but not necessarily with NPT.  Do code
fetches really not respect atomic writes?  We could switch to a 32-bit
atomic operation, which should result in nothing worse than the code
being patched twice.
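
The 32-bit variant I have in mind is roughly the following (untested
sketch; it assumes all three bytes of the vmcall/vmmcall sit inside one
naturally aligned 32-bit word of the host mapping, so an instruction
that straddles a page would still need the heavyweight path):

static void patch_hypercall_atomic(u8 *host_va, const u8 insn[3])
{
        u32 *word = (u32 *)((unsigned long)host_va & ~3ul);
        unsigned int shift = ((unsigned long)host_va & 3) * 8;
        u32 bytes = (u32)insn[0] | (u32)insn[1] << 8 | (u32)insn[2] << 16;
        u32 val;

        /* Build the new word, preserving the byte we don't own. */
        val = (*word & ~(0xffffffu << shift)) | (bytes << shift);

        /*
         * An aligned 32-bit store is atomic on x86, so a concurrently
         * executing vcpu should fetch either the old instruction bytes
         * or the new ones, never a torn mix.
         */
        *(volatile u32 *)word = val;
}

Whether code fetches actually honor that store atomicity is exactly the
open question above, of course.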

Regards,

Anthony Liguori

