Date:	Wed, 16 Oct 2013 08:41:56 +0800
From:	Xiao Guangrong <xiaoguangrong.eric@...il.com>
To:	Marcelo Tosatti <mtosatti@...hat.com>
Cc:	Gleb Natapov <gleb@...hat.com>,
	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
	avi.kivity@...il.com, pbonzini@...hat.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v2 12/15] KVM: MMU: allow locklessly access shadow page table out of vcpu thread


On Oct 16, 2013, at 6:21 AM, Marcelo Tosatti <mtosatti@...hat.com> wrote:

> On Tue, Oct 15, 2013 at 06:57:05AM +0300, Gleb Natapov wrote:
>>> 
>>> Why is it safe to allow access, by the lockless page write protect
>>> side, to spt pointer for shadow page A that can change to a shadow page 
>>> pointer of shadow page B?
>>> 
>>> Write protect the spte of any page at will? Or verify that in fact that's the
>>> shadow page you want to write protect?
>>> 
>>> Note that the spte value might be the same for different shadow pages,
>>> so cmpxchg succeeding does not guarantee it's the same shadow page that
>>> has been protected.
>>> 
>> Two things can happen: the spte that we accidentally write protect is some
>> other last-level spte - this is benign, it will be unprotected on the next
>> fault.
> 
> Nothing forbids two identical writable sptes from pointing to the same pfn.
> How do you know you are write-protecting the correct one (the proper gfn)?
> 
> Lockless walk sounds interesting. By the time you get to the lower
> level, that might be a different spte.
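
To make the concern concrete, here is a hedged, userspace-only sketch (not KVM
code; PT_WRITABLE_MASK, the helper and the values below are invented for
illustration): two last-level sptes that live in different shadow pages but map
the same pfn with the same permission bits are bit-for-bit identical, so a
cmpxchg keyed only on the old value succeeds on either of them and cannot, by
itself, prove which shadow page was write-protected.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the writable permission bit of an spte (layout simplified). */
#define PT_WRITABLE_MASK	(1ULL << 1)

/*
 * One-shot, value-keyed write protect: clears the writable bit iff the spte
 * still holds the value we read.  Nothing here identifies the shadow page
 * that contains the spte.
 */
static bool write_protect_by_value(uint64_t *sptep)
{
	uint64_t old = *sptep;

	if (!(old & PT_WRITABLE_MASK))
		return false;
	return __sync_bool_compare_and_swap(sptep, old, old & ~PT_WRITABLE_MASK);
}

int main(void)
{
	/* sptes of shadow page A and shadow page B: same pfn, same bits. */
	uint64_t spte_a = 0xabc000ULL | PT_WRITABLE_MASK;
	uint64_t spte_b = 0xabc000ULL | PT_WRITABLE_MASK;

	/* The cmpxchg succeeds on either one; success alone cannot tell the
	 * walker whether it protected the gfn it intended to. */
	printf("protected A: %d, protected B: %d\n",
	       write_protect_by_value(&spte_a),
	       write_protect_by_value(&spte_b));
	return 0;
}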

That's safe. Since get-dirty-log is serialized by the slot lock, the dirty bit
cannot be lost - even if we write-protect an spte in a different memslot, its
dirty bit is still set. The worst case is that we write-protect an unnecessary
spte and cause an extra #PF, but that is really rare.

And the lockless rmap walker can detect the new spte, so write protection
on the memslot is not missed.
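
A minimal sketch of that argument (again not the real kvm/mmu.c code; the bit
layout, the helper name and the retry loop are simplified assumptions): the
lockless walker simply retries when the cmpxchg fails, and when it succeeds on
an spte that has meanwhile come to belong to a different shadow page, the cost
is one spurious write #PF that re-establishes the writable mapping, while the
dirty bit set by the earlier write is untouched.

#include <stdint.h>
#include <stdbool.h>

#define PT_WRITABLE_MASK	(1ULL << 1)

/*
 * Lockless write protection with retry.  If the spte changes between the
 * read and the cmpxchg (for example because the shadow page was zapped),
 * the cmpxchg fails and we re-read.  If it succeeds on an spte that now
 * belongs to a different shadow page or memslot, the damage is limited to
 * one spurious write #PF on the next guest write, which re-marks the spte
 * writable; the dirty bit recorded for the earlier write is untouched, so
 * the dirty log (whose readers hold the slot lock) loses nothing.
 */
static bool spte_write_protect_lockless(uint64_t *sptep)
{
	uint64_t old, new;

	do {
		old = *sptep;			/* racy read, no mmu_lock held */
		if (!(old & PT_WRITABLE_MASK))
			return false;		/* already read-only */
		new = old & ~PT_WRITABLE_MASK;
	} while (!__sync_bool_compare_and_swap(sptep, old, new));

	return true;
}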
