Message-ID: <516FD76F.6090306@linux.vnet.ibm.com>
Date:	Thu, 18 Apr 2013 19:22:23 +0800
From:	Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>
To:	Gleb Natapov <gleb@...hat.com>
CC:	mtosatti@...hat.com, avi.kivity@...il.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v3 08/15] KVM: MMU: allow unmap invalid rmap out of mmu-lock

On 04/18/2013 07:00 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
>> pte_list_clear_concurrently allows us to reset pte-desc entries outside
>> of mmu-lock. We can also reset sptes outside of mmu-lock as long as we
>> protect the lifecycle of the sp; we use the following scheme to achieve
>> that goal:
>>
>> unmap_memslot_rmap_nolock():
>> for-each-rmap-in-slot:
>>       preempt_disable
>>       kvm->arch.being_unmapped_rmap = rmapp
>>       clear spte and reset rmap entry
>>       kvm->arch.being_unmapped_rmap = NULL
>>       preempt_enable
>>
>> Other paths, like zap-sp and mmu-notify, which are protected
>> by mmu-lock:
>>       clear spte and reset rmap entry
>> retry:
>>       if (kvm->arch.being_unmapped_rmap == rmap)
>> 		goto retry
>> (the wait is very rare and clearing one rmap is very fast, so it
>> is not bad even if a wait is needed)
>>
> I do not understand how this achieves the goal. Suppose that rmap
> == X and kvm->arch.being_unmapped_rmap == NULL, so "goto retry" is skipped,
> but a moment later unmap_memslot_rmap_nolock() does
> kvm->arch.being_unmapped_rmap = X.

Accessing the rmap is always safe, since the rmap and its entries remain
valid until the memslot is destroyed.

What this algorithm protects is the spte, which can only be freed under
the protection of mmu-lock.
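
For reference, here is a rough C sketch of the lock-free side described
above, for one rmap of the slot. It is only an illustration, not the actual
patch code: the kvm->arch.being_unmapped_rmap field and the
preempt_disable()/preempt_enable() window come from the description quoted
above, while clear_rmap_entries_nolock() is a hypothetical stand-in for
zapping each spte and resetting the rmap entry via
pte_list_clear_concurrently().

/*
 * Sketch only: unmap one rmap of an invalid memslot without mmu-lock.
 * Publish the rmap we are working on so that the mmu-lock side can
 * wait for us before it frees any spte we might still be looking at.
 */
static void unmap_one_rmap_nolock(struct kvm *kvm, unsigned long *rmapp)
{
	preempt_disable();
	kvm->arch.being_unmapped_rmap = rmapp;
	smp_mb();	/* publish the rmap before touching its entries */

	/* hypothetical helper: zap sptes and reset the rmap entries */
	clear_rmap_entries_nolock(kvm, rmapp);

	smp_mb();	/* finish clearing before un-publishing */
	kvm->arch.being_unmapped_rmap = NULL;
	preempt_enable();
}

The real unmap_memslot_rmap_nolock() would simply do this for every rmap
in the slot.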

In your scenario:

======
   CPU 1                                      CPU 2

vcpu / mmu-notify accesses the RMAP       unmaps the RMAP outside of mmu-lock,
under mmu-lock                            under slot-lock

zap spte1
clear RMAP entry

see kvm->arch.being_unmapped_rmap == NULL,
so do not wait

free spte1

                                          set kvm->arch.being_unmapped_rmap = RMAP
                                          walk the RMAP and do not see spte1 on it
                                          (its entry has been reset by CPU 1)
                                          set kvm->arch.being_unmapped_rmap = NULL
======

That guarantees that CPU 2 cannot access the freed spte.
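
And on the mmu-lock side, after the spte has been zapped and its rmap entry
reset, the only extra cost before actually freeing the spte is the (rare)
busy-wait. Roughly, again as a sketch rather than the patch code:

/*
 * Called under mmu-lock, after the spte has been zapped and its rmap
 * entry reset.  Spin until the lock-free unmapper has left this rmap,
 * then it is safe to free the spte (the shadow page containing it).
 */
static void wait_for_nolock_unmapper(struct kvm *kvm, unsigned long *rmapp)
{
	while (ACCESS_ONCE(kvm->arch.being_unmapped_rmap) == rmapp)
		cpu_relax();
	smp_mb();	/* the unmapper is gone before we free the spte */
}

Since the unmapper publishes being_unmapped_rmap only while it clears a
single rmap, the wait is bounded by the time needed to clear one rmap chain.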

