Message-ID: <4C57BC6D.8060306@redhat.com>
Date:	Tue, 03 Aug 2010 09:51:25 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Lai Jiangshan <laijs@...fujitsu.com>
CC:	Marcelo Tosatti <mtosatti@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org
Subject: Re: [PATCH] kvm cleanup: Introduce sibling_pte and do cleanup for
 reverse map and parent_pte

  On 08/03/2010 05:30 AM, Lai Jiangshan wrote:
> This patch is just a big cleanup. It reduces 220 lines of code.
>
> It introduces a sibling_pte array for tracking identical sptes, so that
> identical sptes can be linked into a singly linked list through their
> corresponding sibling_pte entries. A reverse map or a parent_pte then
> points at the head of this singly linked list, which lets us simplify
> the reverse map and parent_pte code substantially.
>
> BAD:
>    If most rmaps have only one entry, or most sps have only one parent,
>    this patch may use more memory than before.

That is the case with NPT and EPT.  Each page has exactly one spte 
(except a few VGA pages), and each sp has exactly one parent_pte (except 
the root pages).
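
For readers skimming the archive, a minimal sketch of the scheme the
patch describes (the sibling_pte name comes from the patch description
above; the concrete layout below is an illustrative assumption, not the
posted code):

/*
 * Sketch only: every spte gets a companion sibling_pte slot.  All sptes
 * that map the same gfn (or all parent_ptes of the same sp) are chained
 * into a singly linked list through those slots, and the rmap (or
 * parent_pte field) stores just the list head.
 */
struct kvm_mmu_page {
	u64 *spt;		/* the sptes of this shadow page */
	u64 **sibling_pte;	/* sibling_pte[i]: next spte in spt[i]'s chain */
	/* ... existing fields ... */
};

Insertion into such a list is O(1) and needs no allocation, which is
where the cleanup and the "no allocation under the mmu lock" property
come from.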

> GOOD:
>    1) Reduces a lot of code. The functions on the hot path become
>       much simpler and faster.
>    2) rmap_next(): O(N) -> O(1). Traversing an rmap: O(N*N) -> O(N)

The existing rmap_next() is not O(N); it's O(RMAP_EXT), which is 4.  The 
data structure was chosen over a simple linked list to avoid extra cache 
misses.
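
For concreteness, the existing structure is roughly the following (from
arch/x86/kvm/mmu.c of this era; quoted approximately):

#define RMAP_EXT 4

struct kvm_rmap_desc {
	u64 *sptes[RMAP_EXT];		/* up to 4 sptes per descriptor */
	struct kvm_rmap_desc *more;	/* next descriptor, if needed */
};

Each descriptor packs RMAP_EXT sptes together, so walking an rmap costs
one new cache line per 4 entries rather than per entry, and a single
rmap_next() step never scans more than RMAP_EXT slots before following
->more.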

>    3) Removes the ugly intermediate structures: struct kvm_rmap_desc
>       and struct kvm_pte_chain.

kvm_rmap_desc and kvm_pte_chain are indeed ugly, but they do save a lot 
of memory and cache misses.
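
The parent_pte side is analogous; from the KVM headers of this era
(approximately):

#define NR_PTE_CHAIN_ENTRIES 5

struct kvm_pte_chain {
	u64 *parent_ptes[NR_PTE_CHAIN_ENTRIES];	/* up to 5 parent sptes */
	struct hlist_node link;			/* chained off the sp */
};

As with kvm_rmap_desc, several parent_ptes share one allocation and one
cache line, instead of one list node per parent.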

>    4) We don't need to allocate anything when we change the mappings,
>       so we can avoid allocating while holding the kvm mmu spinlock
>       (this will be very helpful in the future).
>    5) Better readability.

I agree the new code is more readable.  Unfortunately it uses more 
memory and is likely to be slower.  You add a cache miss for every spte, 
while kvm_rmap_desc amortizes the cache miss across 4 sptes and 
special-cases a single spte to have no cache misses (or extra memory 
requirements).
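
To spell out that special case: the rmap slot itself holds the lone
spte pointer when there is only one mapping, and the low bit of the
word says whether it is a bare spte or a tagged descriptor pointer.
A simplified sketch of the existing rmap_add() path (error handling
and full-descriptor chaining omitted):

static void rmap_add_sketch(struct kvm_vcpu *vcpu, u64 *spte,
			    unsigned long *rmapp)
{
	struct kvm_rmap_desc *desc;

	if (!*rmapp) {
		/* first spte for this gfn: store it directly, no desc */
		*rmapp = (unsigned long)spte;
	} else if (!(*rmapp & 1)) {
		/* second spte: allocate a desc and move both into it */
		desc = mmu_alloc_rmap_desc(vcpu);
		desc->sptes[0] = (u64 *)*rmapp;
		desc->sptes[1] = spte;
		*rmapp = (unsigned long)desc | 1;	/* bit 0 = "desc" */
	} else {
		/* already a desc chain: append to a free slot, following
		   ->more and allocating a new desc only when all four
		   slots are full */
	}
}

So for direct-mapped (NPT/EPT) guests, where almost every gfn has
exactly one spte, the current scheme costs no allocation and no extra
cache line at all.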

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

