Message-ID: <4C188D8B.40508@redhat.com>
Date:	Wed, 16 Jun 2010 11:38:35 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Dave Hansen <dave@...ux.vnet.ibm.com>
CC:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC][PATCH 0/9] rework KVM mmu_shrink() code

On 06/15/2010 04:55 PM, Dave Hansen wrote:
> This is a big RFC for the moment.  These need a bunch more
> runtime testing.
>
> --
>
> We've seen contention in the mmu_shrink() function.

First of all, that's surprising.  I tried to configure the shrinker so 
it would stay away from kvm unless memory was really tight.  The reason 
is that kvm mmu pages can cost as much as 1-2 ms of cpu time to build, 
perhaps even more, so we shouldn't drop them lightly.

It's certainly a neglected area that needs attention, though.

> This patch
> set reworks it to hopefully be more scalable to large numbers
> of CPUs, as well as large numbers of running VMs.
>
> The patches are ordered with increasing invasiveness.
>
> These seem to boot and run fine.  I'm running about 40 VMs at
> once, while doing "echo 3 > /proc/sys/vm/drop_caches", and
> killing/restarting VMs constantly.
>    

Will drop_caches actually shrink the kvm caches too?  If so, we probably 
need to add that to autotest, since it's a really good stress test for 
the mmu.

> Seems to be relatively stable, and seems to keep the numbers
> of kvm_mmu_page_header objects down.
>    

That's not necessarily a good thing; those objects are expensive to 
recreate.  Of course, when we do need to reclaim them, that should be 
efficient.

We also do a very bad job of selecting which pages to reclaim.  We need 
to start using the accessed bit on sptes that point to shadow page 
tables, and then look those up and reclaim unreferenced pages sooner.  
With shadow paging there can be tons of unsync pages that are basically 
unused and can be reclaimed at no cost to future runtime.

-- 
error compiling committee.c: too many arguments to function
