Message-Id: <1276700596.6437.16867.camel@nimitz>
Date: Wed, 16 Jun 2010 08:03:16 -0700
From: Dave Hansen <dave@...ux.vnet.ibm.com>
To: Avi Kivity <avi@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC][PATCH 0/9] rework KVM mmu_shrink() code
On Wed, 2010-06-16 at 11:38 +0300, Avi Kivity wrote:
> On 06/15/2010 04:55 PM, Dave Hansen wrote:
> > These seem to boot and run fine. I'm running about 40 VMs at
> > once, while doing "echo 3 > /proc/sys/vm/drop_caches", and
> > killing/restarting VMs constantly.
> >
>
> Will drop_caches actually shrink the kvm caches too? If so, we probably
> need to add that to autotest, since it's a really good stress test for
> the mmu.
I'm completely sure it does. I crashed my machines several times this way
during testing.
> > Seems to be relatively stable, and seems to keep the numbers
> > of kvm_mmu_page_header objects down.
> >
>
> That's not necessarily a good thing; those things are expensive to
> recreate. Of course, when we do need to reclaim them, that should be
> efficient.
Oh, I meant that I didn't break the shrinker completely.
> We also do a very bad job of selecting which page to reclaim. We need
> to start using the accessed bit on sptes that point to shadow page
> tables, and then look those up and reclaim unreferenced pages sooner.
> With shadow paging there can be tons of unsync pages that are basically
> unused and can be reclaimed at no cost to future runtime.
Sounds like a good next step.
-- Dave