Message-ID: <a943f71d-219a-4c9a-aa2a-4be83132df14@default>
Date: Thu, 15 Mar 2012 12:16:04 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@...cle.com>
To: Konrad Wilk <konrad.wilk@...cle.com>, Avi Kivity <avi@...hat.com>
Cc: Akshay Karle <akshay.a.karle@...il.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
ashu tripathi <er.ashutripathi@...il.com>,
nishant gulhane <nishant.s.gulhane@...il.com>,
amarmore2006 <amarmore2006@...il.com>,
Shreyas Mahure <shreyas.mahure@...il.com>,
mahesh mohan <mahesh6490@...il.com>
Subject: RE: [RFC 0/2] kvm: Transcendent Memory (tmem) on KVM
> From: Konrad Rzeszutek Wilk
> Subject: Re: [RFC 0/2] kvm: Transcendent Memory (tmem) on KVM
>
> On Thu, Mar 15, 2012 at 08:01:52PM +0200, Avi Kivity wrote:
> > On 03/15/2012 07:49 PM, Dan Magenheimer wrote:
> > >
> > > The "WasActive" patch (https://lkml.org/lkml/2012/1/25/300)
> > > is intended to avoid the streaming situation you are creating here.
> > > It increases the "quality" of cached pages placed into zcache
> > > and should probably also be used on the guest-side stubs (and/or maybe
> > > the host-side zcache... I don't know KVM well enough to determine
> > > if that would work).
> > >
> > > As Dave Hansen pointed out, the WasActive patch is not yet correct
> > > and, as akpm points out, pageflag bits are scarce on 32-bit systems,
> > > so it remains to be seen if the WasActive patch can be upstreamed.
> > > Or maybe there is a different way to achieve the same goal.
> > > But I wanted to let you know that the streaming issue is understood
> > > and needs to be resolved for some cleancache backends just as it was
> > > resolved in the core mm code.
> >
> > Nice. This takes care of the tail-end of the streaming (the more
> > important one - since it always involves a cold copy). What about the
> > other side? Won't the read code invoke cleancache_get_page() for every
> > page? (this one is just a null hypercall, so it's cheaper, but still
> > expensive).
>
> That is something we should fix - I think the need for batching was
> mentioned in the frontswap email thread, and it certainly seems required
> as those hypercalls aren't that cheap.
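A side note on where that per-page cost comes from and what batching
might look like. The first snippet is simplified from the read-path
logic in fs/mpage.c; the batched call below it is a hypothetical
signature made up for illustration, not an existing API:

	/* Read path today: one cleancache_get_page() probe -- and on
	 * KVM one hypercall -- per page, whether it hits or misses. */
	if (!PageUptodate(page) && cleancache_get_page(page) == 0) {
		SetPageUptodate(page);	/* hit: page filled from tmem */
		unlock_page(page);
	} else {
		/* miss: fall through to real block I/O */
	}

	/* Hypothetical batched variant: probe a whole readahead window
	 * with one hypercall; the backend fills what it has and sets
	 * bits in hit_bitmap for the pages it satisfied. */
	int cleancache_get_pages(struct address_space *mapping,
				 struct page **pages, int nr,
				 unsigned long *hit_bitmap);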
And exactly how expensive ARE hypercalls these days? On the first VT/SVM
systems they were tens of thousands of cycles... now they are closer
to sub-thousand, are they not? (I remember seeing a graph of hypercall
overhead dropping across generations of CPUs... anybody have a pointer to
a public graph of this?)
One of my favorite papers these days is "When Poll is Better than Interrupt"
(http://static.usenix.org/events/fast12/tech/full_papers/Yang.pdf), which
argues that wasting some CPU cycles on a busy-wait is often more
efficient than slogging through the block I/O subsystem to set up
and respond to an interrupt, provided the device is fast enough. I wonder
if the same might be true when comparing tmem's hypercall overhead against
KVM's normal path for getting a page from the host?
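For concreteness, the paper's argument boils down to a loop like the
hedged sketch below -- DEV_DONE and dev->status are made-up names for
illustration, not tmem or real driver code:

	/* Spin on a completion flag instead of arming an interrupt.
	 * If the device answers in less time than an IRQ round-trip
	 * through the block layer costs, the burned cycles are a win. */
	while (!(readl(&dev->status) & DEV_DONE))
		cpu_relax();

The analogous question for tmem is whether a cheap synchronous
hypercall beats KVM's interrupt-driven block I/O path for the same page.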
Ignoring that for now, if excessive hypercalls are a problem, a better
solution than batching may be to modify the Maharashtra approach to
be more like RAMster: put zcache on the guest side and treat the
host like a "remote" system.
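A hedged sketch of what that could look like -- every helper name here
is hypothetical, this is the shape of the idea rather than real zcache
code:

	/* RAMster-style put: compress and keep the page in guest RAM;
	 * only pay a hypercall when the local pool overflows and a
	 * victim must be "remotified" to the host. */
	static void guest_cleancache_put(struct page *page)
	{
		if (zcache_store_local(page) == 0)
			return;			/* no hypercall at all */

		/* local pool full: push one compressed victim to the
		 * host, then retry the local store */
		remotify_to_host(zcache_pick_victim());
		zcache_store_local(page);
	}

Gets would then hit the guest-local pool first and only fall through to
a hypercall for pages that had been remotified.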
But let's wait for the Maharashtra team to do some measurements first
before we make any assumptions or change any designs...