Message-ID: <20131011203017.GA29576@amt.cnet>
Date: Fri, 11 Oct 2013 17:30:17 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Gleb Natapov <gleb@...hat.com>
Cc: Xiao Guangrong <xiaoguangrong.eric@...il.com>,
Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
avi.kivity@...il.com, pbonzini@...hat.com,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v2 12/15] KVM: MMU: allow locklessly access shadow page
table out of vcpu thread
On Fri, Oct 11, 2013 at 08:38:31AM +0300, Gleb Natapov wrote:
> > n_max_mmu_pages is not a suitable limit to throttle freeing of pages via
> > RCU (it's too large). If the free memory watermarks are smaller than
> > n_max_mmu_pages for all guests, OOM is possible.
> >
> Ah, yes. I am not saying n_max_mmu_pages will throttle RCU, just saying
> that the slab size will be bounded, so hopefully the shrinker will touch
> it rarely.
>
> > > > > and, in addition, page released to slab is immediately
> > > > > available for allocation, no need to wait for grace period.
> > > >
> > > > See SLAB_DESTROY_BY_RCU comment at include/linux/slab.h.
> > > >
> > > This comment is exactly what I was referring to in the code you quoted. Do
> > > you see anything problematic in what the comment describes?
> >
> > "This delays freeing the SLAB page by a grace period, it does _NOT_
> > delay object freeing." The page is not available for allocation.
> By "page" I mean "spt page" which is a slab object. So "spt page"
> AKA slab object will be available fo allocation immediately.
The object is reusable within that SLAB cache only, not across the
entire system (therefore it does not prevent an OOM condition).
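
To make the distinction concrete, a minimal sketch (made-up names,
"spt_cache" and "struct spt_page", not the actual KVM code): objects
freed to a SLAB_DESTROY_BY_RCU cache are immediately reusable, but only
by allocations from that same cache, because the backing slab pages are
held back for a grace period.

#include <linux/slab.h>

/* Illustrative only: "spt_cache" and "struct spt_page" are made-up names. */
struct spt_page {
        unsigned long gfn;
        /* ... */
};

static struct kmem_cache *spt_cache;

static int __init spt_cache_init(void)
{
        /*
         * SLAB_DESTROY_BY_RCU delays freeing of the slab *page* by a
         * grace period; it does not delay freeing of the objects.
         */
        spt_cache = kmem_cache_create("spt_cache", sizeof(struct spt_page),
                                      0, SLAB_DESTROY_BY_RCU, NULL);
        return spt_cache ? 0 : -ENOMEM;
}

static void spt_free(struct spt_page *p)
{
        /*
         * The object becomes reusable immediately, but only via
         * kmem_cache_alloc(spt_cache, ...); the page it lives on stays
         * in this cache until a grace period has elapsed, so freeing
         * here does not relieve memory pressure elsewhere in the
         * system (the OOM concern above).
         */
        kmem_cache_free(spt_cache, p);
}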
OK, perhaps it is useful to use SLAB_DESTROY_BY_RCU, but throttling
is still necessary, as described in the RCU documentation.
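
For reference, the throttling I mean is the pattern the RCU
documentation (Documentation/RCU/checklist.txt) recommends for
call_rcu() users: cap the number of outstanding callbacks and fall back
to synchronize_rcu() plus direct freeing when the cap is exceeded. A
rough sketch, again with made-up names (free_backlog,
FREE_BACKLOG_LIMIT, struct spt_obj), not existing KVM symbols:

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Illustrative names only. */
#define FREE_BACKLOG_LIMIT 64

struct spt_obj {
        struct rcu_head rcu;
        /* ... */
};

static atomic_t free_backlog = ATOMIC_INIT(0);

static void spt_rcu_free(struct rcu_head *head)
{
        kfree(container_of(head, struct spt_obj, rcu));
        atomic_dec(&free_backlog);
}

static void spt_free_throttled(struct spt_obj *p)
{
        if (atomic_inc_return(&free_backlog) > FREE_BACKLOG_LIMIT) {
                /*
                 * Too many callbacks outstanding: block for a grace
                 * period so freeing cannot outrun RCU and exhaust
                 * memory, then free directly.
                 */
                synchronize_rcu();
                kfree(p);
                atomic_dec(&free_backlog);
        } else {
                call_rcu(&p->rcu, spt_rcu_free);
        }
}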
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/