Message-Id: <1218519323.3964.0.camel@sebastian.kern.oss.ntt.co.jp>
Date: Tue, 12 Aug 2008 14:35:23 +0900
From: Fernando Luis Vázquez Cao <fernando@....ntt.co.jp>
To: Hirokazu Takahashi <taka@...inux.co.jp>
Cc: balbir@...ux.vnet.ibm.com, xen-devel@...ts.xensource.com,
uchida@...jp.nec.com, containers@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, dm-devel@...hat.com,
agk@...rceware.org, dave@...ux.vnet.ibm.com, ngupta@...gle.com,
righi.andrea@...il.com
Subject: Re: RFC: I/O bandwidth controller
On Fri, 2008-08-08 at 20:39 +0900, Hirokazu Takahashi wrote:
> Hi,
>
> > > Would you like to split up IO into read and write IO? We know that reads
> > > can be very latency sensitive when compared to writes. Should we consider
> > > them separately in the RFC?
> > Oops, I somehow ended up leaving your first question unanswered. Sorry.
> >
> > I do not think we should consider them separately, as long as there is a
> > proper IO tracking infrastructure in place. As you mentioned, reads can
> > be very latency sensitive, but the read case could be treated as a
> > special case by the IO controller/IO tracking subsystem. There certainly
> > are optimization opportunities. For example, in the synchronous I/O patch
> > we could mark bios with the iocontext of the current task, because it
> > will happen to be the originator of that IO. By effectively caching the
> > ownership information in the bio we can avoid all the accesses to struct
> > page, page_cgroup, etc., and reads would definitely benefit from that.
>
> FYI, we should also take special care of pages being reclaimed, since the
> free memory of the cgroup these pages belong to may be really low.
> Dm-ioband is doing this.
Thank you for the heads-up.
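
For what it's worth, below is a minimal, purely illustrative user-space
mock of the synchronous-path idea described above. Every name in it
(mock_bio, mock_task, current_task, and so on) is made up for the
example; none of them are the real kernel structures or helpers. The
point is only that the ownership id is captured once at submission time
and read back later without touching struct page or page_cgroup.

#include <stdio.h>

/* Stand-ins for the real structures; illustrative only. */
struct mock_task {
	unsigned long io_cgroup_id;	/* cgroup the task belongs to */
};

struct mock_bio {
	unsigned long owner_id;		/* ownership cached at submit time */
};

/* Plays the role of "current" on the synchronous submission path. */
static struct mock_task current_task = { .io_cgroup_id = 42 };

/*
 * On the synchronous path the submitting task is the originator of the
 * IO, so the bio can be tagged with its id right here.
 */
static void submit_sync_bio(struct mock_bio *bio)
{
	bio->owner_id = current_task.io_cgroup_id;
}

/*
 * The IO controller/elevator side reads the cached id straight from
 * the bio, with no access to struct page or page_cgroup.
 */
static unsigned long bio_owner(const struct mock_bio *bio)
{
	return bio->owner_id;
}

int main(void)
{
	struct mock_bio bio;

	submit_sync_bio(&bio);
	printf("bio charged to cgroup %lu\n", bio_owner(&bio));
	return 0;
}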
- Fernando