Message-Id: <1218004892.3950.12.camel@sebastian.kern.oss.ntt.co.jp>
Date:	Wed, 06 Aug 2008 15:41:32 +0900
From:	Fernando Luis Vázquez Cao 
	<fernando@....ntt.co.jp>
To:	Ryo Tsuruta <ryov@...inux.co.jp>
Cc:	dave@...ux.vnet.ibm.com, yoshikawa.takuya@....ntt.co.jp,
	taka@...inux.co.jp, uchida@...jp.nec.com, ngupta@...gle.com,
	linux-kernel@...r.kernel.org, dm-devel@...hat.com,
	containers@...ts.linux-foundation.org,
	virtualization@...ts.linux-foundation.org,
	xen-devel@...ts.xensource.com, agk@...rceware.org,
	righi.andrea@...il.com
Subject: Re: RFC: I/O bandwidth controller

On Wed, 2008-08-06 at 15:18 +0900, Ryo Tsuruta wrote:
> Hi Fernando,
> 
> > This RFC ended up being a bit longer than I had originally intended, but
> > hopefully it will serve as the start of a fruitful discussion.
> 
> Thanks a lot for posting the RFC.
> 
> > *** Goals
> >   1. Cgroups-aware I/O scheduling (being able to define arbitrary
> > groupings of processes and treat each group as a single scheduling
> > entity).
> >   2. Being able to perform I/O bandwidth control independently on each
> > device.
> >   3. I/O bandwidth shaping.
> >   4. Scheduler-independent I/O bandwidth control.
> >   5. Usable with stacking devices (md, dm and other devices of that
> > ilk).
> >   6. I/O tracking (handle buffered and asynchronous I/O properly).
> >
> > The list of goals above is not exhaustive, and it is also likely to
> > contain some not-so-nice-to-have features, so your feedback would be
> > appreciated.
> 
> I'd like to add the following item to the goals.
> 
>   7. The ability to select from multiple bandwidth control policies
>      (proportional sharing, maximum rate limiting, ...), much as one
>      selects an I/O scheduler.
Yep, makes sense.
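
To make this concrete, here is a minimal sketch of what a
pluggable-policy interface could look like, modeled loosely on how the
block layer registers elevators. Every name below (ioband_policy,
ioband_register_policy, ...) is made up for illustration and does not
appear in any posted patch:

#include <linux/list.h>
#include <linux/types.h>
#include <linux/bio.h>

struct ioband_group;	/* hypothetical per-cgroup, per-device state */

/* One entry per policy ("proportional", "max-rate", ...), selectable
 * at runtime much as an elevator is. */
struct ioband_policy {
	const char *name;
	int  (*set_weight)(struct ioband_group *grp, unsigned int weight);
	void (*charge)(struct ioband_group *grp, struct bio *bio);
	bool (*may_dispatch)(struct ioband_group *grp);
	struct list_head list;		/* linked into the registry below */
};

static LIST_HEAD(ioband_policies);

/* Register a policy so userspace can pick it per device, e.g. through
 * a cgroup attribute file. */
int ioband_register_policy(struct ioband_policy *pol)
{
	list_add_tail(&pol->list, &ioband_policies);
	return 0;
}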

> > *** How to move on
> > 
> > As discussed before, it probably makes sense to have both a block layer
> > I/O controller and an elevator-based one, and they could certainly
> > coexist. Either way, both need I/O tracking capabilities, so I would
> > like to suggest the plan below to get things started:
> > 
> >   - Improve the I/O tracking patches (see (6) above) until they are in
> > mergeable shape.
> >   - Fix CFQ and AS to use the new I/O tracking functionality to show its
> > benefits. If the performance impact is acceptable, this should suffice to
> > convince the respective maintainers and get the I/O tracking patches
> > merged.
> >   - Implement a block layer resource controller. dm-ioband is a working,
> > feature-rich solution, but its dependency on the dm infrastructure is
> > likely to find opposition (the dm layer does not handle barriers
> > properly and the maximum size of I/O requests can be limited in some
> > cases). In such a case, we could either try to build a standalone
> > resource controller based on dm-ioband (which would probably hook into
> > generic_make_request) or try to come up with something new.
> >   - If the I/O tracking patches make it into the kernel we could move on
> > and try to get the Cgroup extensions to CFQ and AS mentioned before (see
> > (1), (2), and (3) above for details) merged.
> >   - Delegate the task of controlling the rate at which a task can
> > generate dirty pages to the memory controller.
> 
> I agree with your plan.
> We will keep improving bio-cgroup and porting it to the latest kernel.
Having more users of bio-cgroup would probably help get it merged, so
we'll certainly send patches as soon as we get our CFQ prototype into
shape.
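
As a rough illustration of the generic_make_request() hook mentioned
above, the submission path of such a standalone controller might look
like the sketch below. The helpers blkio_cgroup_id() and
ioband_throttle() are invented stand-ins for bio-cgroup's real
interface, which tags pages with the cgroup that dirtied them so that
buffered and asynchronous I/O gets charged to the right group:

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Hypothetical stand-ins for the tracking and throttling primitives. */
extern unsigned int blkio_cgroup_id(struct bio *bio);
extern void ioband_throttle(unsigned int cgroup_id, struct bio *bio);

/* Classify the bio by the cgroup that generated it, apply the
 * per-device policy, then hand it to the block layer as usual. */
static void ioband_make_request(struct bio *bio)
{
	unsigned int id = blkio_cgroup_id(bio);

	ioband_throttle(id, bio);	/* may sleep until budget allows */
	generic_make_request(bio);	/* normal block-layer submission */
}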

Regards,

Fernando
