Date:	Tue, 9 Jul 2013 20:50:24 -0700
From:	Tejun Heo <tj@...nel.org>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Konstantin Khlebnikov <khlebnikov@...nvz.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...e.cz>,
	cgroups@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
	Sha Zhengju <handai.szj@...il.com>, devel@...nvz.org,
	Jens Axboe <axboe@...nel.dk>
Subject: Re: [PATCH RFC] fsio: filesystem io accounting cgroup

Hello,

On Tue, Jul 09, 2013 at 11:09:55PM -0400, Vivek Goyal wrote:
> Stacking drivers are pretty important and we expect throttling to
> work with them. By throttling bios, a single hook worked for both
> request based drivers and bio based drivers.

Oh yeah, sure, we have them working now, so there's no way to break
them but that doesn't mean it's a good overall design.  I don't have a
good answer for this one.  The root cause is having the distinction
between bio and rq based drivers.  With the right constructs, I
suspect we probably could have done away with bio based drivers, but,
well, that's all history now.
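
(For reference, a rough sketch of what that single bio-level hook looks
like; heavily simplified and from memory, with an illustrative function
name, not the actual blk-core/blk-throttle code:)

	/*
	 * Every bio passes through here on submission, before it
	 * matters whether the driver underneath is bio based (dm, md)
	 * or request based (e.g. sd via blk_queue_bio()), so a single
	 * throttling hook covers both.
	 */
	void submit_bio_sketch(struct bio *bio)
	{
		struct request_queue *q = bdev_get_queue(bio->bi_bdev);

		if (blk_throtl_bio(q, bio))
			return;		/* throttled, held for later dispatch */

		q->make_request_fn(q, bio);	/* remap or build a request */
	}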

> So it looks like for bio based drivers you want bio throttling and
> for request based drivers, request throttling, with a separate hook
> defined in blk_queue_bio(). A generic hook could check the type of
> request queue and not throttle the bio if it is a request based
> queue; the request queue based hook would then throttle it.
> 
> So in a cgroup, blkio.throttle.io_serviced will have stats for
> bio/request depending on the type of device.
> 
> And we will need to modify the throttling logic so that it can
> handle both bio and request throttling. Not sure how much of the
> code can be shared between bio and request throttling.

I'm not sure how much (de)multiplexing and sharing we'd be doing but
I'm afraid there's gonna need to be some.  We really can't use the
same logic for SSDs and rotating rusts after all and it probably would
be best to avoid contaminating SSD paths with lots of guesstimating
logics necessary for rotating rusts.
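
(To make that concrete, roughly what the demultiplexing could look
like; an illustrative sketch only, not the actual code or a real
proposal:)

	/*
	 * Hooked from the bio submission path.  Returns true if the
	 * bio was throttled and must not be dispatched now.
	 */
	static bool throtl_demux_sketch(struct request_queue *q,
					struct bio *bio)
	{
		/*
		 * Request based queue (->request_fn is set): let the
		 * bio through; throttling/accounting would happen
		 * later, on the struct request, from a hook in
		 * blk_queue_bio().
		 */
		if (q->request_fn)
			return false;

		/*
		 * Bio based queue (dm, md, ...): throttle the bio
		 * itself, as blk-throttle does today.  The SSD vs
		 * rotating-rust split would then hang off
		 * blk_queue_nonrot(q) inside the throttling logic.
		 */
		return blk_throtl_bio(q, bio);
	}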

> I am not sure about the request based multipath driver; it might
> require some special handling.

If it's not supported now, I'll be happy with just leaving it alone
and telling mp users to configure the underlying queues.

> Is it roughly in line with what you have been thinking?

I'm hoping to keep it somewhat manageable at least.  I wouldn't mind
leaving stacking driver and cfq-iosched support as they are while only
supporting SSD devices with new code.  It's all pie in the sky at this
point and none of this matters before we fix the bdi and writeback
issue anyway.

Thanks.

-- 
tejun