Message-ID: <20120420192930.GR22419@redhat.com>
Date:	Fri, 20 Apr 2012 15:29:30 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Fengguang Wu <fengguang.wu@...el.com>
Cc:	Tejun Heo <tj@...nel.org>, Jan Kara <jack@...e.cz>,
	Jens Axboe <axboe@...nel.dk>, linux-mm@...ck.org,
	sjayaraman@...e.com, andrea@...terlinux.com, jmoyer@...hat.com,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	kamezawa.hiroyu@...fujitsu.com, lizefan@...wei.com,
	containers@...ts.linux-foundation.org, cgroups@...r.kernel.org,
	ctalbott@...gle.com, rni@...gle.com, lsf@...ts.linux-foundation.org
Subject: Re: [RFC] writeback and cgroup

On Fri, Apr 20, 2012 at 08:45:18PM +0800, Fengguang Wu wrote:

[..]
> If we still keep the global async queue, it can run small 40ms
> slices without defeating the flusher's 500ms granularity. After each
> slice it can freely switch to other cgroups with sync IOs, so it is
> free from latency issues. On return, it will continue to serve the
> same inode. It will basically work on behalf of one cgroup for 500ms
> worth of data, then on behalf of another cgroup for 500ms worth of
> data, and so on. That behavior does not impact fairness, because the
> queue still uses small slices and its weight is computed system-wide,
> which yields a smoothing/amortizing effect over a long period of
> time.
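
To make the quoted amortization argument concrete, here is a toy
userspace sketch (purely illustrative; charge_slice, vtime_ms and the
weights are made-up names, not kernel code). The global async queue is
charged in 40ms slices while the flusher feeds it 500ms chunks per
cgroup; weighted service evens out over many slices rather than within
any single one:

#include <stdio.h>

#define SLICE_MS   40    /* IO scheduler slice for the async queue */
#define CHUNK_MS  500    /* flusher granularity: one inode per chunk */

struct cgroup {
	const char *name;
	unsigned int weight;     /* proportional-IO weight */
	unsigned long vtime_ms;  /* weighted service received so far */
};

/* Charge one slice to the cgroup whose data the async queue is
 * currently writing; a heavier weight accumulates vtime more slowly,
 * so that group receives more wall-clock service over time. */
static void charge_slice(struct cgroup *cg)
{
	cg->vtime_ms += SLICE_MS * 100 / cg->weight;
}

int main(void)
{
	struct cgroup groups[] = {
		{ "A", 200, 0 },
		{ "B", 100, 0 },
	};

	/* Simulate the flusher alternating 500ms chunks between two
	 * cgroups: the async queue works on behalf of one group per
	 * chunk, but is preempted every SLICE_MS so sync IO elsewhere
	 * stays responsive. */
	for (int chunk = 0; chunk < 4; chunk++) {
		struct cgroup *cg = &groups[chunk % 2];

		for (int ms = 0; ms < CHUNK_MS; ms += SLICE_MS)
			charge_slice(cg);
		printf("after chunk %d: A vtime=%lums B vtime=%lums\n",
		       chunk, groups[0].vtime_ms, groups[1].vtime_ms);
	}
	return 0;
}

After a few chunks the two vtimes track the 2:1 weight ratio, which is
the "smooth/amortize over a long period" effect described above.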

Ok, so Tejun did say that we will have a switch that allows retaining
the old behavior of keeping all async writes in the root group rather
than in individual groups. Throughput-sensitive users can make use of
that, so there is no need to push proportional IO logic into the
writeback layer for buffered writes?
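
For illustration only, that switch could boil down to something like
the following (all names are hypothetical; this is not the actual
patch or kernel API): a single flag deciding whether async writeback
is charged to the root group, as today, or to the issuing cgroup:

#include <stdbool.h>
#include <stdio.h>

struct io_group { const char *name; };

/* The proposed on/off switch (hypothetical name). */
static bool per_group_async = false;

/* Legacy mode: one global async queue under the root group, which
 * preserves throughput by avoiding per-group queue switching.
 * Isolation mode: async IO is queued and weighted per cgroup, at the
 * cost of one group's async writes delaying another group's sync IO. */
static struct io_group *async_io_group(struct io_group *issuer,
				       struct io_group *root)
{
	return per_group_async ? issuer : root;
}

int main(void)
{
	struct io_group root = { "root" }, webapp = { "webapp" };

	printf("async IO charged to: %s\n",
	       async_io_group(&webapp, &root)->name); /* "root" */
	per_group_async = true;
	printf("async IO charged to: %s\n",
	       async_io_group(&webapp, &root)->name); /* "webapp" */
	return 0;
}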

Personally, I am not too excited about putting async IO in separate
groups, because the async IO of one group will then start impacting
the latencies of sync IO in another group, which in practice may not
be desirable. But others do have use cases for a separate per-group
async IO queue, so as long as the switch is there to change the
behavior, I am not too worried.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
