Date:	Mon, 20 Feb 2012 09:22:33 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Kent Overstreet <koverstreet@...gle.com>, axboe@...nel.dk,
	ctalbott@...gle.com, rni@...gle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 7/9] block: implement bio_associate_current()

On Fri, Feb 17, 2012 at 02:57:35PM -0800, Tejun Heo wrote:
> Hey, Vivek.
> 
> On Fri, Feb 17, 2012 at 05:51:26PM -0500, Vivek Goyal wrote:
> > Otherwise on every IO, we will end up comparing submitting tasks's
> > cgroup and cic/cfqq's cgroup.
> 
> But how much is that different from checking CHANGED bit on each IO?
> I mean, we can just do sth like cfqg->blkg->blkcg == bio_blkcg(bio).
> It isn't expensive.

I guess you will first determine cfqq associated with cic and then do

cfqq->cfqg->blkg->blkcg == bio_blkcg(bio)

One can do that, but it still does not get rid of the requirement of
checking for CGROUP_CHANGED: not every bio will have cgroup information
stored, and you still have to check whether the submitting task has
changed its cgroup since it last did IO.

> 
> > Also this will create problems, if two threads sharing io context are
> > in two different cgroups. We will frequently end up changing the
> > association.
> 
> blkcg doesn't allow that anyway (it tries but is racy) and I actually
> was thinking about sending a RFC patch to kill CLONE_IO.

I thought CLONE_IO is useful, as it allows threads to share an IO context.
qemu wanted to use it for its IO threads so that one virtual machine
does not get a higher share of the disk just by creating more threads. In
fact, if multiple threads are doing related IO, we would like them to use
the same io context. For programs that don't use CLONE_IO (the dump
utility, for example), we try to detect closely related IO in CFQ and
merge the cfq queues (effectively simulating a shared io context).

Hence, I think CLONE_IO is useful and killing it probably does not buy
us much.

Can we logically say that the io_context is owned by the thread group
leader, and the cgroup of the io_context changes only if the thread group
leader changes its cgroup? Then even if some threads are in a different
cgroup, their IO gets accounted to the thread group leader's cgroup.

So we can store the ioc->blkcg association, and this association changes
when the thread group leader changes cgroup. We can possibly keep
CHANGED_CGROUP around as well, so that next time the old cic->cfqq
association is dropped and a new one is established with the new ioc->blkcg.

Thanks
Vivek
