Message-ID: <20120427194654.GN10579@redhat.com>
Date:	Fri, 27 Apr 2012 15:46:54 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	axboe@...nel.dk, ctalbott@...gle.com, rni@...gle.com,
	linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
	containers@...ts.linux-foundation.org, fengguang.wu@...el.com,
	hughd@...gle.com, akpm@...ux-foundation.org
Subject: Re: [PATCH 11/11] blkcg: implement per-blkg request allocation

On Thu, Apr 26, 2012 at 02:59:21PM -0700, Tejun Heo wrote:

[..]
> @@ -926,6 +936,7 @@ static struct request *__get_request(struct request_list *rl, int rw_flags,
>  		goto fail_alloc;
>  
>  	blk_rq_init(q, rq);
> +	blk_rq_set_rl(rq, rl);

Given that we have established the rq and blkg relation at the time of
allocation, should we modify CFQ to just use that relation instead of
trying to look up the group again based on the bio?
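
Roughly what I have in mind in cfq_set_request() (just a sketch; I am
assuming the blk_rq_rl() accessor and the rl->blkg back pointer this
series introduces):

	static int cfq_set_request(struct request_queue *q, struct request *rq,
				   struct bio *bio, gfp_t gfp_mask)
	{
		struct cfq_data *cfqd = q->elevator->elevator_data;
		struct cfq_group *cfqg;

		/*
		 * Reuse the blkg the request was accounted against at
		 * allocation time instead of doing
		 * blkg_lookup_create(bio_blkcg(bio), q) again: one lookup
		 * less, and no second blkg gets created if the task migrated
		 * while sleeping for the request.
		 */
		cfqg = blkg_to_cfqg(blk_rq_rl(rq)->blkg);

		/* rest of cfq_set_request() stays as it is */
		...
	}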

We avoid one lookup, and we also avoid duplicate creation of a blkg in
the following corner case when bio == NULL:

	- blkg_get_rl()
	- request allocation fails; sleep, drop queue lock
	- process is moved to a different cgroup; the original cgroup is
	  deleted. pre_destroy will clean up all blkgs on the blkcg.
	- process wakes up, request is allocated, set_request sets up a new
	  blkg based on the new cgroup. Now the request is queued in one
	  blkg/cgroup while it came out of the quota of the other
	  blkg/cgroup.

Well, I have a question. Ideally nobody should be linking any more blkgs
to a blkcg once blkg_pre_destroy() has been called? But can it happen
that bio_associate_current() takes a reference to the blkcg and the bio
is throttled, the cgroup associated with the bio is deleted resulting in
pre_destroy(), and now the bio is submitted to CFQ, which will try to
create a new blkg for the blkcg-queue pair. Once the IO is complete, the
bio will drop its blkcg reference, which in turn will free up the blkcg,
while the associated blkg is still around and will not be cleaned up.

IOW, it looks like we need a mechanism to mark a blkcg dead (set in the
pre_destroy() call), so that any submissions to that blkcg after that
point result in the bio being diverted to the root group?
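
Something along these lines is what I am imagining (purely hypothetical;
no such flag exists today and the names are made up for illustration):

	/* in the blkcg pre_destroy callback */
	blkcg->dead = true;

	/* in blkg_lookup_create() / on the submission path */
	if (unlikely(blkcg->dead))
		blkcg = &blkcg_root;	/* divert further IO to the root group */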

If we reuse rl->blkg during CFQ submission, we avoid that problem: the
blkg the IO is being submitted to has already been disconnected from the
blkcg list, and hopefully that is not a problem and IO can still be
submitted to this blkg.

Thanks
Vivek
