Message-Id: <200908101551.08605.knikanth@suse.de>
Date:	Mon, 10 Aug 2009 15:51:07 +0530
From:	Nikanth Karthikesan <knikanth@...e.de>
To:	Mike Snitzer <snitzer@...hat.com>
Cc:	Jens Axboe <jens.axboe@...cle.com>,
	Alasdair G Kergon <agk@...hat.com>,
	Kiyoshi Ueda <k-ueda@...jp.nec.com>, dm-devel@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] Allow delaying initialization of queue after allocation

On Saturday 08 August 2009 21:12:40 Mike Snitzer wrote:
> On Sat, Aug 08 2009 at 12:55am -0400,
>
> Nikanth Karthikesan <knikanth@...e.de> wrote:
> > Export a way to delay initializing a request_queue after allocating it.
> > This is needed by device-mapper devices: they create the queue at
> > device creation time, but decide whether it will use an elevator and
> > request mempool only after the first successful table load. Only
> > request-based dm devices use the elevator and requests. Without this,
> > one must either initialize and then free the mempool and elevator for
> > a bio-based dm device, or leave them allocated, as is currently done.
> >
> > Signed-off-by: Nikanth Karthikesan <knikanth@...e.de>
>
> This patch needed to be refreshed to account for the changes from this
> recent commit: a4e7d46407d73f35d217013b363b79a8f8eafcaa
>
> I've attached a refreshed patch.
>

Thanks.

> Though I still have questions/feedback below.
>
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index 4b45435..5db0772 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -569,12 +571,25 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
> >  	if (!q)
> >  		return NULL;
> >
> > -	q->node = node_id;
> > -	if (blk_init_free_list(q)) {
> > +	if (blk_init_allocated_queue(q, rfn, lock)) {
> > +		blk_put_queue(q);
> >  		kmem_cache_free(blk_requestq_cachep, q);
> >  		return NULL;
> >  	}
> >
> > +	return q;
> > +}
> > +EXPORT_SYMBOL(blk_init_queue_node);
> > +
> > +int blk_init_allocated_queue(struct request_queue *q, request_fn_proc *rfn,
> > +							 spinlock_t *lock)
> > +{
> > +	int err = 0;
> > +
> > +	err = blk_init_free_list(q);
> > +	if (err)
> > +		goto out;
> > +
> >  	/*
> >  	 * if caller didn't supply a lock, they get per-queue locking with
> >  	 * our embedded lock
> > @@ -598,15 +613,20 @@ blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
> >  	/*
> >  	 * all done
> >  	 */
> > -	if (!elevator_init(q, NULL)) {
> > -		blk_queue_congestion_threshold(q);
> > -		return q;
> > -	}
> > +	err = elevator_init(q, NULL);
> > +	if (err)
> > +		goto free_and_out;
> >
> > -	blk_put_queue(q);
> > -	return NULL;
> > +	blk_queue_congestion_threshold(q);
> > +
> > +	return 0;
> > +
> > +free_and_out:
> > +	mempool_destroy(q->rq.rq_pool);
> > +out:
> > +	return err;
> >  }
> > -EXPORT_SYMBOL(blk_init_queue_node);
> > +EXPORT_SYMBOL(blk_init_allocated_queue);
> >
> >  int blk_get_queue(struct request_queue *q)
> >  {
>
> In the previous code blk_init_queue_node() called blk_put_queue()
> only if elevator_init() failed.
>
> Why is blk_init_queue_node() now always calling blk_put_queue() on an
> error from blk_init_allocated_queue()?  It could be that
> blk_init_free_list() was what failed and not elevator_init().
>

I think not calling blk_put_queue() when blk_init_free_list() failed was a
bug, which this change now fixes.


> I'd imagine it is because some callers of blk_init_allocated_queue(),
> e.g. DM, must not have the queue's refcount dropped on failure?  A
> comment on _why_ would really help set the caller's expectations.  Maybe
> at the top of blk_init_allocated_queue()? E.g.:
>
> "It is up to the caller to manage the allocated queue's lifecycle
> relative to blk_init_allocated_queue() failure".  I guess that is
> obvious after having reviewed this but...
>
> Also, a comment that blk_init_allocated_queue()'s mempool_destroy() is
> to "cleanup the mempool allocated via blk_init_free_list()" would help.
>

Will add the comment when I resend the patch.

Thanks for reviewing.

Thanks
Nikanth
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
