Message-ID: <20120413205501.GL26383@redhat.com>
Date: Fri, 13 Apr 2012 16:55:01 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: axboe@...nel.dk, ctalbott@...gle.com, rni@...gle.com,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
containers@...ts.linux-foundation.org
Subject: Re: [PATCH 07/11] blkcg: make request_queue bypassing on allocation
On Fri, Apr 13, 2012 at 01:47:10PM -0700, Tejun Heo wrote:
> On Fri, Apr 13, 2012 at 04:44:46PM -0400, Vivek Goyal wrote:
> > On Fri, Apr 13, 2012 at 01:37:26PM -0700, Tejun Heo wrote:
> >
> > [..]
> > > blk_cleanup_queue() doesn't use blk_queue_bypass_start() to enter
> > > bypass mode.
> >
> > Oh now elevator_exit() has been moved into blk_release_queue(). But
> > problem will still be there, isn't it? During driver init, most likely driver
> > is holding last reference of the queue and blk_release_queue() will be called
> > in the context of blk_cleanup_queue() causing the overhead?
>
> Hmmm? blk_cleanup_queue() will put the queue into bypassing mode
> without going through synchronize_rcu() and all the following
> bypassing operations just inc/decs bypass_depth without any draining
> operation.
Ok, this is the non-obvious part. The very reason you could optimize away
synchronize_rcu() was that either somebody has already called
synchronize_rcu() (the first caller of blk_queue_bypass_start()) or we
know that it is not needed (as we are instantiating the queue and no
IO could be going on).
But neither seems to be the case here. So, to make sure that blkg_lookup()
under RCU will see the updated value of the queue's bypass flag, are we
relying on the fact that the caller should see the DEAD flag and not go
ahead with blkg_lookup()? If yes, at least it is not obvious.
Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/