Message-ID: <20120413203758.GJ26383@redhat.com>
Date:	Fri, 13 Apr 2012 16:37:58 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	axboe@...nel.dk, ctalbott@...gle.com, rni@...gle.com,
	linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
	containers@...ts.linux-foundation.org
Subject: Re: [PATCH 07/11] blkcg: make request_queue bypassing on allocation

On Fri, Apr 13, 2012 at 04:32:05PM -0400, Vivek Goyal wrote:
> On Fri, Apr 13, 2012 at 01:11:31PM -0700, Tejun Heo wrote:
> > With the previous change to guarantee bypass visibility for RCU read
> > lock regions, entering bypass mode involves non-trivial overhead, and
> > future changes are scheduled to make use of bypass mode during the init
> > path.  Combined, this may end up adding noticeable delay during boot.
> > 
> > This patch makes request_queue start its life in bypass mode, which is
> > ended on queue init completion at the end of
> > blk_init_allocated_queue(), and updates blk_queue_bypass_start() such
> > that draining and RCU synchronization are performed only when the
> > queue actually enters bypass mode.
> > 
> > This avoids unnecessarily switching in and out of bypass mode during
> > init, avoiding the overhead and any nasty surprises which may stem from
> > leaving bypass mode on half-initialized queues.
> 
> Tejun, I am not sure that this will fix the problem completely. I think
> we will still face the overhead of synchronize_rcu() in
> blkcg_deactivate_policy() as it will be called from cfq_exit_queue() for
> initialized queues.
> 
> In the past I had used synchronize_rcu() in cfq_exit_queue() and
> noticed the overhead. It looks like the driver was creating fully
> initialized queues and tearing them down soon after.

Here is the old commit I was referring to; the driver in question was megaraid.

commit bb729bc98c0f3e6a898d8730df3e2830bf68751a
Author: Jens Axboe <jens.axboe@...cle.com>
Date:   Sun Dec 6 09:54:19 2009 +0100

    cfq-iosched: use call_rcu() instead of doing grace period stall on queue exit

    After the merge of the IO controller patches, booting on my megaraid box
    ran much slower. Vivek Goyal traced it down to megaraid discovery creating
    tons of devices, each suffering a grace period when they later kill that
    queue (if no device is found).
    
    So let's use call_rcu() to batch these deferred frees, instead of taking
    the grace period hit for each one.
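
In sketch form, the change amounts to replacing a blocking wait with a
deferred callback (illustrative only; the _before/_after wrappers and the
rcu_head placement in struct cfq_data are assumptions, not the exact cfq
code from that commit):

    /* Callback invoked once a grace period has elapsed. */
    static void cfq_cfqd_free(struct rcu_head *head)
    {
            kfree(container_of(head, struct cfq_data, rcu_head));
    }

    /* Before: every exiting queue stalled for a full grace period. */
    static void cfq_exit_queue_before(struct cfq_data *cfqd)
    {
            synchronize_rcu();      /* caller sleeps until RCU readers finish */
            kfree(cfqd);
    }

    /* After: queue the free and return immediately; grace periods are
     * shared across the many short-lived queues the driver creates.
     */
    static void cfq_exit_queue_after(struct cfq_data *cfqd)
    {
            call_rcu(&cfqd->rcu_head, cfq_cfqd_free);
    }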

Thanks
Vivek