Date:	Thu, 22 Oct 2015 09:14:32 -0600
From:	Jens Axboe <axboe@...nel.dk>
To:	jason <zhangqing.luo@...cle.com>, Tejun Heo <tj@...nel.org>
Cc:	Guru Anbalagane <guru.anbalagane@...cle.com>,
	Feng Jin <joe.jin@...cle.com>, linux-kernel@...r.kernel.org
Subject: Re: blk-mq: takes hours for scsi scanning finish when thousands of
 LUNs

On 10/22/2015 03:15 AM, jason wrote:
>
>
> On Thursday, October 22, 2015 04:47 PM, Tejun Heo wrote:
>> Hello,
>>
>> On Mon, Oct 19, 2015 at 07:40:13AM -0700, Zhangqing Luo wrote:
>> ....
>> > So every time blk_mq_freeze_queue_start is called, it runs this way:
>> >
>> > blk_mq_freeze_queue_start
>> > ->percpu_ref_kill->percpu_ref_kill_and_confirm
>> > ->__percpu_ref_switch_to_atomic
>> > ->call_rcu_sched(&ref->rcu,percpu_ref_switch_to_atomic_rcu)
>> >
>> > and blk_mq_freeze_queue_wait blocks on queue->mq_usage_counter
>> > while it is nonzero, and is woken up by percpu_ref_switch_to_atomic_rcu
>> > after a grace period
>> >
>> >
>> > My question here is: why do we switch the ref to percpu mode at
>> > blk_mq_finish_init?
>> > Because of this switch, the delay appears.
>>
>> Because percpu operation is way cheaper than atomic ones and we want
>> to optimize hot paths (request issue and completion) over cold paths
>> (init and config changes).  That's the whole point of percpu
>> refcnting.
>>
>> The reason why percpu ref starts in atomic mode is to avoid expensive
>> percpu freezing if the queue is created and abandoned in quick
>> succession as SCSI does during LUN scanning.  If percpu freezing is
>> happening during that, the right solution is moving finish_init to
>> late enough point so that percpu switching happens only after it's
>> known that the queue won't be abandoned.
>>
>> Thanks.
>>
> I agree that cheaper percpu operations optimize the hot paths,
> but how much do they affect performance?

A lot, since the queue referencing happens twice per IO. The switch to 
percpu refcounting was done to use shared/common code for this; the 
previous version was a hand-rolled equivalent.

> As you know, the switching causes a delay, and as the LUN count
> increases the delay grows. Do you have any idea
> about the problem?

Tejun already outlined a good solution to the problem:

"If percpu freezing is
happening during that, the right solution is moving finish_init to
late enough point so that percpu switching happens only after it's
known that the queue won't be abandoned."

It'd be great if you could look into that. Your original patch 
demonstrates exactly where the problem is, but of course it's not 
something that can be applied as-is.

-- 
Jens Axboe
