Date:	Tue, 30 Jul 2013 21:24:14 +0800
From:	Shaohua Li <shli@...nel.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org,
	neilb@...e.de, djbw@...com
Subject: Re: [patch 3/3] raid5: only wakeup necessary threads

On Tue, Jul 30, 2013 at 08:46:55AM -0400, Tejun Heo wrote:
> Hello,
> 
> On Tue, Jul 30, 2013 at 01:52:10PM +0800, shli@...nel.org wrote:
> > If there are not enough stripes to handle, we'd better not always queue all
> > available work_structs. If one worker can only handle a few stripes, or even
> > none, it will hurt request merging and create lock contention.
> > 
> > With this patch, the number of running work_structs depends on the number of
> > pending stripes. Note that some statistics used in the patch are accessed
> > without locking protection. This shouldn't matter; we just try our best to
> > avoid queueing unnecessary work_structs.
> 
> I haven't really followed the code, but two general comments.
> 
> * Stacking drivers in general should always try to keep the bios
>   passing through in the same order in which they are received.  The
>   order of bios is important information to the io scheduler, and io
>   scheduling will suffer badly if the bios are shuffled by the
>   stacking driver.  It'd probably be a good idea to have a mechanism
>   to keep the issue order intact even when multiple workers are
>   employed.

In the raid5 case, it's very hard to keep the order in which the bios were
passed in, because we need to read some disks, calculate parity, and then
write some disks, and the timing of those steps can break any ordering.
Besides, the workqueue handles 8 stripes at a time, so I suppose that
preserves some of the order where it exists.
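
To make the batching concrete, one worker invocation amounts to roughly the
below (a sketch only: MAX_STRIPE_BATCH and the r5worker layout here are
illustrative, not the exact md code):

#define MAX_STRIPE_BATCH 8

/* One work_struct callback: grab up to a batch of pending stripes
 * from the shared list and handle them together, so stripes queued
 * close to each other are also processed close together. */
static void raid5_do_work(struct work_struct *work)
{
	struct r5worker *worker = container_of(work, struct r5worker, work);
	struct r5conf *conf = worker->conf;
	struct stripe_head *batch[MAX_STRIPE_BATCH];
	int i, n = 0;

	spin_lock_irq(&conf->device_lock);
	/* Pull stripes in list order; within one batch the original
	 * issue order is preserved. */
	while (n < MAX_STRIPE_BATCH && !list_empty(&conf->handle_list)) {
		batch[n] = list_first_entry(&conf->handle_list,
					    struct stripe_head, lru);
		list_del_init(&batch[n]->lru);
		n++;
	}
	spin_unlock_irq(&conf->device_lock);

	for (i = 0; i < n; i++)
		handle_stripe(batch[i]);
}

Since each worker drains the list in order, whatever ordering survives the
read/parity/write timing is kept within a batch.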
 
> * While limiting the number of work_structs dynamically could be
>   beneficial, and it's up to Neil, it'd be nice if you could accompany
>   it with some numbers so that it can be decided whether such an
>   optimization is actually worthwhile.  The same goes for the whole
>   series, I suppose.

Sure, I can add the numbers in the next post. Basically, if I run 8 workers
on a 7-disk raid5 setup, multi-threading is 4x ~ 5x faster.
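
For context, the wakeup decision boils down to something like the below
(again a simplified sketch; the field and function names are illustrative,
and the pending count is read unlocked, as noted in the changelog):

/* Wake only as many workers as the pending stripes justify: with up
 * to MAX_STRIPE_BATCH (8) stripes handled per worker invocation,
 * queueing more work_structs than that only adds lock contention and
 * hurts request merging.  'pending' is read without device_lock; a
 * stale value just means one worker too many or too few gets queued,
 * which is harmless. */
static void raid5_wakeup_workers(struct r5conf *conf, int pending)
{
	int i, needed = DIV_ROUND_UP(pending, MAX_STRIPE_BATCH);

	if (needed > conf->worker_cnt)
		needed = conf->worker_cnt;

	for (i = 0; i < needed; i++)
		queue_work(raid5_wq, &conf->workers[i].work);
}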

Thanks,
Shaohua
