Date:	Tue, 5 May 2015 22:46:10 +0800
From:	Ming Lei <ming.lei@...onical.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Jens Axboe <axboe@...nel.dk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"Justin M. Forbes" <jforbes@...oraproject.org>,
	Jeff Moyer <jmoyer@...hat.com>,
	Christoph Hellwig <hch@...radead.org>,
	"v4.0" <stable@...r.kernel.org>
Subject: Re: [PATCH 2/2] block: loop: avoiding too many pending per work I/O

On Tue, May 5, 2015 at 9:59 PM, Tejun Heo <tj@...nel.org> wrote:
> On Tue, May 05, 2015 at 07:49:55PM +0800, Ming Lei wrote:
> ...
>> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
>> index 3dc1598..1bee523 100644
>> --- a/drivers/block/loop.c
>> +++ b/drivers/block/loop.c
>> @@ -725,7 +725,7 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
>>               goto out_putf;
>>       error = -ENOMEM;
>>       lo->wq = alloc_workqueue("kloopd%d",
>> -                     WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0,
>> +                     WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 16,
>
> It's a bit weird to hard code this to 16 as this effectively becomes a
> hidden bottleneck for concurrency.  For cases where 16 isn't a good
> value, hunting down what's going on can be painful as it's not visible
> anywhere.  I still think the right knob to control concurrency is
> nr_requests for the loop device.  You said that for linear IOs, it's
> better to have higher nr_requests than concurrency but can you
> elaborate why?

What I mean is that sequential I/O is more likely to hit the page cache,
so each request completes quite quickly. In that case it is often more
efficient to handle the requests in one context (for example, one by one
from the I/O queue) than to spread them across different contexts
scheduled on different worker threads. Setting a bigger nr_requests
(queue depth) allows that, because more requests can be outstanding and
drained by a single worker.

Thanks,
Ming Lei
