Message-ID: <CACVXFVOVXLdgs_tnGHwZ5rLCuPBufg97+iTWv2RPj+NQPViTEQ@mail.gmail.com>
Date:	Mon, 4 May 2015 20:54:37 +0800
From:	Ming Lei <ming.lei@...onical.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	Christoph Hellwig <hch@...radead.org>,
	Jens Axboe <axboe@...nel.dk>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	"Justin M. Forbes" <jforbes@...oraproject.org>,
	Jeff Moyer <jmoyer@...hat.com>, "v4.0" <stable@...r.kernel.org>
Subject: Re: [PATCH v6] block: loop: avoiding too many pending per work I/O

On Sun, May 3, 2015 at 9:52 AM, Tejun Heo <tj@...nel.org> wrote:
> Hello,
>
> On Sat, May 02, 2015 at 10:56:20PM +0800, Ming Lei wrote:
>> > Maybe just cap max_active to NR_OF_LOOP_DEVS * 16 or sth?  But idk,
>>
>> It might not work because loop devices can be nested, as with the Fedora
>> live CD, and in theory max_active should be set to loop's queue depth *
>> nr_loop; otherwise there is a possibility of hanging.
>>
>> That is why this patch was introduced.
>
> If loop devices can be stacked, regardless of what you do with
> nr_active, it may deadlock.  There needs to be a rescuer per
> nesting level (or just one per device).  This means that the current
> code is broken.

Yes.
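
For completeness, a per-device workqueue with WQ_MEM_RECLAIM would give
each device (and hence each nesting level) its own rescuer.  Just a
sketch, assuming a wq field in struct loop_device; the helper name here
is made up for illustration, this is not the actual patch:

static int loop_prepare_queue(struct loop_device *lo)
{
	/*
	 * One WQ_MEM_RECLAIM workqueue, and hence one rescuer thread,
	 * per loop device, so stacked devices do not all depend on a
	 * single shared rescuer under memory pressure.
	 */
	lo->wq = alloc_workqueue("loop%d", WQ_MEM_RECLAIM, 0, lo->lo_number);
	if (!lo->wq)
		return -ENOMEM;
	return 0;
}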

>> > how many concurrent workers are we talking about and why are we
>> > capping per-queue concurrency from the worker pool side instead of
>> > the command tag side?
>>
>> I think there should be a performance advantage in making the queue depth
>> a bit larger, because it helps keep the queue pipeline full.  Also, queue
>> depth usually means how many requests the hardware can queue, which is a
>> bit different from per-queue concurrency.
>
> I'm not really following.  Can you please elaborate?

In the case of loop-mq, a bigger queue_depth often gives better performance
for sequential reads/writes that hit the page cache, because those I/Os
complete very quickly and it is better to handle them as a batch in a single
run of the work function; simply decreasing the queue depth may hurt
performance in this case.
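
As a rough illustration of the batching (cmd_list, lo_lock and
loop_handle_cmd() here are placeholders, not the driver's exact code),
the work function drains whatever has been queued in one go:

static void loop_queue_work(struct work_struct *work)
{
	struct loop_device *lo = container_of(work, struct loop_device, work);
	struct loop_cmd *cmd, *tmp;
	LIST_HEAD(batch);

	/* Grab everything queued so far and process it as one batch. */
	spin_lock_irq(&lo->lo_lock);
	list_splice_init(&lo->cmd_list, &batch);
	spin_unlock_irq(&lo->lo_lock);

	list_for_each_entry_safe(cmd, tmp, &batch, list)
		loop_handle_cmd(cmd);	/* cheap when it hits the page cache */
}

A deeper queue lets more commands accumulate between wakeups, so the
sequential page-cache case completes in larger batches.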

Thanks,
Ming Lei
