Message-ID: <CACVXFVPWPCBiZ3KaxM2ABQuiHhoY2dpZbxRnms3E7RvoMDpgqQ@mail.gmail.com>
Date: Fri, 22 May 2015 21:32:34 +0800
From: Ming Lei <ming.lei@...onical.com>
To: Josh Boyer <jwboyer@...oraproject.org>
Cc: Jens Axboe <axboe@...nel.dk>,
"Linux-Kernel@...r. Kernel. Org" <linux-kernel@...r.kernel.org>,
"Justin M. Forbes" <jforbes@...oraproject.org>,
Jeff Moyer <jmoyer@...hat.com>, Tejun Heo <tj@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
"v4.0" <stable@...r.kernel.org>
Subject: Re: [PATCH 2/2] block: loop: avoiding too many pending per work I/O
On Fri, May 22, 2015 at 8:36 PM, Josh Boyer <jwboyer@...oraproject.org> wrote:
> On Tue, May 5, 2015 at 7:49 AM, Ming Lei <ming.lei@...onical.com> wrote:
>> If there is too much pending per-work I/O, too many
>> high-priority worker threads can be generated, and
>> system performance can be affected.
>>
>> This patch limits the max_active parameter of workqueue as 16.
>>
>> This patch fixes a Fedora 22 live-boot performance
>> regression when booting from squashfs over dm on top
>> of loop, and it looks like the following factors are
>> related to the problem:
>>
>> - unlike other filesystems (such as ext4), squashfs
>> is a bit special: I observed that increasing the number
>> of I/O jobs accessing files in squashfs improves I/O
>> performance only a little, but it can make a big
>> difference for ext4
>>
>> - nested loop: both squashfs.img and ext3fs.img are mounted
>> as loop block devices, and ext3fs.img is inside the squashfs
>>
>> - during booting, lots of tasks may run concurrently
>>
>> Fixes: b5dd2f6047ca108001328aac0e8588edd15f1778
>> Cc: stable@...r.kernel.org (v4.0)
>> Cc: Justin M. Forbes <jforbes@...oraproject.org>
>> Signed-off-by: Ming Lei <ming.lei@...onical.com>
>
> Did we ever come to conclusion on this and patch 1/2 in the series?
> Fedora has them applied to its 4.0.y-based kernels to fix the
> performance regression we saw, and we're carrying them in rawhide as
> well. I'm curious if these will go into 4.1 or if they're queued at
> all for 4.2?
I saw it queued in the for-next branch of the block tree, so it should
be merged in 4.2.
>
> josh
>
>> ---
>> drivers/block/loop.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
>> index 3dc1598..1bee523 100644
>> --- a/drivers/block/loop.c
>> +++ b/drivers/block/loop.c
>> @@ -725,7 +725,7 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
>> goto out_putf;
>> error = -ENOMEM;
>> lo->wq = alloc_workqueue("kloopd%d",
>> - WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0,
>> + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 16,
>> lo->lo_number);
>> if (!lo->wq)
>> goto out_putf;
>> --
>> 1.9.1
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe stable" in
>> the body of a message to majordomo@...r.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html