[<prev] [next>] [thread-next>] [day] [month] [year] [list]
Message-ID: <dc734449-7cd8-3a8e-9fdc-d7d854393900@kernel.dk>
Date: Fri, 24 Aug 2018 14:41:07 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Anchal Agarwal <anchalag@...n.com>
Cc: fllinden@...zon.com, Jianchao Wang <jianchao.w.wang@...cle.com>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] blk-wbt: get back the missed wakeup from __wbt_done
On 8/24/18 2:33 PM, Anchal Agarwal wrote:
> On Fri, Aug 24, 2018 at 12:50:44PM -0600, Jens Axboe wrote:
>> On 8/24/18 12:12 PM, Anchal Agarwal wrote:
>>> That's totally fair. As compared to before the patch it was way too high
>>> and my test case wasn't even running due to the thunderign herd issues and
>>> queue re-ordering. Anyways as I also mentioned before 10 times
>>> contention is not too bad since its not really affecting much the number of
>>> files read in my applciation. Also, you are right waking up N tasks seems
>>> plausible.
>>
>> OK, I'm going to take that as a positive response. I'm going to propose
>> the last patch as the final addition in this round, since it does fix a
>> gap in the previous. And I do think that we need to wake as many tasks
>> as can make progress, otherwise we're deliberately running the device at
>> a lower load than we should.
>>
>>> My application is somewhat similar to database workload. It does uses fsync
>>> internally also. So what it does is it creates files of random sizes with
>>> random contents. It stores the hash of those files in memory. During the
>>> test it reads those files back from storage and checks their hashes.
>>
>> How many tasks are running for your test?
>>
>> --
>> Jens Axboe
>>
>>
>
> So there are 128 Concurrent reads/writes happening. Max files written before
> reads start is 16384 and each file is of size 512KB. Does that answer your
> question?
Yes it does, thanks. That's not a crazy amount of tasks or threads.
> BTW, I still have to test the last patch you send but by looking at the patch
> I assumed it will work anyways!
Thanks for the vote of confidence, I'd appreciate if you would give it a
whirl. Your workload seems nastier than what I test with, so would be
great to have someone else test it too.
--
Jens Axboe
Powered by blists - more mailing lists