Message-ID: <969389e7-b1bc-0559-6cc9-9461b034a24f@kernel.dk>
Date: Thu, 23 Aug 2018 17:14:57 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Anchal Agarwal <anchalag@...n.com>,
"van der Linden, Frank" <fllinden@...zon.com>,
jianchao.w.wang@...cle.com
Cc: "mlinux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] blk-wbt: get back the missed wakeup from __wbt_done
On 8/23/18 5:03 PM, Jens Axboe wrote:
>> Hi Jens, this patch looks much cleaner for sure, as Frank pointed out
>> too. Basically it looks similar to wake_up_nr, only making sure that
>> the woken-up requests won't get reordered. It does solve the
>> thundering herd issue. However, I tested the patch against my
>> application and lock contention numbers rose to around 10 times what
>> I had with your last 3 patches. It also dropped the total number of
>> files read by 0.12% and the rate at which they were read by 0.02%,
>> but that is not a very significant drop. Is the lock contention
>> worth the tradeoff? I also added the missing
>> __set_current_state(TASK_RUNNING) to the patch for testing.
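For context, the wake_up_nr() referenced above is the stock wait-queue
helper that wakes up to nr exclusive waiters instead of just one; in
include/linux/wait.h it is defined roughly as:

#define wake_up(x)		__wake_up(x, TASK_NORMAL, 1, NULL)
#define wake_up_nr(x, nr)	__wake_up(x, TASK_NORMAL, nr, NULL)
#define wake_up_all(x)		__wake_up(x, TASK_NORMAL, 0, NULL)

What it does not prevent is a newly arriving request grabbing the freed
budget before a woken-but-not-yet-running waiter gets to it, which is
the kind of reordering the patch is trying to rule out.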
>
> Can you try this variant? I don't think we need a
> __set_current_state() after io_schedule(), should be fine as-is.
>
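To illustrate the __set_current_state() point, here is a minimal sketch
(not the actual blk-wbt code; rqw_can_proceed() is a made-up stand-in
for the inflight limit check) of the usual exclusive wait loop:

static void rqw_wait_sketch(struct rq_wait *rqw)
{
	DEFINE_WAIT(wait);

	for (;;) {
		prepare_to_wait_exclusive(&rqw->wait, &wait,
					  TASK_UNINTERRUPTIBLE);
		if (rqw_can_proceed(rqw))	/* hypothetical limit check */
			break;
		io_schedule();
		/* only reached after a wakeup; try_to_wake_up() has
		 * already put this task back into TASK_RUNNING */
	}
	finish_wait(&rqw->wait, &wait);
}

Since the waker moves the task to TASK_RUNNING before it can run again,
and finish_wait() sets TASK_RUNNING anyway, an explicit
__set_current_state(TASK_RUNNING) after io_schedule() buys nothing here.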
> I'm not surprised this will raise contention a bit, since we're now
> potentially waking N tasks, if N can queue. With the initial change,
> we'd always just wake one. That is arguably incorrect. You say
> contention is 10 times higher; how does that compare to before your
> patch?
>
> Is it possible to run something that looks like your workload?

Additionally, is the contention you are seeing on the wait queue, or on
the atomic counter? When you say lock contention, I'm inclined to think
it's the rqw->wait.lock.
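For reference, the two contention candidates map onto the two members
of struct rq_wait, which at the time was essentially:

struct rq_wait {
	wait_queue_head_t wait;
	atomic_t inflight;
};

rqw->wait.lock is the spinlock embedded in the wait_queue_head_t and is
taken for every add/remove/wake on that queue, while the inflight
counter is handled with plain atomic ops and never takes that lock, so
lock profiling should show which of the two is actually hot.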
--
Jens Axboe