Message-ID: <d0488a26-bff3-bd92-b5c7-74131161d55e@linux.alibaba.com>
Date: Wed, 25 Nov 2020 22:15:38 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Tejun Heo <tj@...nel.org>
Cc: axboe@...nel.dk, baolin.wang7@...il.com,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/7] blk-iocost: Add a flag to indicate if need update hwi
> Hello,
>
> On Tue, Nov 24, 2020 at 11:33:33AM +0800, Baolin Wang wrote:
>> @@ -1445,7 +1447,8 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, bool pay_debt,
>> * after the above debt payment.
>> */
>> ctx.vbudget = vbudget;
>> - current_hweight(iocg, NULL, &ctx.hw_inuse);
>> + if (need_update_hwi)
>> + current_hweight(iocg, NULL, &ctx.hw_inuse);
>
> So, if you look at the implementation of current_hweight(), it's
>
> 1. If nothing has changed, read out the cached values.
> 2. If something has changed, recalculate.
Yes, correct.
>
> and the "something changed" test is a single memory read (most likely L1 hot
> at this point) plus a test for equality. IOW, the change you're suggesting
> isn't much of an optimization. Maybe the compiler can do a somewhat better
> job of arranging the code so it's a register load rather than a memory load,
> but given that it's already a relatively cold wait path, this is unlikely to
> make any actual difference. And that's how current_hweight() is meant to be
> used.
What I want to avoid is the 'atomic_read(&ioc->hweight_gen)' in
current_hweight(), because that is always a memory load rather than a
register load, whereas a flag can stay cached and is lighter than a
memory load.
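
For reference, here is a simplified sketch of the generation-check fast
path in current_hweight() that I am referring to (field and helper names
are illustrative and abbreviated, not the exact blk-iocost code):

static void current_hweight(struct ioc_gq *iocg, u32 *hw_activep, u32 *hw_inusep)
{
	struct ioc *ioc = iocg->ioc;

	/*
	 * This atomic_read() is the memory load in question: the global
	 * generation counter is compared against the copy cached in the iocg.
	 */
	if (atomic_read(&ioc->hweight_gen) == iocg->hweight_gen)
		goto out;	/* nothing changed, use the cached weights */

	/*
	 * Something changed: recompute the weights and remember which
	 * generation they were computed for.
	 * (recalc_hweights() is a placeholder for that recalculation.)
	 */
	recalc_hweights(iocg);
	iocg->hweight_gen = atomic_read(&ioc->hweight_gen);

out:
	if (hw_activep)
		*hw_activep = iocg->hweight_active;
	if (hw_inusep)
		*hw_inusep = iocg->hweight_inuse;
}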
But after thinking about it more, I think we can simply move the
"current_hweight(iocg, NULL, &ctx.hw_inuse);" call to the right place
without introducing a new flag. What do you think of the code below?
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index bbe86d1..db29200 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -1413,7 +1413,7 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, bool pay_debt,
 	lockdep_assert_held(&iocg->waitq.lock);
-	current_hweight(iocg, &hwa, NULL);
+	current_hweight(iocg, &hwa, &ctx.hw_inuse);
 	vbudget = now->vnow - atomic64_read(&iocg->vtime);
 	/* pay off debt */
@@ -1428,6 +1428,11 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, bool pay_debt,
 		atomic64_add(vpay, &iocg->done_vtime);
 		iocg_pay_debt(iocg, abs_vpay, now);
 		vbudget -= vpay;
+		/*
+		 * As paying off debt restores hw_inuse, it must be read after
+		 * the above debt payment.
+		 */
+		current_hweight(iocg, NULL, &ctx.hw_inuse);
 	}
 	if (iocg->abs_vdebt || iocg->delay)
@@ -1446,11 +1451,9 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, bool pay_debt,
 	/*
 	 * Wake up the ones which are due and see how much vtime we'll need for
-	 * the next one. As paying off debt restores hw_inuse, it must be read
-	 * after the above debt payment.
+	 * the next one.
 	 */
 	ctx.vbudget = vbudget;
-	current_hweight(iocg, NULL, &ctx.hw_inuse);
 	__wake_up_locked_key(&iocg->waitq, TASK_NORMAL, &ctx);
>
> So, I'm not sure this is an improvement. It increases complication without
> actually gaining anything.
>
> Thanks.
>