Message-ID: <YQQjpQEBbdAgMUM7@mtj.duckdns.org>
Date: Fri, 30 Jul 2021 06:07:01 -1000
From: Tejun Heo <tj@...nel.org>
To: brookxu <brookxu.cn@...il.com>
Cc: axboe@...nel.dk, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [PATCH v2] blk-throtl: optimize IOPS throttle for large IO scenarios
On Fri, Jul 30, 2021 at 10:09:34AM +0800, brookxu wrote:
> >> @@ -877,10 +900,19 @@ static inline void throtl_trim_slice(struct throtl_grp *tg, bool rw)
> >> else
> >> tg->bytes_disp[rw] = 0;
> >>
> >> - if (tg->io_disp[rw] >= io_trim)
> >> + if (tg_io_disp(tg, rw) >= io_trim) {
> >
> > Instead of checking this in multiple places, would it be simpler to transfer
> > the atomic counters to the existing counters whenever we enter blk-throtl
> > and leave the rest of the code as-is?
>
> If we do this, we would need to do similar processing on the bio submission
> path and on the bio resubmission path in pending_timer. Wouldn't that make
> the code more complicated?
Yeah, basically whenever we enter blk-throtl. Factored into a function and
called on entry, that should be fairly clean, right? I wonder whether it'd be
better to consolidate all atomic counter handling in a single place whose only
job is to transfer whatever has accumulated into the usual counters.

Also, when reading & resetting the atomic counters, can you use a pattern like
the following?

	main_counter += atomic_xchg(counter, 0);

Right now, there's a race window between reading and resetting.
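Something along these lines, just as a sketch - tg_flush_atomic_stats() and
the atomic_bytes_disp[]/atomic_io_disp[] fields are made-up names here, not
taken from the patch:

	/*
	 * Hypothetical helper (names invented for illustration): move
	 * whatever has accumulated in the atomic counters into the usual
	 * tg->bytes_disp[] / tg->io_disp[] counters.  atomic64_xchg()
	 * reads and clears in one step, so no update can be lost between
	 * the read and the reset.
	 */
	static void tg_flush_atomic_stats(struct throtl_grp *tg, bool rw)
	{
		tg->bytes_disp[rw] += atomic64_xchg(&tg->atomic_bytes_disp[rw], 0);
		tg->io_disp[rw] += atomic64_xchg(&tg->atomic_io_disp[rw], 0);
	}

Calling something like that on entry to the dispatch / trim paths would let
the rest of the code keep using tg->bytes_disp[] and tg->io_disp[] unchanged.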
Thanks.
--
tejun