Message-ID: <c3577a83-9889-c741-bb74-051a6d9a0f61@suse.com>
Date: Wed, 13 Apr 2022 22:08:49 +0300
From: Nikolay Borisov <nborisov@...e.com>
To: Schspa Shi <schspa@...il.com>
Cc: dsterba@...e.cz, clm@...com, dsterba@...e.com,
josef@...icpanda.com, linux-btrfs@...r.kernel.org,
linux-kernel@...r.kernel.org, terrelln@...com
Subject: Re: [PATCH v2] btrfs: zstd: use spin_lock in timer callback
On 13.04.22 at 19:03, Schspa Shi wrote:
> Nikolay Borisov <nborisov@...e.com> writes:
>
>> On 11.04.22 at 18:55, Schspa Shi wrote:
>>> This is an optimization on top of commit fee13fe96529 ("btrfs:
>>> correct zstd workspace manager lock to use spin_lock_bh()").
>>> The critical section protected by wsm.lock is only entered from
>>> process context and from softirq context.
>>> Because a softirq cannot be preempted by another softirq on the
>>> same CPU, there is no need to call spin_lock_bh(&wsm.lock) in the
>>> timer callback to turn softirqs off; spin_lock(&wsm.lock) is
>>> enough for this situation.
>>> Changelog:
>>> v1 -> v2:
>>> - Change the commit message to make it more readable.
>>> [1] https://lore.kernel.org/all/20220408181523.92322-1-schspa@gmail.com/
>>> Signed-off-by: Schspa Shi <schspa@...il.com>
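
For context, the change under discussion boils down to dropping the _bh
suffix only in the timer callback, i.e. something like the following
(a minimal sketch of the idea, not the exact hunk from the patch):

	--- a/fs/btrfs/zstd.c
	+++ b/fs/btrfs/zstd.c
	@@ static void zstd_reclaim_timer_fn(struct timer_list *timer)
	-	spin_lock_bh(&wsm.lock);
	+	spin_lock(&wsm.lock);
		...
	-	spin_unlock_bh(&wsm.lock);
	+	spin_unlock(&wsm.lock);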
>>
>> Has there been any measurable impact from this change? While it's
>> correct, it does mean that someone looking at the code would see a
>> plain spinlock at one call site and a _bh version at another, which
>> is somewhat inconsistent.
>>
> Yes, it may look a little confusing, but it saves a small amount of
> CPU time. Also, red_adaptative_timer() in net/sched/sch_red.c has a
> similar usage (see the sketch below).
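
(For reference, that timer callback in net/sched/sch_red.c looks
roughly like this -- trimmed sketch, quoted from memory: a plain
spin_lock taken from a timer callback, which already runs in softirq
context.)

	static inline void red_adaptative_timer(struct timer_list *t)
	{
		struct red_sched_data *q = from_timer(q, t, adapt_timer);
		struct Qdisc *sch = q->sch;
		spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch));

		spin_lock(root_lock);
		red_adaptative_algo(&q->parms, &q->vars);
		mod_timer(&q->adapt_timer, jiffies + HZ/2);
		spin_unlock(root_lock);
	}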
>
>> What's more, I believe this is a noop: when softirqs are executing,
>> preemptible() would be false due to preempt_count() being non-zero,
>> and in the bh-disabling code in the spinlock we have:
>>
>> /* First entry of a task into a BH disabled section? */
>> if (!current->softirq_disable_cnt) {
>> 	if (preemptible()) {
>> 		local_lock(&softirq_ctrl.lock);
>> 		/* Required to meet the RCU bottomhalf requirements. */
>> 		rcu_read_lock();
>> 	} else {
>> 		DEBUG_LOCKS_WARN_ON(this_cpu_read(softirq_ctrl.cnt));
>> 	}
>> }
>>
>>
>> In this case we'd hit the else branch.
>
> We won't hit the else branch, because current->softirq_disable_cnt
> won't be zero in the original case:
>
> __do_softirq(void)
>   softirq_handle_begin(void)
>     __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>
> current->softirq_disable_cnt will be > 0 at this point.
That's only relevant on CONFIG_PREEMPT_RT though; on usual kernels
softirq_handle_begin() doesn't touch softirq_disable_cnt. Furthermore,
in the non-PREEMPT_RT case preemptible() always returns false here,
which means that even in the __do_softirq path we'll never increment
softirq_disable_cnt. So if anything, this change is only beneficial
(and theoretically at that) in PREEMPT_RT scenarios.
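
For reference, on !CONFIG_PREEMPT_RT (and without CONFIG_TRACE_IRQFLAGS)
the bh-disable side is just a preempt count bump -- roughly this, from
include/linux/bottom_half.h:

	static __always_inline void __local_bh_disable_ip(unsigned long ip,
							  unsigned int cnt)
	{
		/*
		 * No softirq_disable_cnt bookkeeping here at all: bumping
		 * the preempt count is what keeps softirqs from running.
		 */
		preempt_count_add(cnt);
		barrier();
	}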
> ......
> zstd_reclaim_timer_fn(struct timer_list *timer)
>   spin_lock_bh(&wsm.lock);
>     __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
>       if (!current->softirq_disable_cnt) {
>               // this branch won't be hit
>       }
>
> softirq_handle_end();
>
> In this case, the __local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET)
> call doesn't do anything useful: it only increases the softirq
> disable depth, which is then decreased again by
> __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET).
>
> So it's safe to replace spin_lock_bh with spin_lock in a timer
> callback function.
>
>
> For ksoftirqd, it's all the same.
>
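
To summarize the locking pattern being proposed (a simplified sketch
reusing the names from fs/btrfs/zstd.c, not the exact code):

	/*
	 * Process context: must keep the _bh variant, because the timer
	 * callback can fire on this CPU while we hold the lock and
	 * would otherwise deadlock on it.
	 */
	static void zstd_put_workspace(struct list_head *ws)
	{
		spin_lock_bh(&wsm.lock);
		/* ... return the workspace to the LRU ... */
		spin_unlock_bh(&wsm.lock);
	}

	/*
	 * Timer (softirq) context: softirqs never nest on a CPU, and
	 * process-context users hold the lock with bhs disabled, so a
	 * plain spin_lock cannot deadlock here.
	 */
	static void zstd_reclaim_timer_fn(struct timer_list *timer)
	{
		spin_lock(&wsm.lock);
		/* ... reclaim idle workspaces ... */
		spin_unlock(&wsm.lock);
	}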