Message-ID: <Y1ZtyjxKCcV0Hfjn@pc636>
Date: Mon, 24 Oct 2022 12:49:46 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: "Paul E. McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>
Cc: Joel Fernandes <joel@...lfernandes.org>, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-team@...com,
rostedt@...dmis.org, Uladzislau Rezki <urezki@...il.com>
Subject: Re: [PATCH rcu 13/14] workqueue: Make queue_rcu_work() use
call_rcu_flush()
> On Sun, Oct 23, 2022 at 08:36:00PM -0400, Joel Fernandes wrote:
> > Hello,
> >
> > On Wed, Oct 19, 2022 at 6:51 PM Paul E. McKenney <paulmck@...nel.org> wrote:
> > >
> > > From: Uladzislau Rezki <urezki@...il.com>
> > >
> > > call_rcu() changes to save power will slow down RCU workqueue items
> > > queued via queue_rcu_work(). This may not be an issue, however we cannot
> > > assume that workqueue users are OK with long delays. Use
> > > call_rcu_flush() API instead which reverts to the old behavior.
> >
> > On ChromeOS, I can see that queue_rcu_work() is pretty noisy and the
> > batching is much better if we can just keep it as call_rcu() instead
> > of call_rcu_flush().
> >
> > Is there really any reason to keep it as call_rcu_flush() ? If I
> > recall, the real reason Vlad's system was slowing down was because of
> > scsi and the queue_rcu_work() conversion was really a red herring.
>
<snip>
*** drivers/acpi/osl.c:
acpi_os_drop_map_ref[401] queue_rcu_work(system_wq, &map->track.rwork);
*** drivers/gpu/drm/i915/gt/intel_execlists_submission.c:
virtual_context_destroy[3653] queue_rcu_work(system_wq, &ve->rcu);
*** fs/aio.c:
free_ioctx_reqs[632] queue_rcu_work(system_wq, &ctx->free_rwork);
*** fs/fs-writeback.c:
inode_switch_wbs[604] queue_rcu_work(isw_wq, &isw->work);
cleanup_offline_cgwb[676] queue_rcu_work(isw_wq, &isw->work);
*** include/linux/workqueue.h:
__printf[446] extern bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork);
*** kernel/cgroup/cgroup.c:
css_release_work_fn[5253] queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);
css_create[5384] queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);
*** kernel/rcu/tree.c:
kfree_rcu_monitor[3192] queue_rcu_work(system_wq, &krwp->rcu_work);
*** net/core/skmsg.c:
sk_psock_drop[852] queue_rcu_work(system_wq, &psock->rwork);
*** net/sched/act_ct.c:
tcf_ct_flow_table_put[355] queue_rcu_work(act_ct_wq, &ct_ft->rwork);
*** net/sched/cls_api.c:
tcf_queue_work[225] return queue_rcu_work(tc_filter_wq, rwork);
<snip>
There are 9 users of the queue_rcu_work() function. I think there can be
a side effect if we keep it as the lazy variant. Please note that I have not
checked all those users.
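For context, here is a rough sketch of the path in question (simplified from
kernel/workqueue.c; exact details can differ between kernel versions), just to
show where the lazy vs. flush choice is made:
<snip>
bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)
{
	struct work_struct *work = &rwork->work;

	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
		rwork->wq = wq;
		/*
		 * With the lazy call_rcu() the work item can sit here for
		 * seconds waiting for a batched grace period; the patch under
		 * discussion switches this call to call_rcu_flush() to keep
		 * the old latency for all queue_rcu_work() users.
		 */
		call_rcu(&rwork->rcu, rcu_work_rcufn);
		return true;
	}

	return false;
}
<snip>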
> There are less than 20 invocations of queue_rcu_work(), so it should
> be possible to look through each. The low-risk approach is of course to
> have queue_rcu_work() use call_rcu_flush().
>
> The next approach might be to have a Kconfig option and/or kernel
> boot parameter that allowed a per-system choice.
>
> But it would not hurt to double-check on Android.
>
I did not see such noise, but I will come back with some data on the 5.10
kernel today.
>
> > Vlad, any thoughts?
> >
At least for kvfree_rcu() I would like to keep the sync variant, because
we have the below patch that improves batching:
<snip>
commit 51824b780b719c53113dc39e027fbf670dc66028
Author: Uladzislau Rezki (Sony) <urezki@...il.com>
Date:   Thu Jun 30 18:33:35 2022 +0200

    rcu/kvfree: Update KFREE_DRAIN_JIFFIES interval

    Currently the monitor work is scheduled with a fixed interval of HZ/20,
    which is roughly 50 milliseconds. The drawback of this approach is
    low utilization of the 512 page slots in scenarios with infrequent
    kvfree_rcu() calls. For example on an Android system:
<snip>
Apparently I see it in the "dev" branch only.
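To illustrate the idea behind that commit (a simplified sketch, not the
verbatim patch; names like KFREE_DRAIN_JIFFIES_MIN/MAX approximate what the
commit introduces): the monitor work is queued with a long delay while few
objects are pending and a short one once enough have accumulated, so the
page slots fill up before a drain:
<snip>
static void schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
{
	long delay;

	/* Back off while there are few queued objects, drain soon otherwise. */
	delay = READ_ONCE(krcp->count) >= KVFREE_BULK_MAX_ENTR ?
		KFREE_DRAIN_JIFFIES_MIN : KFREE_DRAIN_JIFFIES_MAX;

	queue_delayed_work(system_wq, &krcp->monitor_work, delay);
}
<snip>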
--
Uladzislau Rezki