Message-ID: <20200316144535.GA501@lenoir>
Date: Mon, 16 Mar 2020 15:45:36 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: mutt@...lmck-ThinkPad-P72, rcu@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel-team@...com, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org
Subject: Re: [PATCH RFC tip/core/rcu 0/16] Prototype RCU usable from idle,
exception, offline
On Fri, Mar 13, 2020 at 08:42:43AM -0700, Paul E. McKenney wrote:
> On Fri, Mar 13, 2020 at 03:41:46PM +0100, Frederic Weisbecker wrote:
> > On Thu, Mar 12, 2020 at 11:16:18AM -0700, Paul E. McKenney wrote:
> > > Hello!
> > >
> > > This series provides two variants of Tasks RCU, a rude variant inspired
> > > by Steven Rostedt's use of schedule_on_each_cpu(), and a tracing variant
> > > requested by the BPF folks and perhaps also of use for other tracing
> > > use cases.
> > >
> > > The tracing variant has explicit read-side markers to permit finite grace
> > > periods even given in-kernel loops in PREEMPT=n builds.  It also protects
> > > code in the idle loop, on exception entry/exit paths, and on the various
> > > CPU-hotplug online/offline code paths, thus having protection properties
> > > similar to SRCU.  However, unlike SRCU, this variant avoids expensive
> > > instructions in the read-side primitives, thus having read-side overhead
> > > similar to that of preemptible RCU.
> > >
> > > There are of course downsides. The grace-period code can send IPIs to
> > > CPUs, even when those CPUs are in the idle loop or in nohz_full userspace.
> > > It is necessary to scan the full tasklist, much as for Tasks RCU. There
> > > is a single callback queue guarded by a single lock, again, much as for
> > > Tasks RCU. If needed, these downsides can be at least partially remedied
> >
> > So what we gain in fixing tracing's problems with extended grace periods,
> > we lose in CPU isolation.  That worries me a bit, as tracing is often used
> > together with nohz_full and CPU isolation.
>
> First, disturbing nohz_full CPUs can be avoided by the sysadm simply
> refusing to remove tracepoints while sensitive applications are running
> on nohz_full CPUs.
So, in that case we'll need to modify tools such as perf to avoid
releasing the related buffers until we are ready to do so.
That's possible, but it's kind of an ABI breakage.  Also, what if there is a
long-running service on that nohz_full CPU polling on the network card...
Thanks.