Message-ID: <20190722035042-mutt-send-email-mst@kernel.org>
Date: Mon, 22 Jul 2019 03:52:05 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: "Paul E. McKenney" <paulmck@...ux.ibm.com>
Cc: Matthew Wilcox <willy@...radead.org>, aarcange@...hat.com,
akpm@...ux-foundation.org, christian@...uner.io,
davem@...emloft.net, ebiederm@...ssion.com,
elena.reshetova@...el.com, guro@...com, hch@...radead.org,
james.bottomley@...senpartnership.com, jasowang@...hat.com,
jglisse@...hat.com, keescook@...omium.org, ldv@...linux.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-parisc@...r.kernel.org,
luto@...capital.net, mhocko@...e.com, mingo@...nel.org,
namit@...are.com, peterz@...radead.org,
syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk,
wad@...omium.org
Subject: Re: RFC: call_rcu_outstanding (was Re: WARNING in __mmdrop)
On Sun, Jul 21, 2019 at 04:31:13PM -0700, Paul E. McKenney wrote:
> On Sun, Jul 21, 2019 at 02:08:37PM -0700, Matthew Wilcox wrote:
> > On Sun, Jul 21, 2019 at 06:17:25AM -0700, Paul E. McKenney wrote:
> > > Also, the overhead is important. For example, as far as I know,
> > > current RCU gracefully handles close(open(...)) in a tight userspace
> > > loop. But there might be trouble due to tight userspace loops around
> > > lighter-weight operations.
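
[For concreteness, such a tight loop might look like the following; the
/dev/null target is an arbitrary illustration:

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		/* Each pass creates and destroys a file reference; the
		 * kernel-side teardown eventually queues RCU work. */
		for (;;)
			close(open("/dev/null", O_RDONLY));
	}
]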
> >
> > I thought you believed that RCU was antifragile, in that it would scale
> > better as it was used more heavily?
>
> You are referring to this? https://paulmck.livejournal.com/47933.html
>
> If so, the last few paragraphs might be worth re-reading. ;-)
>
> And in this case, the heuristics RCU uses to decide when to schedule
> invocation of the callbacks need some help. One component of that help
> is a time-based limit on the number of consecutive callback invocations
> (see my crude prototype and Eric Dumazet's more polished patch). Another
> component is an overload warning.
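
A rough sketch of that first, time-based component (the function below
and its one-jiffy budget are hypothetical illustrations, not Paul's
prototype or Eric's patch):

	#include <linux/jiffies.h>
	#include <linux/types.h>

	/* Invoke callbacks until the time budget is spent; return the
	 * unprocessed remainder so the caller can requeue it. */
	static struct rcu_head *invoke_cbs_timed(struct rcu_head *list)
	{
		unsigned long deadline = jiffies + 1;	/* ~one-tick budget */
		struct rcu_head *rhp;

		while ((rhp = list) != NULL) {
			list = rhp->next;
			rhp->func(rhp);
			if (time_after(jiffies, deadline))
				break;	/* budget spent; stop invoking */
		}
		return list;		/* unprocessed remainder, if any */
	}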
>
> Why would an overload warning be needed if RCU's callback-invocation
> scheduling heuristics were upgraded? Because someone could boot a
> 100-CPU system with rcu_nocbs=0-99, bind all of the resulting
> rcuo kthreads to (say) CPU 0, and then run a callback-heavy workload
> on all of the CPUs. Given the constraints, CPU 0 cannot keep up.
>
> So warnings are required as well.
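
As a hypothetical sketch of such a warning (the per-CPU counter, the
threshold, and all names below are illustrative, not actual RCU
internals):

	#include <linux/bug.h>
	#include <linux/percpu.h>
	#include <linux/smp.h>

	static DEFINE_PER_CPU(unsigned long, rcu_cb_outstanding);

	#define RCU_CB_OVERLOAD	10000	/* arbitrary threshold */

	/* Called each time a callback is queued on this CPU. */
	static void note_cb_queued(void)
	{
		unsigned long n = this_cpu_inc_return(rcu_cb_outstanding);

		WARN_ONCE(n > RCU_CB_OVERLOAD,
			  "RCU: %lu callbacks outstanding on CPU %d\n",
			  n, smp_processor_id());
	}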
>
> > Would it make sense to have call_rcu() check to see if there are many
> > outstanding requests on this CPU and if so process them before returning?
> > That would ensure that frequent callers usually ended up doing their
> > own processing.
>
> Unfortunately, no. Here is a code fragment illustrating why:
>
>	void my_cb(struct rcu_head *rhp)
>	{
>		unsigned long flags;
>
>		/* The callback itself takes my_lock. */
>		spin_lock_irqsave(&my_lock, flags);
>		handle_cb(rhp);
>		spin_unlock_irqrestore(&my_lock, flags);
>	}
>
>	. . .
>
>	/* my_lock is still held across call_rcu(), so invoking
>	 * my_cb() directly from within call_rcu() would deadlock. */
>	spin_lock_irqsave(&my_lock, flags);
>	p = look_something_up();
>	remove_that_something(p);
>	call_rcu(p, my_cb);
>	spin_unlock_irqrestore(&my_lock, flags);
>
> Invoking the extra callbacks directly from call_rcu() would thus result
> in self-deadlock. Documentation/RCU/UP.txt contains a few more examples
> along these lines.
We could add an option that simply fails if overloaded, right?
Have the caller recover...
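
For instance, something along these lines (call_rcu_try() is
hypothetical, reusing the illustrative per-CPU counter and threshold
from the sketch above):

	#include <linux/errno.h>
	#include <linux/rcupdate.h>

	/* Fail fast instead of queueing when this CPU is overloaded,
	 * leaving recovery up to the caller. */
	static int call_rcu_try(struct rcu_head *rhp, rcu_callback_t func)
	{
		if (this_cpu_read(rcu_cb_outstanding) > RCU_CB_OVERLOAD)
			return -EBUSY;	/* caller must recover */
		call_rcu(rhp, func);
		return 0;
	}

On -EBUSY a caller could then fall back to, e.g., synchronize_rcu()
followed by freeing directly.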
--
MST