Message-ID: <20181017004419.GA24723@lerouge>
Date: Wed, 17 Oct 2018 02:44:19 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
"David S . Miller" <davem@...emloft.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...nel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Mauro Carvalho Chehab <mchehab@...pensource.com>,
rostedt@...dmis.org
Subject: Re: [RFC PATCH 12/30] rcu: Prepare rcu_read_[un]lock_bh() for handling softirq mask

On Mon, Oct 15, 2018 at 10:28:44PM -0700, Joel Fernandes wrote:
> > diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
> > index f8ec3d4..490358c 100644
> > --- a/crypto/pcrypt.c
> > +++ b/crypto/pcrypt.c
> > @@ -73,12 +73,13 @@ struct pcrypt_aead_ctx {
> > static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> > struct padata_pcrypt *pcrypt)
> > {
> > + unsigned int bh;
> > unsigned int cpu_index, cpu, i;
> > struct pcrypt_cpumask *cpumask;
> >
> > cpu = *cb_cpu;
> >
> > - rcu_read_lock_bh();
> > + bh = rcu_read_lock_bh();
> > cpumask = rcu_dereference_bh(pcrypt->cb_cpumask);
> > if (cpumask_test_cpu(cpu, cpumask->mask))
> > goto out;
> > @@ -95,7 +96,7 @@ static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu,
> > *cb_cpu = cpu;
> >
> > out:
> > - rcu_read_unlock_bh();
> > + rcu_read_unlock_bh(bh);
> > return padata_do_parallel(pcrypt->pinst, padata, cpu);
> > }
>
> This complicates the RCU API for -bh and doesn't look pretty at all. Is there
> anything better we can do so we don't have to touch existing readers at all?

Indeed, so I'm going to give up on the idea of converting all the callers at
once, that's unmaintainable anyway. I'll keep the RCU API as is for now,
ie: disable all softirqs, and we'll see later if we need per-vector granularity.
Surely that would be too much fun to handle, with per-vector quiescent states
and grace periods ;-)
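
For clarity, here is a rough caller-side sketch of the two shapes being
discussed. It's illustrative only: "struct foo", "gp" and "do_something()"
are made-up names, and the converted form simply mirrors the pcrypt hunk
above rather than claiming to be the final API.

    /*
     * Caller-side sketch, not part of the patch: contrast the current
     * rcu_read_lock_bh() API with the converted shape from this series.
     */
    #include <linux/rcupdate.h>

    struct foo {
            int val;
    };

    static struct foo __rcu *gp;

    static void do_something(struct foo *p)
    {
            (void)p;        /* placeholder consumer, made up for the sketch */
    }

    /* Current shape: readers carry no state, every softirq vector is disabled. */
    static void reader_current(void)
    {
            struct foo *p;

            rcu_read_lock_bh();             /* disables all softirq vectors */
            p = rcu_dereference_bh(gp);
            if (p)
                    do_something(p);
            rcu_read_unlock_bh();
    }

    /*
     * Converted shape from this series (mirrors the pcrypt hunk above):
     * the previously enabled vector mask has to travel through the caller,
     * hence the extra "bh" local in every converted reader.
     */
    static void reader_converted(void)
    {
            struct foo *p;
            unsigned int bh;

            bh = rcu_read_lock_bh();        /* would return the old enabled-vector mask */
            p = rcu_dereference_bh(gp);
            if (p)
                    do_something(p);
            rcu_read_unlock_bh(bh);         /* would re-enable only what was enabled before */
    }

That extra plumbing in every reader is exactly the churn I'd rather avoid
for now.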
>
> Also, I thought softirqs were kind of a thing of the past, and threaded
> interrupts are the more preferred interrupt bottom halves these days,
> especially for -rt. Maybe that was just wishful thinking on my part :-)

We all wish that. I think that was the plan, but threaded IRQs involve context
switches, and IIUC that's the barrier that's hard to cross in some performance
measurements.