Date:	Fri, 29 Apr 2011 00:55:18 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	linux-kernel <linux-kernel@...r.kernel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	tglx@...utronix.de
Subject: Re: [PATCH] rcu: optimize rcutiny

This time actually adding Thomas to CC...  :-/

							Thanx, Paul

On Fri, Apr 29, 2011 at 12:54:32AM -0700, Paul E. McKenney wrote:
> On Thu, Apr 28, 2011 at 07:23:45AM +0200, Eric Dumazet wrote:
> > rcu_sched_qs() currently calls local_irq_save()/local_irq_restore() up
> > to three times.
> > 
> > Remove irq masking from rcu_qsctr_help() / invoke_rcu_kthread()
> > and do it once in rcu_sched_qs() / rcu_bh_qs().
> > 
> > This generates smaller code as well.
> > 
> > # size kernel/rcutiny.old.o kernel/rcutiny.new.o
> >    text	   data	    bss	    dec	    hex	filename
> >    2314	    156	     24	   2494	    9be	kernel/rcutiny.old.o
> >    2250	    156	     24	   2430	    97e	kernel/rcutiny.new.o
> > 
> > Fix an outdated comment for rcu_qsctr_help().
> > Move invoke_rcu_kthread() definition before its use.
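> > 
> > For reference, the pre-patch path looked roughly like this (a sketch
> > reconstructed from the removed hunks below, not the exact source); it
> > shows where the up-to-three local_irq_save()/local_irq_restore() pairs
> > come from:
> > 
> > /* Sketch of the old code: each helper masked irqs on its own. */
> > static int rcu_qsctr_help(struct rcu_ctrlblk *rcp)
> > {
> > 	unsigned long flags;
> > 
> > 	local_irq_save(flags);		/* pairs #1 and #2, one per call */
> > 	if (rcp->rcucblist != NULL &&
> > 	    rcp->donetail != rcp->curtail) {
> > 		rcp->donetail = rcp->curtail;
> > 		local_irq_restore(flags);
> > 		return 1;
> > 	}
> > 	local_irq_restore(flags);
> > 	return 0;
> > }
> > 
> > static void invoke_rcu_kthread(void)
> > {
> > 	unsigned long flags;
> > 
> > 	local_irq_save(flags);		/* pair #3 */
> > 	have_rcu_kthread_work = 1;
> > 	wake_up(&rcu_kthread_wq);
> > 	local_irq_restore(flags);
> > }
> > 
> > void rcu_sched_qs(int cpu)
> > {
> > 	if (rcu_qsctr_help(&rcu_sched_ctrlblk) +	/* up to three pairs total */
> > 	    rcu_qsctr_help(&rcu_bh_ctrlblk))
> > 		invoke_rcu_kthread();
> > }
> > 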
> 
> Looks very nice!  In theory, this does lengthen the time during which
> interrupts are disabled, but in practice I believe that this would
> not be measurable.  Adding Thomas on CC in case I am mistaken about
> the effect of longer irq-disable regions.
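> 
> To make the trade-off concrete, the post-patch fast path (copied in
> simplified form from the hunk below) keeps irqs off across both helper
> calls and the wake_up(), instead of masking three shorter windows:
> 
> void rcu_sched_qs(int cpu)
> {
> 	unsigned long flags;
> 
> 	local_irq_save(flags);		/* one slightly longer irq-off window */
> 	if (rcu_qsctr_help(&rcu_sched_ctrlblk) +
> 	    rcu_qsctr_help(&rcu_bh_ctrlblk))
> 		invoke_rcu_kthread();	/* wake_up() now runs with irqs off */
> 	local_irq_restore(flags);
> }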
> 
> In the meantime, I have queued this, and either way, thank you, Eric!
> 
> 							Thanx, Paul
> 
> > Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
> > ---
> >  kernel/rcutiny.c |   42 ++++++++++++++++++++----------------------
> >  1 file changed, 20 insertions(+), 22 deletions(-)
> > 
> > diff --git a/kernel/rcutiny.c b/kernel/rcutiny.c
> > index 0c343b9..29eb349 100644
> > --- a/kernel/rcutiny.c
> > +++ b/kernel/rcutiny.c
> > @@ -40,7 +40,6 @@
> >  static struct task_struct *rcu_kthread_task;
> >  static DECLARE_WAIT_QUEUE_HEAD(rcu_kthread_wq);
> >  static unsigned long have_rcu_kthread_work;
> > -static void invoke_rcu_kthread(void);
> > 
> >  /* Forward declarations for rcutiny_plugin.h. */
> >  struct rcu_ctrlblk;
> > @@ -79,36 +78,45 @@ void rcu_exit_nohz(void)
> >  #endif /* #ifdef CONFIG_NO_HZ */
> > 
> >  /*
> > - * Helper function for rcu_qsctr_inc() and rcu_bh_qsctr_inc().
> > - * Also disable irqs to avoid confusion due to interrupt handlers
> > + * Helper function for rcu_sched_qs() and rcu_bh_qs().
> > + * Also irqs are disabled to avoid confusion due to interrupt handlers
> >   * invoking call_rcu().
> >   */
> >  static int rcu_qsctr_help(struct rcu_ctrlblk *rcp)
> >  {
> > -	unsigned long flags;
> > -
> > -	local_irq_save(flags);
> >  	if (rcp->rcucblist != NULL &&
> >  	    rcp->donetail != rcp->curtail) {
> >  		rcp->donetail = rcp->curtail;
> > -		local_irq_restore(flags);
> >  		return 1;
> >  	}
> > -	local_irq_restore(flags);
> > 
> >  	return 0;
> >  }
> > 
> >  /*
> > + * Wake up rcu_kthread() to process callbacks now eligible for invocation
> > + * or to boost readers.
> > + */
> > +static void invoke_rcu_kthread(void)
> > +{
> > +	have_rcu_kthread_work = 1;
> > +	wake_up(&rcu_kthread_wq);
> > +}
> > +
> > +/*
> >   * Record an rcu quiescent state.  And an rcu_bh quiescent state while we
> >   * are at it, given that any rcu quiescent state is also an rcu_bh
> >   * quiescent state.  Use "+" instead of "||" to defeat short circuiting.
> >   */
> >  void rcu_sched_qs(int cpu)
> >  {
> > +	unsigned long flags;
> > +
> > +	local_irq_save(flags);
> >  	if (rcu_qsctr_help(&rcu_sched_ctrlblk) +
> >  	    rcu_qsctr_help(&rcu_bh_ctrlblk))
> >  		invoke_rcu_kthread();
> > +	local_irq_restore(flags);
> >  }
> > 
> >  /*
> > @@ -116,8 +124,12 @@ void rcu_sched_qs(int cpu)
> >   */
> >  void rcu_bh_qs(int cpu)
> >  {
> > +	unsigned long flags;
> > +
> > +	local_irq_save(flags);
> >  	if (rcu_qsctr_help(&rcu_bh_ctrlblk))
> >  		invoke_rcu_kthread();
> > +	local_irq_restore(flags);
> >  }
> > 
> >  /*
> > @@ -208,20 +220,6 @@ static int rcu_kthread(void *arg)
> >  }
> > 
> >  /*
> > - * Wake up rcu_kthread() to process callbacks now eligible for invocation
> > - * or to boost readers.
> > - */
> > -static void invoke_rcu_kthread(void)
> > -{
> > -	unsigned long flags;
> > -
> > -	local_irq_save(flags);
> > -	have_rcu_kthread_work = 1;
> > -	wake_up(&rcu_kthread_wq);
> > -	local_irq_restore(flags);
> > -}
> > -
> > -/*
> >   * Wait for a grace period to elapse.  But it is illegal to invoke
> >   * synchronize_sched() from within an RCU read-side critical section.
> >   * Therefore, any legal call to synchronize_sched() is a quiescent
> > 
> > 
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
