Message-ID: <20201028201554.GE3249@paulmck-ThinkPad-P72>
Date:   Wed, 28 Oct 2020 13:15:54 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...nel.org, linux-kernel@...r.kernel.org, will@...nel.org,
        hch@....de, axboe@...nel.dk, chris@...is-wilson.co.uk,
        davem@...emloft.net, kuba@...nel.org, fweisbec@...il.com,
        oleg@...hat.com, vincent.guittot@...aro.org
Subject: Re: [RFC][PATCH v3 6/6] rcu/tree: Use irq_work_queue_remote()

On Wed, Oct 28, 2020 at 09:02:43PM +0100, Peter Zijlstra wrote:
> On Wed, Oct 28, 2020 at 03:54:28PM +0100, Peter Zijlstra wrote:
> > On Wed, Oct 28, 2020 at 12:07:13PM +0100, Peter Zijlstra wrote:
> > > AFAICT we only need/use irq_work_queue_on() on remote CPUs, since we
> > > can directly access local state.  So avoid the IRQ_WORK dependency and
> > > use the unconditionally available irq_work_queue_remote().
> > > 
> > > This survives a number of TREE01 runs.
> > 
> > OK, Paul mentioned on IRC that, while it is extremely unlikely, this code
> > does indeed fail to guarantee that it will not try to IPI self.
> > 
> > I'll try again.
> 
> This is the best I could come up with.. :/
> 
> ---
> Subject: rcu/tree: Use irq_work_queue_remote()
> From: Peter Zijlstra <peterz@...radead.org>
> Date: Wed Oct 28 11:53:40 CET 2020
> 
> All sites that consume rcu_iw_gp_seq appear to hold the rcu_node lock,
> so the site that sets it probably should hold it too. Also, the only
> effect of a self-IPI here would be to set rcu_iw_gp_seq to the value it
> was just set to (pointless) and to clear rcu_iw_pending, which was just
> set, so skip the self-IPI case entirely.
> 
> Passes TREE01.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  kernel/rcu/tree.c |   10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1308,14 +1308,16 @@ static int rcu_implicit_dynticks_qs(stru
>  			resched_cpu(rdp->cpu);
>  			WRITE_ONCE(rdp->last_fqs_resched, jiffies);
>  		}
> -#ifdef CONFIG_IRQ_WORK
> +		raw_spin_lock_rcu_node(rnp);

The caller of rcu_implicit_dynticks_qs() already holds this lock.
Please see the force_qs_rnp() function and its second call site,
to which rcu_implicit_dynticks_qs() is passed as an argument.

But other than that, this does look plausible.  And getting rid of
that #ifdef is worth something.  ;-)
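
For illustration only, the hunk with an assertion in place of the lock
acquisition might look like this (untested sketch; it keeps the rest of
your change as-is and leans on the existing
raw_lockdep_assert_held_rcu_node() helper):

		/* Sketch: rnp->lock is already held by force_qs_rnp(). */
		raw_lockdep_assert_held_rcu_node(rnp);
		if (!rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
		    (rnp->ffmask & rdp->grpmask)) {
			rdp->rcu_iw_gp_seq = rnp->gp_seq;
			if (likely(rdp->cpu != smp_processor_id())) {
				rdp->rcu_iw_pending = true;
				irq_work_queue_remote(rdp->cpu, &rdp->rcu_iw);
			}
		}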

							Thanx, Paul

>  		if (!rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
>  		    (rnp->ffmask & rdp->grpmask)) {
> -			rdp->rcu_iw_pending = true;
>  			rdp->rcu_iw_gp_seq = rnp->gp_seq;
> -			irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
> +			if (likely(rdp->cpu != smp_processor_id())) {
> +				rdp->rcu_iw_pending = true;
> +				irq_work_queue_remote(rdp->cpu, &rdp->rcu_iw);
> +			}
>  		}
> -#endif
> +		raw_spin_unlock_rcu_node(rnp);
>  	}
>  
>  	return 0;
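
For completeness, the calling context I am referring to, condensed from
force_qs_rnp() (not verbatim, irrelevant details elided):

	rcu_for_each_leaf_node(rnp) {
		...
		raw_spin_lock_irqsave_rcu_node(rnp, flags);
		...
		for_each_leaf_node_cpu_mask(rnp, cpu, rnp->qsmask) {
			rdp = per_cpu_ptr(&rcu_data, cpu);
			if (f(rdp))	/* f == rcu_implicit_dynticks_qs */
				mask |= rdp->grpmask;
		}
		...
		/* rcu_report_qs_rnp() or this unlock releases rnp->lock. */
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	}

So rnp->lock is held across each call to rcu_implicit_dynticks_qs(),
which is why the assertion rather than the acquisition is what is wanted
here.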
