lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [thread-next>] [day] [month] [year] [list]
Message-ID: <881839960.950383.1289232938613.JavaMail.root@sz0076a.westchester.pa.mail.comcast.net>
Date:	Mon, 8 Nov 2010 16:15:38 +0000 (UTC)
From:	houston.jim@...cast.net
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	"Udo A. Steinberg" <udo@...ervisor.org>,
	Joe Korty <joe.korty@...r.com>,
	mathieu desnoyers <mathieu.desnoyers@...icios.com>,
	dhowells@...hat.com, loic minier <loic.minier@...aro.org>,
	dhaval giani <dhaval.giani@...il.com>, tglx@...utronix.de,
	peterz@...radead.org, linux-kernel@...r.kernel.org,
	josh@...htriplett.org,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH] a local-timer-free version of RCU

Hi Everyone,

I'm sorry started this thread and have not been able to keep up
with the discussion.  I agree that the problems described are real.

> > UAS> PEM> o	CPU 1 continues in rcu_grace_period_complete(),
> > UAS> PEM> incorrectly ending the new grace period.
> > UAS> PEM> 
> > UAS> PEM> Or am I missing something here?
> > UAS> 
> > UAS> The scenario you describe seems possible. However, it should be easily
> > UAS> fixed by passing the perceived batch number as another parameter to
> > UAS> rcu_set_state() and making it part of the cmpxchg. So if the caller
> > UAS> tries to set state bits on a stale batch number (e.g., batch !=
> > UAS> rcu_batch), it can be detected.

My thought on how to fix this case is to only hand off the DO_RCU_COMPLETION
to a single cpu.  The rcu_unlock which receives this hand off would clear its
own bit and then call rcu_poll_other_cpus to complete the process.

> What is scary with this is that it also changes rcu sched semantics, and users
> of call_rcu_sched() and synchronize_sched(), who rely on that to do more
> tricky things than just waiting for rcu_derefence_sched() pointer grace periods,
> like really wanting for preempt_disable and local_irq_save/disable, those
> users will be screwed... :-(  ...unless we also add relevant rcu_read_lock_sched()
> for them...

I need to stare at the code and get back up to speed. I expect that the synchronize_sched
path in my patch is just plain broken.

Jim Houston
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ