Message-ID: <20150615203353.GB13273@gmail.com>
Date:	Mon, 15 Jun 2015 22:33:53 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Oleg Nesterov <oleg@...hat.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, Andy Lutomirski <luto@...capital.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Brian Gerst <brgerst@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Borislav Petkov <bp@...en8.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Waiman Long <Waiman.Long@...com>
Subject: Re: [PATCH 02/12] x86/mm/hotplug: Remove pgd_list use from the
 memory hotplug code


* Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:

> On Sun, Jun 14, 2015 at 09:38:25PM +0200, Oleg Nesterov wrote:
> > On 06/14, Oleg Nesterov wrote:
> > >
> > > On 06/14, Ingo Molnar wrote:
> > > >
> > > > * Oleg Nesterov <oleg@...hat.com> wrote:
> > > >
> > > > > > +		spin_lock(&pgd_lock); /* Implies rcu_read_lock() for the task list iteration: */
> > > > >                                          ^^^^^^^^^^^^^^^^^^^^^^^
> > > > >
> > > > > Hmm, but it doesn't under PREEMPT_RCU? No, no, I don't pretend I understand how
> > > > > it actually works ;) But, say, rcu_check_callbacks() can be called from irq
> > > > > context, and since spin_lock() doesn't increment current->rcu_read_lock_nesting,
> > > > > can't this lead to rcu_preempt_qs()?
> > > >
> > > > No, RCU grace periods are still defined by 'heavy' context boundaries such as
> > > > context switches, entering idle or user-space mode.
> > > >
> > > > PREEMPT_RCU is like traditional RCU, except that blocking is allowed within the
> > > > RCU read critical section - that is why it uses a separate nesting counter
> > > > (current->rcu_read_lock_nesting), not the preempt count.
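
(A simplified sketch of that distinction, paraphrased from the kernel
sources rather than quoted verbatim: under CONFIG_PREEMPT_RCU the read
side bumps a per-task counter, while under !CONFIG_PREEMPT the read side
is simply a non-preemptible region. The function names below are
illustrative; in the tree both are spelled rcu_read_lock(), selected at
config time.)

	/* CONFIG_PREEMPT_RCU flavor: the read-side section may be
	 * preempted and may even block; it is tracked by a per-task
	 * nesting counter, not the preempt count. */
	static inline void rcu_read_lock_preemptible(void)
	{
		current->rcu_read_lock_nesting++;
		barrier();	/* keep the section's accesses after the increment */
	}

	/* Classic !CONFIG_PREEMPT flavor: the read-side section is
	 * just a non-preemptible region. */
	static inline void rcu_read_lock_classic(void)
	{
		preempt_disable();
	}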
> > >
> > > Yes.
> > >
> > > > But if a piece of kernel code is non-preemptible, such as a spinlocked region or
> > > > an irqs-off region, then those are still natural RCU read lock regions, regardless
> > > > of the RCU model, and need no additional RCU locking.
> > >
> > > I do not think so. Yes, I understand that rcu_preempt_qs() itself doesn't
> > > finish the gp, but if there are no other rcu-read-lock holders then it
> > > seems synchronize_rcu() on another CPU can return _before_ spin_unlock(),
> > > since this CPU no longer needs rcu_preempt_note_context_switch().
> > >
> > > OK, I can easily be wrong; I do not really understand the implementation
> > > of PREEMPT_RCU. Perhaps preempt_disable() can actually act as rcu_read_lock()
> > > with the _current_ implementation. Still, this doesn't look right even if it
> > > happens to work, and Documentation/RCU/checklist.txt says:
> > >
> > > 11.	Note that synchronize_rcu() -only- guarantees to wait until
> > > 	all currently executing rcu_read_lock()-protected RCU read-side
> > > 	critical sections complete.  It does -not- necessarily guarantee
> > > 	that all currently running interrupts, NMIs, preempt_disable()
> > > 	code, or idle loops will complete.  Therefore, if your
> > > 	read-side critical sections are protected by something other
> > > 	than rcu_read_lock(), do -not- use synchronize_rcu().
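
(In kernels of this vintage the checklist's advice maps to a concrete
pairing: regions protected by preempt_disable() were waited on by
synchronize_sched(), not synchronize_rcu(). A minimal sketch of the
matched pair:)

	/* Reader relying on disabled preemption, not rcu_read_lock(): */
	preempt_disable();
	/* ... access the protected data ... */
	preempt_enable();

	/* The updater must then use the matching flavor: */
	synchronize_sched();	/* waits for all preempt-disabled regions */
	/* ... now safe to free the old data ... */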
> > 
> > 
> > I've even checked this ;) I applied the stupid patch below and then
> > 
> > 	$ taskset 2 perl -e 'syscall 157, 666, 5000' &
> > 	[1] 565
> > 
> > 	$ taskset 1 perl -e 'syscall 157, 777'
> > 
> > 	$
> > 	[1]+  Done                    taskset 2 perl -e 'syscall 157, 666, 5000'
> > 
> > 	$ dmesg -c
> > 	SPIN start
> > 	SYNC start
> > 	SYNC done!
> > 	SPIN done!
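
(The patch itself was trimmed from the quote above. Syscall 157 is
prctl on x86-64, so the test presumably hooked sys_prctl() with two
magic option values. A plausible reconstruction, consistent with the
dmesg output shown but with all details hypothetical, inside the body
of sys_prctl():)

	/* Hypothetical test hook: prctl(666, msecs) spins with
	 * preemption disabled; prctl(777) runs synchronize_rcu().
	 * If disabling preemption blocked a preemptible-RCU grace
	 * period, "SYNC done!" could never print before "SPIN done!". */
	if (option == 666) {
		preempt_disable();
		pr_crit("SPIN start\n");
		while (arg2--)
			mdelay(1);	/* busy-wait; no context switch */
		pr_crit("SPIN done!\n");
		preempt_enable();
		return 0;
	}
	if (option == 777) {
		pr_crit("SYNC start\n");
		synchronize_rcu();
		pr_crit("SYNC done!\n");
		return 0;
	}

(The two taskset invocations pin the spinner and the synchronizer to
different CPUs, which matches the two-CPU scenario Paul lays out below.)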
> 
> Please accept my apologies for my late entry to this thread.
> Youngest kid graduated from university this weekend, so my
> attention has been elsewhere.

Congratulations! :-)

> If you were to disable interrupts instead of preemption, I would expect
> that the preemptible-RCU grace period would be blocked -- though I am
> not particularly comfortable with people relying on disabled interrupts
> blocking a preemptible-RCU grace period.
> 
> Here is what can happen if you try to block a preemptible-RCU grace
> period by disabling preemption, assuming that there are at least two
> online CPUs in the system:
> 
> 1.	CPU 0 does spin_lock(), which disables preemption.
> 
> 2.	CPU 1 starts a grace period.
> 
> 3.	CPU 0 takes a scheduling-clock interrupt.  It raises softirq,
> 	and the RCU_SOFTIRQ handler notes that there is a new grace
> 	period and sets state so that a subsequent quiescent state on
> 	this CPU will be noted.
> 
> 4.	CPU 0 takes another scheduling-clock interrupt, which checks
> 	current->rcu_read_lock_nesting, and notes that there is no
> 	preemptible-RCU read-side critical section in progress.  It
> 	again raises softirq, and the RCU_SOFTIRQ handler reports
> 	the quiescent state to core RCU.
> 
> 5.	Once each of the other CPUs reports a quiescent state, the
> 	grace period can end, despite CPU 0 having preemption
> 	disabled the whole time.
> 
> So Oleg's test is correct: disabling preemption is not sufficient
> to block a preemptible-RCU grace period.

I stand corrected!

> The usual suggestion would be to add rcu_read_lock() just after the lock is 
> acquired and rcu_read_unlock() just before each release of that same lock.  
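
(Concretely, for the hotplug code in question that suggestion amounts to
a pattern like the sketch below; pgd_lock comes from the comment quoted
at the top of the thread, and the task-list walk stands in for whatever
iteration the real patch performs:)

	struct task_struct *g;

	spin_lock(&pgd_lock);
	rcu_read_lock();	/* explicitly protect the task-list walk */
	for_each_process(g) {
		/* ... per-task pgd manipulation ... */
	}
	rcu_read_unlock();
	spin_unlock(&pgd_lock);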

Will fix it that way.

Thanks,

	Ingo