Date:	Mon, 9 Sep 2013 16:21:55 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...icios.com, josh@...htriplett.org,
	niv@...ibm.com, tglx@...utronix.de, rostedt@...dmis.org,
	dhowells@...hat.com, edumazet@...gle.com, darren@...art.com,
	sbw@....edu, cl@...ux.com
Subject: Re: [PATCH] rcu: Is it safe to enter an RCU read-side critical
 section?

On Mon, Sep 09, 2013 at 06:23:43AM -0700, Paul E. McKenney wrote:
> Peter, in the general case, you are quite correct.  But this is a special
> case where it really does work.
> 
> The key point here is that preemption and migration cannot move a task
> from a CPU to which RCU is paying attention to a CPU that RCU is ignoring.

But there's no constraint placed on the migration mask (aka
task_struct::cpus_allowed), so nothing stops the scheduler from moving
the task exactly that way.

What you're trying to say is that by the time the task is running on
another cpu, that cpu's state will match the state of the previous cpu,
no?

> So yes, by the time the task sees the return value from rcu_is_cpu_idle(),
> that task might be running on some other CPU.  But that is OK, because
> if RCU was paying attention to the old CPU, then RCU must also be paying
> attention to the new CPU.

OK, fair enough.
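
(To spell out the pattern we're discussing, something like the below; this
is a from-memory sketch rather than a verbatim copy of the kernel source,
and it assumes the low bit of the per-CPU rcu_dynticks.dynticks counter
being set means RCU is watching the CPU.)

	int rcu_is_cpu_idle(void)
	{
		int ret;

		/* Sketch, not verbatim: dynticks low bit set == RCU watching this CPU. */
		preempt_disable();
		ret = (atomic_read(&__get_cpu_var(rcu_dynticks).dynticks) & 0x1) == 0;
		preempt_enable();
		return ret;
	}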

> Here is an example of how this works:
> 
> 1.	Some task running on CPU 0 (which RCU is paying attention to)
> 	calls rcu_is_cpu_idle(), which disables preemption, checks the
> 	per-CPU variable, sets ret to zero, then enables preemption.
> 
> 	At this point, the task is preempted by some high-priority task.
> 
> 2.	CPU 1 is currently idle, so RCU is -not- paying attention to it.
> 	However, it is decided that our low-priority task should migrate
> 	to CPU 1.
> 
> 3.	CPU 1 is sent an IPI, which forces this CPU out of idle.  This
> 	causes rcu_idle_exit() to be called, which causes RCU to start
> 	paying attention to CPU 1.
> 

Just a nit: we typically try to avoid using IPIs to wake idle CPUs. It
doesn't change the story much, though.

> 4.	CPU 1 switches to the low-priority task, which now sees the
> 	return value of rcu_is_cpu_idle().  Now, this return value did
> 	in fact reflect the old state of CPU 0, and the state of CPU 0
> 	might have changed.  (For example, the high-priority task might
> 	have blocked, so that CPU 0 is now idle, which in turn would
> 	mean that RCU is no longer paying attention to it, so that
> 	if rcu_is_cpu_idle() was called right now, it would return
> 	true rather than the false return computed in step 1 above.)
> 
> 5.	But that is OK.  Because of the way RCU and idle interact,
> 	if a call from a given task to rcu_is_cpu_idle() returned false
> 	some time in the past, a call from that same task will also
> 	return false right now.
> 
> So yes, in general it is wrong to disable preemption, grab the value
> of a per-CPU variable, re-enable preemption, and then return the result.
> But there are a number of special cases where it is OK, and this is
> one of them.

Right, worthy of comments though :-)
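
Something along these lines above the preempt_disable()/preempt_enable()
pair, perhaps (wording is mine, just a sketch of what such a comment might
say):

	/*
	 * Sampling a per-CPU variable with preemption disabled only
	 * across the read looks racy, but it works out: migration can
	 * never move a task from a CPU that RCU is watching to one that
	 * RCU is ignoring, because a CPU ignored by RCU is idle and goes
	 * through rcu_idle_exit() before it runs the migrated task.  So
	 * if this once returned "not idle" to a task, it will still be
	 * "not idle" on whatever CPU that task is running on by the time
	 * it inspects the return value.
	 */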
