Message-ID: <20100114183739.GA18435@Krystal>
Date:	Thu, 14 Jan 2010 13:37:39 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	akpm@...ux-foundation.org, josh@...htriplett.org,
	tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
	barrier (v5)

* Mathieu Desnoyers (mathieu.desnoyers@...ymtl.ca) wrote:
> * Peter Zijlstra (peterz@...radead.org) wrote:
> > On Thu, 2010-01-14 at 11:26 -0500, Mathieu Desnoyers wrote:
> > 
> > > It's this scenario that is causing the problem. Let's consider this
> > > execution:
> > > 
> 
> (slightly augmented)
> 
>        CPU 0 (membarrier)                  CPU 1 (another mm -> our mm)
>        <user-space>
>                                            <kernel-space>
>                                            switch_mm()
>                                              smp_mb()
>                                              clear_mm_cpumask()
>                                              set_mm_cpumask()
>                                              smp_mb() (by load_cr3() on x86)
>                                            switch_to()
>        memory access before membarrier
>        <call sys_membarrier()>
>        smp_mb()
>        mm_cpumask includes CPU 1
>        rcu_read_lock()
>        if (CPU 1 mm != our mm)
>          skip CPU 1.
>        rcu_read_unlock()
>        smp_mb()
>        <return to user-space>
>                                              current = next (1)
>                                            <switch back to user-space>
>                                            urcu read lock()
>                                              read gp
>                                              store local gp (2)
>                                              barrier()
>                                              access critical section data (3)
>        memory access after membarrier
> 
> So if we don't have any memory barrier between (1) and (3), the memory
> operations can be reordered in such a way that CPU 0 will not send an IPI
> to a CPU that would need to have its barrier() promoted into an
> smp_mb().
> 
> > 
> > I'm still not getting it; sure, we don't send an IPI, but it will have
> > done an mb() in switch_mm() to become our mm, so even without the IPI it
> > will have executed that mb() we were after.
> 
> The augmented race window above shows that it would be possible for (2)
> and (3) to be reordered across the barrier(), and therefore the critical
> section access could spill over into an rcu-unlocked state.
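
For reference, the kernel side of sys_membarrier() boils down to roughly the
sketch below (not the exact patch; membarrier_ipi() and cpu_curr() are just
placeholder names for the IPI handler and for whichever accessor we end up
using to read the remote runqueue's current task):

static void membarrier_ipi(void *unused)
{
	smp_mb();	/* promote the remote reader's barrier() to smp_mb() */
}

SYSCALL_DEFINE0(membarrier)
{
	int cpu;

	smp_mb();	/* order caller's prior accesses before the cpumask scan */
	rcu_read_lock();
	for_each_cpu(cpu, mm_cpumask(current->mm)) {
		if (cpu_curr(cpu)->mm == current->mm)
			smp_call_function_single(cpu, membarrier_ipi, NULL, 1);
		/* else: skip that CPU, as CPU 1 is skipped above */
	}
	rcu_read_unlock();
	smp_mb();	/* order the scan before caller's subsequent accesses */
	return 0;
}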

To make this painfully clear, I'll reorder the accesses to match the order
in which the CPU commits them to memory:

       CPU 0 (membarrier)                  CPU 1 (another mm -> our mm)
       <user-space>
                                           <kernel-space>
                                           switch_mm()
                                             smp_mb()
                                             clear_mm_cpumask()
                                             set_mm_cpumask()
                                             smp_mb() (by load_cr3() on x86)
                                           switch_to()
                                             <buffered current = next>
                                           <switch back to user-space>
                                           urcu read lock()
                                             access critical section data (3)
       memory access before membarrier
       <call sys_membarrier()>
       smp_mb()
       mm_cpumask includes CPU 1
       rcu_read_lock()
       if (CPU 1 mm != our mm)
         skip CPU 1.
       rcu_read_unlock()
       smp_mb()
       <return to user-space>
       memory access after membarrier
                                             current = next (1) (buffer flush)
                                             read gp
                                             store local gp (2)

This should make the problem a bit more evident. Access (3) is done
outside of the read-side C.S. as far as the userspace synchronize_rcu()
is concerned.
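
To spell out what the barrier() in urcu read lock() does and does not give
us, here is a simplified, purely illustrative read-side (not liburcu's actual
code; urcu_gp_ctr and rcu_reader_gp are made-up names for the example):

#define barrier()	__asm__ __volatile__("" : : : "memory")

extern unsigned long urcu_gp_ctr;		/* global grace-period counter */
extern __thread unsigned long rcu_reader_gp;	/* per-thread snapshot */

static inline void urcu_read_lock(void)
{
	rcu_reader_gp = urcu_gp_ctr;	/* read gp, store local gp (2) */
	barrier();	/* compiler barrier only: the CPU may still commit the
			 * store above after the critical section access (3);
			 * only an smp_mb() on this CPU (IPI or scheduler),
			 * pairing with the smp_mb()s in sys_membarrier(),
			 * lets the writer rely on that ordering. */
}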

Thanks,

Mathieu


-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
