Date:	Mon, 20 Oct 2008 20:20:29 -0400 (EDT)
From:	Nicolas Pitre <nico@....org>
To:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	lkml <linux-kernel@...r.kernel.org>, linux-arch@...r.kernel.org,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	David Miller <davem@...emloft.net>
Subject: Re: [RFC patch 00/15] Tracer Timestamping

On Mon, 20 Oct 2008, Mathieu Desnoyers wrote:

> * Peter Zijlstra (a.p.zijlstra@...llo.nl) wrote:
> > Have you looked at the existing 32->63 extension code in
> > include/linux/cnt32_to_63.h and considered unifying it?
> > 
> 
> Yep, I felt this code was dangerous on SMP given it could suffer from
> the following type of race due to the lack of proper barriers:

You are wrong.

> CPU    A                                 B
>        read hw cnt low
>        read __m_cnt_hi
>                                          read hw cnt low
>        (wrap detected)
>        write __m_cnt_hi (incremented)
>                                          read __m_cnt_hi
>                                          (wrap detected)
>                                          write __m_cnt_hi (incremented)
> 
> we therefore increment the high bits twice in the given race.

No.  Either you do the _same_ increment twice, effectively writing the 
_same_ high value twice, or you don't get simultaneous wrap detections 
at all.  Simulate it on paper with real numbers if you are not 
convinced.
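
(For the record, here is roughly that simulation in plain C.  This is a
paraphrase of the cnt32_to_63() logic, not the exact kernel macro, and
the values are made up; the point is that both "racing" readers start
from the same stale high word and therefore store the same new one.)

#include <stdint.h>
#include <stdio.h>

static volatile uint32_t m_cnt_hi;	/* stand-in for __m_cnt_hi: high word + phase bit */

static uint64_t cnt32_to_63_sketch(uint32_t lo)
{
	uint32_t hi = m_cnt_hi;

	/*
	 * Half-period boundary: bit 31 of the cached high word no longer
	 * matches bit 31 of the low word.  The new value depends only on
	 * the old high word, not on which low value was sampled.
	 */
	if ((int32_t)(hi ^ lo) < 0)
		m_cnt_hi = hi = (hi ^ 0x80000000u) + (hi >> 31);

	return ((uint64_t)hi << 32) | lo;	/* bit 63 carries the phase, hence 63 usable bits */
}

int main(void)
{
	/* The low counter just wrapped; both CPUs still see the stale high word. */
	m_cnt_hi = 0x80000005;
	uint64_t a = cnt32_to_63_sketch(0x00000010);	/* "CPU A" */
	m_cnt_hi = 0x80000005;				/* pretend B's read beat A's store */
	uint64_t b = cnt32_to_63_sketch(0x00000042);	/* "CPU B" */

	/*
	 * Both computed (0x80000005 ^ 0x80000000) + 1 == 0x00000006 and
	 * stored that same value, so the "double increment" never happens.
	 */
	printf("a=%016llx b=%016llx hi=%08x\n",
	       (unsigned long long)a, (unsigned long long)b,
	       (unsigned int)m_cnt_hi);
	return 0;
}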

> On UP, the same race could happen if the code is called with preemption
> enabled.

Wrong again.

> I don't think the "volatile" qualifier would necessarily make sure the
> compiler and CPU do the __m_cnt_hi read before the hw cnt low read. A
> real memory barrier to order mmio reads wrt memory reads (or an
> instruction sync barrier if the value is taken from CPU registers)
> would be required to ensure such ordering.

That's easy enough to fix, right?
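
(If the ordering does need to be enforced, the fix is on the order of
one line.  Just as a sketch, assuming the low word comes from mmio and
read_hw_counter() stands in for whatever actually reads it:

	hi = __m_cnt_hi;
	rmb();			/* or whatever barrier the counter source needs */
	lo = read_hw_counter();	/* hypothetical 32-bit hardware counter read */

i.e. a read barrier, or the architecture's instruction sync, between
the two reads.)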

> I also felt it would be more solid to have per-cpu structures to keep
> track of 32->64 bit TSC updates, given the TSCs can always be slightly
> out of sync:

If the low part is a per-CPU value, then the high part has to be a 
per-CPU value too.  There only needs to be a per-CPU variant of the same 
algorithm, where the only difference is that __m_cnt_hi would be per CPU.

If the low part is global, then __m_cnt_hi has to be global, even on SMP.
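
(Purely as a sketch of what such a per-CPU variant could look like;
this is not existing code, the names are made up, and it assumes the
caller keeps preemption disabled so the hi/lo pair comes from a single
CPU:

	/* needs <linux/percpu.h>; same arithmetic as cnt32_to_63() */
	static DEFINE_PER_CPU(u32, cnt_hi_pcpu);

	static inline u64 percpu_cnt32_to_63(u32 lo)
	{
		u32 hi = __get_cpu_var(cnt_hi_pcpu);

		if (unlikely((s32)(hi ^ lo) < 0))
			__get_cpu_var(cnt_hi_pcpu) = hi =
				(hi ^ 0x80000000) + (hi >> 31);

		return ((u64)hi << 32) | lo;
	}

with each caller passing in its own CPU's counter value.)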


Nicolas
