Date:	Mon, 27 May 2013 16:55:47 +0200
From:	Daniel Vetter <daniel@...ll.ch>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Maarten Lankhorst <maarten.lankhorst@...onical.com>,
	linux-arch@...r.kernel.org, x86@...nel.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	dri-devel <dri-devel@...ts.freedesktop.org>,
	"linaro-mm-sig@...ts.linaro.org" <linaro-mm-sig@...ts.linaro.org>,
	rob clark <robclark@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Dave Airlie <airlied@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>,
	"linux-media@...r.kernel.org" <linux-media@...r.kernel.org>
Subject: Re: [PATCH v3 2/3] mutex: add support for wound/wait style locks, v3

On Mon, May 27, 2013 at 4:47 PM, Daniel Vetter <daniel@...ll.ch> wrote:
> On Mon, May 27, 2013 at 10:21 AM, Peter Zijlstra <peterz@...radead.org> wrote:
>> On Wed, May 22, 2013 at 07:24:38PM +0200, Maarten Lankhorst wrote:
>>> >> +static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
>>> >> +                             struct ww_class *ww_class)
>>> >> +{
>>> >> +  ctx->task = current;
>>> >> +  /* loop until the stamp is non-zero; 0 only shows up on wrap */
>>> >> +  do {
>>> >> +          ctx->stamp = atomic_long_inc_return(&ww_class->stamp);
>>> >> +  } while (unlikely(!ctx->stamp));
>>> > I suppose we'll figure something out when this becomes a bottleneck. Ideally
>>> > we'd do something like:
>>> >
>>> >  ctx->stamp = local_clock();
>>> >
>>> > but for now we cannot guarantee that's not jiffies, and I suppose that's a tad
>>> > too coarse to work for this.
>>> This might mess up when two cores happen to return exactly the same time; how do you choose a winner in that case?
>>> EDIT: Using the pointer address like you suggested below is fine with me. The ctx pointer would be stable enough.
>>
>> Right, but for now I suppose the 'global' atomic is OK; if/when we find
>> it hurts performance we can revisit. I was just spewing ideas :-)
>
> We could do a simple
>
> ctx->stamp = (local_clock() << nr_cpu_shift) | local_processor_id()
>
> to work around any bad luck in grabbing the ticket. With sufficiently
> fine clocks the bias towards smaller cpu ids would be rather
> irrelevant. Just wanted to drop this idea before I forget about it
> again ;-)
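
For reference, a sketch of the quoted construction: local_clock() is the
real kernel interface, while the shift width and raw_smp_processor_id()
are assumptions standing in for the nr_cpu_shift and local_processor_id()
shorthand above (the reply below explains why this still isn't unique):

    #include <linux/cpumask.h>   /* nr_cpu_ids */
    #include <linux/log2.h>      /* ilog2() */
    #include <linux/sched.h>     /* local_clock() */
    #include <linux/smp.h>       /* raw_smp_processor_id() */

    /* Sketch only: pack the cpu id into the low bits of a per-cpu clock
     * so stamps taken at the same instant on different cpus still differ. */
    static unsigned long ww_stamp_clock(void)
    {
            /* enough low bits to hold any cpu id (assumed width) */
            unsigned int shift = ilog2(nr_cpu_ids) + 1;

            return ((unsigned long)local_clock() << shift) |
                   raw_smp_processor_id();
    }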
Not a good idea to throw around random ideas right after a workout.
This is broken: different threads on the same cpu share the low bits,
so a coarse clock can still hand them identical stamps. Comparing ctx
pointers on top of the timestamp, on the other hand, should work as a
tie-breaker.
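
A sketch of that comparison, assuming the ww_acquire_ctx from the patch;
ww_ctx_wins is a hypothetical helper name, not the code that was merged:

    /* Order contexts by stamp; fall back to the ctx pointers themselves
     * when two contexts drew the same stamp. */
    static bool ww_ctx_wins(struct ww_acquire_ctx *a,
                            struct ww_acquire_ctx *b)
    {
            /* older (smaller, modulo wrap) stamp wins */
            if (a->stamp != b->stamp)
                    return (signed long)(a->stamp - b->stamp) < 0;

            /* equal stamps: the ctx pointer is a stable, unique tie-break */
            return a < b;
    }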
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch