Message-ID: <1357235686.21409.25360.camel@edumazet-glaptop>
Date:	Thu, 03 Jan 2013 09:54:46 -0800
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Jan Beulich <JBeulich@...e.com>, Rik van Riel <riel@...hat.com>,
	therbert@...gle.com, walken@...gle.com, jeremy@...p.org,
	tglx@...utronix.de, aquini@...hat.com, lwoodman@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 3/3 -v2] x86,smp: auto tune spinlock backoff delay
 factor

On Thu, 2013-01-03 at 11:45 -0500, Steven Rostedt wrote:
> On Thu, 2013-01-03 at 08:10 -0800, Eric Dumazet wrote:
> 
> > > But then would the problem even exist? If the lock is on its own cache
> > > line, it shouldn't cause a performance issue if other CPUs are spinning
> > > on it. Would it?
> > 
> > Not sure I understand the question.
> > 
> 
> I'll explain my question better.
> 
> I thought the whole point of Rik's patches was to solve a performance
> problem caused by contention on a lock that shares a cache line with
> data.
> 
> In the ideal case, locks won't be contended, and are taken and released
> quickly (being from the RT world, I know this isn't true :-( ). In this
> case, it's also advantageous to keep the lock on the same cache line as
> the data that's being updated. This way, the process of grabbing the
> lock also pulls in the data that you will soon be using.
> 
> But then the problem occurs when you have a bunch of other CPUs trying
> to take this lock in a tight spin. Every time the owner of the lock
> touches the data, the other CPUs doing a LOCK read on the spinlock
> cause bus contention on the owner CPU, because the data shares the
> cache line with the lock and needs to be synced: the owner has just
> written the very line that the other CPUs are hammering with LOCK
> reads. By adding the delays, the CPU with the lock doesn't stall at
> every update of the data protected by the lock.
> 
> Thus, if monitor/mwait is ideal only for locks on their own cache
> lines, then it is pointless for the locks that are causing the issue
> we are trying to fix.
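
To make the layout trade-off above concrete, here is a minimal sketch
(an illustration for this discussion, not code from any patch) of the
two layouts being contrasted; the 64-byte line size is an assumed
x86-typical value:

/* Layout A keeps the lock and its data on one cache line: taking the
 * lock also prefetches the data, but spinners polling the lock bounce
 * the same line the owner is writing. Layout B pads the lock onto its
 * own line, isolating the owner's writes from the spinners. */
#include <stdatomic.h>

#define CACHE_LINE 64                    /* assumed line size */

struct counter_shared {
    atomic_flag lock;
    long        count;                   /* same line as the lock */
};

struct counter_padded {
    atomic_flag lock;
    char        pad[CACHE_LINE - sizeof(atomic_flag)];
    long        count;                   /* its own line */
};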

I think you misunderstood the monitor/mwait usage I was speaking of:

- Only for MCS-type locks, where each CPU spins on its own busy/locked
bit.
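
A minimal sketch of such an MCS-style queue lock (my illustration in
C11 atomics; the names are made up, not from any patch): each waiter
spins, and could instead monitor/mwait, on the flag in its own qnode,
which sits on its own cache line:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct qnode {
    _Atomic(struct qnode *) next;
    atomic_bool             locked;    /* each CPU polls only this */
};

void mcs_lock(_Atomic(struct qnode *) *tail, struct qnode *me)
{
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);

    struct qnode *prev = atomic_exchange(tail, me);
    if (prev) {
        atomic_store(&prev->next, me);
        while (atomic_load(&me->locked))
            ;                          /* spin (or mwait) locally */
    }
}

void mcs_unlock(_Atomic(struct qnode *) *tail, struct qnode *me)
{
    struct qnode *next = atomic_load(&me->next);
    if (!next) {
        struct qnode *expected = me;
        if (atomic_compare_exchange_strong(tail, &expected, NULL))
            return;                    /* no successor queued */
        while (!(next = atomic_load(&me->next)))
            ;                          /* successor still linking in */
    }
    atomic_store(&next->locked, false); /* wake exactly one waiter */
}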

Of course, if we use a ticket spinlock with no additional storage, we
have to spin without making any memory reference, and that's Rik's
patch, using this idea:

http://www.cs.rochester.edu/research/synchronization/pseudocode/ss.html#ticket
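
For reference, a rough C11 rendering of the proportional-backoff
ticket lock described on that page (an adaptation, not the patch
itself; BACKOFF_BASE is an assumed constant, and tuning that value
automatically is exactly what the patch under discussion does):

#include <stdatomic.h>

#define BACKOFF_BASE 50          /* assumed pause units per waiter */

struct ticket_lock {
    atomic_uint next;            /* next ticket to hand out */
    atomic_uint serving;         /* ticket currently being served */
};

static inline void cpu_relax(void)
{
#if defined(__x86_64__) || defined(__i386__)
    __asm__ __volatile__("pause");
#endif
}

void ticket_lock_acquire(struct ticket_lock *l)
{
    unsigned me = atomic_fetch_add(&l->next, 1);

    for (;;) {
        unsigned now = atomic_load(&l->serving);
        if (now == me)
            return;
        /* Back off in proportion to our distance from the head of
         * the queue, so far-away waiters poll the line less often. */
        for (unsigned i = BACKOFF_BASE * (me - now); i; i--)
            cpu_relax();
    }
}

void ticket_lock_release(struct ticket_lock *l)
{
    atomic_fetch_add(&l->serving, 1);
}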



