[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <1357227169.10284.32.camel@gandalf.local.home>
Date: Thu, 03 Jan 2013 10:32:49 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Jan Beulich <JBeulich@...e.com>, Rik van Riel <riel@...hat.com>,
therbert@...gle.com, walken@...gle.com, jeremy@...p.org,
tglx@...utronix.de, aquini@...hat.com, lwoodman@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 3/3 -v2] x86,smp: auto tune spinlock backoff delay
factor
On Thu, 2013-01-03 at 05:35 -0800, Eric Dumazet wrote:
> On Thu, 2013-01-03 at 08:24 -0500, Steven Rostedt wrote:
> > On Thu, 2013-01-03 at 09:05 +0000, Jan Beulich wrote:
> >
> > > > How much bus traffic do monitor/mwait cause behind the scenes?
> > >
> > > I would suppose that this just snoops the bus for writes, but the
> > > amount of bus traffic involved in this isn't explicitly documented.
> > >
> > > One downside of course is that unless a spin lock is made occupy
> > > exactly a cache line, false wakeups are possible.
> >
> > And that would probably be very likely, as the whole purpose of Rik's
> > patches was to lower cache stalls due to other CPUs pounding on spin
> > locks that share the cache line of what is being protected (and
> > modified).
>
> A monitor/mwait would be an option only if using MCS (or K42 variant)
> locks, where each cpu would wait on a private and dedicated cache line.
But then would the problem even exist? If the lock is on its own cache
line, it shouldn't cause a performance issue if other CPUs are spinning
on it. Would it?
-- Steve
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists