Message-ID: <1356137332.21834.8155.camel@edumazet-glaptop>
Date: Fri, 21 Dec 2012 16:48:52 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Rik van Riel <riel@...hat.com>
Cc: linux-kernel@...r.kernel.org, aquini@...hat.com, walken@...gle.com,
lwoodman@...hat.com, jeremy@...p.org,
Jan Beulich <JBeulich@...ell.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 3/3 -v2] x86,smp: auto tune spinlock backoff delay factor
On Fri, 2012-12-21 at 18:56 -0500, Rik van Riel wrote:
> Argh, the first one had a typo in it that did not influence
> performance with fewer threads running, but that made things
> worse with more than a dozen threads...
>
> Please let me know if you can break these patches.
> ---8<---
> Subject: x86,smp: auto tune spinlock backoff delay factor
> +#define MIN_SPINLOCK_DELAY 1
> +#define MAX_SPINLOCK_DELAY 1000
> +DEFINE_PER_CPU(int, spinlock_delay) = { MIN_SPINLOCK_DELAY };
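For readers following along, the mechanism being tuned above is roughly
proportional backoff with a self-adjusting delay. Here is a minimal
user-space C11 sketch of that idea, not the actual ticket-lock patch;
everything beyond the two quoted constants is illustrative:

#include <stdatomic.h>

#define MIN_SPINLOCK_DELAY 1
#define MAX_SPINLOCK_DELAY 1000

/* One delay estimate per thread, standing in for the per-cpu variable. */
static _Thread_local int spinlock_delay = MIN_SPINLOCK_DELAY;

struct spinlock { atomic_int locked; };

static void cpu_relax(void)
{
	__asm__ __volatile__("rep; nop" ::: "memory"); /* x86 PAUSE */
}

static void spin_lock_backoff(struct spinlock *lock)
{
	int delay = spinlock_delay;

	while (atomic_exchange_explicit(&lock->locked, 1,
					memory_order_acquire)) {
		/* Contended: wait proportionally to the current estimate,
		 * and grow the estimate while we keep failing (capped). */
		if (delay < MAX_SPINLOCK_DELAY)
			delay++;
		for (int i = 0; i < delay; i++)
			cpu_relax();
	}

	/* Acquired: decay the estimate so lightly contended locks
	 * do not inherit a huge delay. */
	if (delay > MIN_SPINLOCK_DELAY)
		delay--;
	spinlock_delay = delay;
}

static void spin_unlock_backoff(struct spinlock *lock)
{
	atomic_store_explicit(&lock->locked, 0, memory_order_release);
}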
Using a single spinlock_delay per cpu assumes there is a single
contended spinlock on the machine, or that contended
spinlocks protect the same critical section.
Given that we probably know where the contended spinlocks are, couldn't
we use a real scalable implementation for them?
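By a scalable implementation I mean something like an MCS-style queue
lock, where each waiter spins on its own node instead of all CPUs
hammering the lock word's cache line. A self-contained C11 sketch,
purely illustrative:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *me)
{
	atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&me->locked, true, memory_order_relaxed);

	struct mcs_node *prev =
		atomic_exchange_explicit(&lock->tail, me, memory_order_acq_rel);
	if (prev) {
		/* Queue behind prev and spin on our own flag only. */
		atomic_store_explicit(&prev->next, me, memory_order_release);
		while (atomic_load_explicit(&me->locked, memory_order_acquire))
			;
	}
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *me)
{
	struct mcs_node *next =
		atomic_load_explicit(&me->next, memory_order_acquire);
	if (!next) {
		/* No visible successor: try to swing tail back to empty. */
		struct mcs_node *expected = me;
		if (atomic_compare_exchange_strong_explicit(
				&lock->tail, &expected, NULL,
				memory_order_acq_rel, memory_order_relaxed))
			return;
		/* A successor is enqueueing; wait for its next pointer. */
		while (!(next = atomic_load_explicit(&me->next,
						     memory_order_acquire)))
			;
	}
	atomic_store_explicit(&next->locked, false, memory_order_release);
}

Each locker passes its own node (on the stack or per cpu), so lock
handoff touches only the owner's and the successor's cache lines.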
A known contended one is the Qdisc lock in the network layer. We added a
second lock (busylock), sitting on a separate cache line, to lower the
pressure a bit, but a scalable lock would be much better...
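The busylock pattern is roughly this (a simplified sketch using pthread
spinlocks, not the exact net/core/dev.c code; the real version keys off
qdisc state rather than a trylock, and names like xmit_path are
illustrative):

#include <pthread.h>
#include <stdbool.h>

static pthread_spinlock_t qdisc_lock;  /* the hot lock */
/* busylock is forced onto its own cache line. */
static pthread_spinlock_t busylock __attribute__((aligned(64)));

__attribute__((constructor))
static void busylock_init(void)
{
	pthread_spin_init(&qdisc_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&busylock, PTHREAD_PROCESS_PRIVATE);
}

static void xmit_path(void)
{
	/* Fast path: grab the hot lock directly if it is free. */
	bool contended = pthread_spin_trylock(&qdisc_lock) != 0;

	if (contended) {
		/* Slow path: waiters serialize on busylock first, so at
		 * most one of them spins on the qdisc_lock cache line. */
		pthread_spin_lock(&busylock);
		pthread_spin_lock(&qdisc_lock);
	}

	/* ... enqueue / transmit under qdisc_lock ... */

	pthread_spin_unlock(&qdisc_lock);
	if (contended)
		pthread_spin_unlock(&busylock);
}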
I guess there are patent issues...