Message-ID: <20090126133207.GC13567@elte.hu>
Date: Mon, 26 Jan 2009 14:32:07 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Frederic Weisbecker <fweisbec@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Mandeep Singh Baines <msb@...gle.com>
Subject: Re: [RFC][PATCH 2/2] add a counter for writers spinning on a rwlock
* Frederic Weisbecker <fweisbec@...il.com> wrote:
> This patch adds a counter for writers that enter a rwlock slow path. For
> example it can be useful for slow background tasks which perform some
> jobs on the tasklist, such as the hung_task detector
> (kernel/hung_task.c).
>
> It adds an inc/dec pair on the slow path and 4 bytes to each rwlock, so
> the overhead is not zero.
>
> Only x86 is supported for now; writers_spinning_lock() will return 0 on
> other archs (which is perhaps not a good idea).
>
> Comments?
hm, it increases the rwlock data type:
> diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
> index 845f81c..163e6de 100644
> --- a/arch/x86/include/asm/spinlock_types.h
> +++ b/arch/x86/include/asm/spinlock_types.h
> @@ -13,6 +13,7 @@ typedef struct raw_spinlock {
>
> typedef struct {
> unsigned int lock;
> + unsigned int spinning_writers;
> } raw_rwlock_t;
that's generally not done lightly. Performance figures for a relevant
workload are obligatory in this case - proving that it's worth the size
bloat.
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/