Message-Id: <20080827120553.9c9d6690.akpm@linux-foundation.org>
Date: Wed, 27 Aug 2008 12:05:53 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: "Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Cc: cmm@...ibm.com, tytso@....edu, sandeen@...hat.com,
linux-ext4@...r.kernel.org, aneesh.kumar@...ux.vnet.ibm.com,
a.p.zijlstra@...llo.nl, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -V3 01/11] percpu_counters: make fbc->count read atomic
on 32 bit architecture
On Wed, 27 Aug 2008 20:58:26 +0530
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com> wrote:
> fbc->count is of type s64. The change was introduced by
> 0216bfcffe424a5473daa4da47440881b36c1f4, which changed the type
> from long to s64. Moving to s64 also means that on 32-bit
> architectures we can read a torn (wrong) value of fbc->count.
> Since fbc->count is read frequently but updated rarely, use
> seqlocks. This should reduce the impact of locking in the read
> path on 32-bit architectures.
>
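(For readers following along: on a 32-bit machine a 64-bit load
compiles to two separate 32-bit loads, so a concurrent update can be
observed half-old, half-new.  A rough illustration, assuming a
little-endian layout; this is not code from the patch:

	s64 read_s64_on_32bit(const s64 *p)
	{
		const u32 *half = (const u32 *)p;
		u32 lo = half[0];	/* load #1 */
		u32 hi = half[1];	/* load #2; an update may land in between */

		return ((s64)hi << 32) | lo;
	}

A reader racing with an update that carries across the 32-bit boundary
can hence return a value which was never actually stored.)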
So... yesterday's suggestion to investigate implementing this at a
lower level wasn't popular?
> include/linux/percpu_counter.h | 28 ++++++++++++++++++++++++----
> lib/percpu_counter.c | 20 ++++++++++----------
> 2 files changed, 34 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
> index 9007ccd..1b711a1 100644
> --- a/include/linux/percpu_counter.h
> +++ b/include/linux/percpu_counter.h
> @@ -6,7 +6,7 @@
> * WARNING: these things are HUGE. 4 kbytes per counter on 32-way P4.
> */
>
> -#include <linux/spinlock.h>
> +#include <linux/seqlock.h>
> #include <linux/smp.h>
> #include <linux/list.h>
> #include <linux/threads.h>
> @@ -16,7 +16,7 @@
> #ifdef CONFIG_SMP
>
> struct percpu_counter {
> - spinlock_t lock;
> + seqlock_t lock;
> s64 count;
> #ifdef CONFIG_HOTPLUG_CPU
> struct list_head list; /* All percpu_counters are on a list */
> @@ -53,10 +53,30 @@ static inline s64 percpu_counter_sum(struct percpu_counter *fbc)
> return __percpu_counter_sum(fbc);
> }
>
> -static inline s64 percpu_counter_read(struct percpu_counter *fbc)
> +#if BITS_PER_LONG == 64
> +static inline s64 fbc_count(struct percpu_counter *fbc)
> {
> return fbc->count;
> }
> +#else
> +/* 32-bit architectures lack an atomic 64-bit load */
> +static inline s64 fbc_count(struct percpu_counter *fbc)
> +{
> + s64 ret;
> + unsigned seq;
> + do {
> + seq = read_seqbegin(&fbc->lock);
> + ret = fbc->count;
> + } while (read_seqretry(&fbc->lock, seq));
> + return ret;
> +
Please don't put unneeded blank lines into random places.
> +}
> +#endif
This is now too large to be inlined.
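(That is: keep only a declaration in the header and move the loop out
of line, along these lines:

	/* include/linux/percpu_counter.h */
	#if BITS_PER_LONG != 64
	s64 fbc_count(struct percpu_counter *fbc);
	#endif

with the function body, minus the "static inline", living in
lib/percpu_counter.c.)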
> +static inline s64 percpu_counter_read(struct percpu_counter *fbc)
> +{
> + return fbc_count(fbc);
> +}
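The write side in lib/percpu_counter.c isn't quoted above, but for
context it presumably ends up looking something like this sketch,
i.e. the __percpu_counter_add() of this era with the spinlock calls
swapped for seqlock write-side calls:

	void __percpu_counter_add(struct percpu_counter *fbc, s64 amount,
				  s32 batch)
	{
		s64 count;
		s32 *pcount;
		int cpu = get_cpu();

		pcount = per_cpu_ptr(fbc->counters, cpu);
		count = *pcount + amount;
		if (count >= batch || count <= -batch) {
			write_seqlock(&fbc->lock);	/* was spin_lock() */
			fbc->count += count;
			*pcount = 0;
			write_sequnlock(&fbc->lock);	/* was spin_unlock() */
		} else {
			*pcount = count;
		}
		put_cpu();
	}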
This change means that a percpu_counter_read() from interrupt context
on a 32-bit machine is now deadlockable, whereas it previously was not
deadlockable on either 32-bit or 64-bit.
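Concretely, the failure mode is this interleaving (illustrative, all
on one CPU):

	__percpu_counter_add()
	  write_seqlock(&fbc->lock);	/* sequence count is now odd */
	    <interrupt arrives>
	    percpu_counter_read()
	      fbc_count() spins in the read_seqbegin()/read_seqretry()
	      loop waiting for an even sequence, but the interrupted
	      writer on this CPU can never resume to provide one:
	      deadlock.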
This flows on to lib/proportions.c, which uses percpu_counter_read()
and also does spin_lock_irqsave() internally, indicating that it is
(or was) designed to be used in IRQ context.
It means that bdi_stat() can no longer be used from interrupt context.
So a whole lot of thought and review and checking is needed here. It
should all be spelled out in the changelog. This will be a horridly
rare deadlock, so suitable WARN_ON()s should be added to detect when
callers are vulnerable to it.
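Something like this, say (a sketch of the suggested check, untested):

	/* 32-bit variant */
	static inline s64 fbc_count(struct percpu_counter *fbc)
	{
		s64 ret;
		unsigned seq;

		WARN_ON_ONCE(in_irq());	/* deadlockable against the write side */
		do {
			seq = read_seqbegin(&fbc->lock);
			ret = fbc->count;
		} while (read_seqretry(&fbc->lock, seq));
		return ret;
	}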
Or we make the whole thing irq-safe.
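That would mean disabling interrupts around the write side, e.g. (a
sketch using the stock seqlock helpers):

	unsigned long flags;

	write_seqlock_irqsave(&fbc->lock, flags);
	fbc->count += count;
	*pcount = 0;
	write_sequnlock_irqrestore(&fbc->lock, flags);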