Message-ID: <1285824982.5211.675.camel@edumazet-laptop>
Date: Thu, 30 Sep 2010 07:36:22 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Dave Chinner <david@...morbit.com>, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 15/17] fs: inode per-cpu last_ino allocator
On Wednesday, 29 September 2010 at 21:53 -0700, Andrew Morton wrote:
> On Wed, 29 Sep 2010 22:18:47 +1000 Dave Chinner <david@...morbit.com> wrote:
>
> > From: Eric Dumazet <dada1@...mosbay.com>
> >
Please note my new email address, thanks.
> > last_ino was converted to an atomic variable to allow the inode_lock
> > to go away. However, contended atomics do not scale on large
> > machines, and new_inode() triggers excessive contention in such
> > situations.
> >
> > Solve this problem by giving each cpu a per_cpu variable, fed by
> > the shared last_ino, but only once every 1024 allocations. This
> > reduces contention on the shared last_ino and gives the same
> > spread of inode numbers as before (i.e. the same wraparound after
> > 2^32 allocations).
> >
> > [npiggin: some extra commenting and use of defines]
> >
> > ...
> >
> > +#ifdef CONFIG_SMP
> > +#define LAST_INO_BATCH 1024
> > +/*
> > + * Each cpu owns a range of LAST_INO_BATCH numbers.
> > + * 'shared_last_ino' is dirtied only once out of LAST_INO_BATCH allocations,
> > + * to renew the exhausted range.
> > + *
> > + * This does not significantly increase overflow rate because every CPU can
> > + * consume at most LAST_INO_BATCH-1 unused inode numbers. So there is
> > + * NR_CPUS*(LAST_INO_BATCH-1) wastage. At 4096 and 1024, this is ~0.1% of the
> > + * 2^32 range, and is a worst-case. Even a 50% wastage would only increase
> > + * overflow rate by 2x, which does not seem too significant.
> > + *
> > + * On a 32-bit, non-LFS stat() call, glibc will generate an EOVERFLOW
> > + * error if st_ino won't fit in the target struct field. Use a 32-bit
> > + * counter here to attempt to avoid that.
> > + */
> > +static DEFINE_PER_CPU(unsigned int, last_ino);
> > +static atomic_t shared_last_ino;
> > +
> > +static unsigned int last_ino_get(void)
> > +{
> > +	unsigned int *p = &get_cpu_var(last_ino);
> > +	unsigned int res = *p;
> > +
> > +	if (unlikely((res & (LAST_INO_BATCH-1)) == 0))
> > +		res = (unsigned int)atomic_add_return(LAST_INO_BATCH,
> > +				&shared_last_ino) - LAST_INO_BATCH;
>
> May as well remove the "- LAST_INO_BATCH" there, I think. It'll skew
> the results a tad at startup, but why does that matter?
Because on x86, atomic_add_return(val, ptr) is implemented as an xadd()
(which returns the old value) followed by adding val back. So writing
"atomic_add_return(val, ptr) - val" lets gcc cancel that final add and
removes one instruction ;)
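
For reference, a condensed sketch of what the x86 version boils down
to (the real kernel code uses inline asm with "lock xadd"; this is
only to illustrate the cancellation):

static inline int atomic_add_return(int i, atomic_t *v)
{
	/* lock xadd atomically adds i and returns the OLD value */
	int old = xadd(&v->counter, i);

	/* the wrapper adds i on top, so "... - i" reduces back to old */
	return old + i;
}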
>
> > +	*p = ++res;
> > +	put_cpu_var(last_ino);
> > +	return res;
> > +}
> > +#else
> > +static unsigned int last_ino_get(void)
> > +{
> > +	static unsigned int last_ino;
> > +
> > +	return ++last_ino;
> > +}
>
> This is racy with CONFIG_PREEMPT on some architectures, I suspect. I'd
> suggest conversion to atomic_t with, of course, an explanatory comment ;)
>
Thanks, I'll rework the patch!
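
Something like this for the !SMP case, perhaps (untested sketch):

static unsigned int last_ino_get(void)
{
	static atomic_t last_ino = ATOMIC_INIT(0);

	/*
	 * atomic_inc_return() closes the window a preemption could
	 * otherwise hit between the load and the store of a plain
	 * "++last_ino".
	 */
	return (unsigned int)atomic_inc_return(&last_ino);
}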
I am pretty happy to finally see some interest in this patch series :)