Message-ID: <20101016100441.GP4681@dastard>
Date: Sat, 16 Oct 2010 21:04:41 +1100
From: Dave Chinner <david@...morbit.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 03/19] fs: Convert nr_inodes and nr_unused to per-cpu
counters
On Sat, Oct 16, 2010 at 10:29:36AM +0200, Eric Dumazet wrote:
> On Saturday, 16 October 2010 at 19:13 +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@...hat.com>
> >
> > The number of inodes allocated does not need to be tied to the
> > addition or removal of an inode to/from a list. If we are not tied
> > to a list lock, we could update the counters when inodes are
> > initialised or destroyed, but to do that we need to convert the
> > counters to be per-cpu (i.e. independent of a lock). This means that
> > we have the freedom to change the list/locking implementation
> > without needing to care about the counters.
> >
> > Based on a patch originally from Eric Dumazet.
> >
> > Signed-off-by: Dave Chinner <dchinner@...hat.com>
> > Reviewed-by: Christoph Hellwig <hch@....de>
> > ---
>
> NACK
>
> Some people believe the percpu_counter object is the right answer to such
> distributed counters, because the loop is done on 'online' cpus instead
> of 'possible' cpus. "It must be better if the number of possible cpus is
> 4096 and only one or two cpus are online"...
>
> But if we do this loop only on rare events, like
> "cat /proc/sys/fs/inode-nr", then the percpu_counter is more
> expensive, because percpu_add() _is_ more expensive:
>
> - It's a function call and a lot of instructions/cycles per call, while
> this_cpu_inc(nr_inodes) is a single instruction, using no register on
> x86.
>
> - It's possibly accessing a shared spinlock and counter when the percpu
> counter reaches the batch limit.
>
>
> To recap: nr_inodes is not a counter that needs to be estimated in real
> time, since we have no limit on the number of inodes in the machine (the
> limit is the memory allocator).
>
> Unless someone can prove "cat /proc/sys/fs/inode-nr" must be performed
> thousands of times per second on their setup, the choice I made to scale
> nr_inodes is better than the 'obvious percpu_counter' choice.
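For reference, the two approaches being compared above look roughly like
this -- an illustrative sketch only, with made-up names, not the code from
the patch:

#include <linux/percpu.h>
#include <linux/percpu_counter.h>

/*
 * Raw per-cpu variable: the increment is a single instruction on x86,
 * but a read has to sum over all possible CPUs.
 */
static DEFINE_PER_CPU(unsigned int, nr_inodes_example);

static inline void inode_count_inc_raw(void)
{
	this_cpu_inc(nr_inodes_example);	/* no lock, no function call */
}

static int get_nr_inodes_raw(void)
{
	int cpu;
	long sum = 0;

	for_each_possible_cpu(cpu)		/* the reader pays for the loop */
		sum += per_cpu(nr_inodes_example, cpu);
	return sum < 0 ? 0 : sum;
}

/*
 * Generic percpu_counter: the increment is a function call that can hit
 * a shared spinlock once the per-cpu delta exceeds the batch, but the
 * accurate read only loops over online CPUs.  Needs percpu_counter_init()
 * at setup time.
 */
static struct percpu_counter nr_inodes_counter;

static inline void inode_count_inc_generic(void)
{
	percpu_counter_inc(&nr_inodes_counter);
}

static int get_nr_inodes_generic(void)
{
	return percpu_counter_sum_positive(&nr_inodes_counter);
}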
get_nr_inodes_unused() is called on every single shrinker call, i.e.
once for every 128 inodes we attempt to reclaim. Given that I'm seeing
inode reclaim rates on the order of a million inodes per second on an
8p box, that meets your criteria for using the generic percpu counter
infrastructure.
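To put numbers on that -- a trivial userspace check, assuming the shrinker
sums the counter once per batch of 128 inodes at the reclaim rate above:

#include <stdio.h>

int main(void)
{
	const double reclaim_rate = 1e6;	/* inodes reclaimed per second (observed) */
	const double batch = 128;		/* inodes scanned per shrinker call */

	/* one counter sum per shrinker call -> ~7800 sums/sec */
	printf("counter reads per second: %.0f\n", reclaim_rate / batch);
	return 0;
}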
Also, get_nr_inodes() is called by get_nr_dirty_inodes(), which is
called by the high level inode writeback code, so it will typically be
called on the order of tens of times per second, and the number of
calls increases with the number of filesystems that are active. That's
still a much higher frequency than your "cat /proc/sys/fs/inode-nr"
example suggests it would be called at.
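Roughly, get_nr_dirty_inodes() is just the difference of the two sums, so
every writeback-side call walks the per-cpu counters twice -- a sketch of
the shape only, assuming a get_nr_inodes_unused_raw() helper built the same
way as get_nr_inodes_raw() above:

/* Sketch only: nr dirty = allocated - unused, clamped to zero. */
static int get_nr_dirty_inodes_sketch(void)
{
	int nr_dirty = get_nr_inodes_raw() - get_nr_inodes_unused_raw();

	return nr_dirty > 0 ? nr_dirty : 0;
}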
So I'd say that by your reasoning, the choice of using the generic
percpu counters is the right one to make.
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com