Message-ID: <20101108231256.GS2715@dastard>
Date: Tue, 9 Nov 2010 10:12:56 +1100
From: Dave Chinner <david@...morbit.com>
To: Jeff Moyer <jmoyer@...hat.com>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] dio: scale unaligned IO tracking via multiple lists
On Mon, Nov 08, 2010 at 10:36:06AM -0500, Jeff Moyer wrote:
> Dave Chinner <david@...morbit.com> writes:
>
> > From: Dave Chinner <dchinner@...hat.com>
> >
> > To avoid concerns that a single list and lock tracking the unaligned
> > IOs will not scale appropriately, create multiple lists and locks
> > and choose between them by hashing the unaligned block being zeroed.
> >
> > Signed-off-by: Dave Chinner <dchinner@...hat.com>
> > ---
> > fs/direct-io.c | 49 ++++++++++++++++++++++++++++++++++++-------------
> > 1 files changed, 36 insertions(+), 13 deletions(-)
> >
> > diff --git a/fs/direct-io.c b/fs/direct-io.c
> > index 1a69efd..353ac52 100644
> > --- a/fs/direct-io.c
> > +++ b/fs/direct-io.c
> > @@ -152,8 +152,28 @@ struct dio_zero_block {
> > atomic_t ref; /* reference count */
> > };
> >
> > -static DEFINE_SPINLOCK(dio_zero_block_lock);
> > -static LIST_HEAD(dio_zero_block_list);
> > +#define DIO_ZERO_BLOCK_NR 37LL
>
> I'm always curious to know how these numbers are derived. Why 37?
It's a prime number large enough to give enough lists to minimise
contention whilst providing decent distribution for 8 byte aligned
addresses with low overhead. XFS uses the same sort of waitqueue
hashing for global IO completion wait queues used by truncation
and inode eviction (see xfs_ioend_wait()).
Seemed reasonable (and simple!) just to copy that design pattern
for another global IO completion wait queue....
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com