Message-ID: <CA+55aFwqC9hF++S-VPHJBFRrqfyNvsvqwzP=Vtzkv8qSYVqLxA@mail.gmail.com>
Date: Thu, 2 Aug 2012 11:08:06 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Josh Triplett <josh@...htriplett.org>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Sasha Levin <levinsasha928@...il.com>,
Tejun Heo <tj@...nel.org>, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
paul.gortmaker@...driver.com
Subject: Re: [RFC 1/4] hashtable: introduce a small and naive hashtable
On Thu, Aug 2, 2012 at 10:59 AM, Josh Triplett <josh@...htriplett.org> wrote:
>
> You shouldn't have any extra indirection for the base, if it lives
> immediately after the size.
Umm. You *always* have the extra indirection. Because you have that allocation.
So you have to follow the pointer to get the base/size, because they
aren't compile/link-time constants.
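Not code from the patch under discussion, just a minimal C sketch of the point: with a heap-allocated table, every lookup must first load the size and base through a pointer, while a statically sized table bakes both into the instruction stream. The struct and function names are illustrative, not from the kernel.

```c
#include <stddef.h>

/* Dynamically sized table: base and size live behind a pointer, so a
 * lookup begins with dependent loads before the bucket is even addressed. */
struct dyn_table {
	unsigned int size;	/* must be loaded from memory every time */
	void **buckets;		/* a second pointer to follow */
};

static void *dyn_lookup(struct dyn_table *t, unsigned int hash)
{
	/* load t->size, then t->buckets, and only then the bucket itself */
	return t->buckets[hash & (t->size - 1)];
}

/* Statically sized table: the mask is a compile/link-time constant and
 * the base address is fixed, so the bucket load is the first access. */
#define STATIC_SIZE 128
static void *static_buckets[STATIC_SIZE];

static void *static_lookup(unsigned int hash)
{
	return static_buckets[hash & (STATIC_SIZE - 1)];
}
```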
The cache misses were noticeable in macro-benchmarks, and in
micro-benchmarks the smaller L1 hash table means that things fit much
better in the L2.
It really improved performance. Seriously. Even things like "find /"
that had a lot of L1 misses ended up faster, because "find" is
apparently pretty moronic and does some things over and over. For
stuff that fit in the L1, it was quite noticeable.
Of course, one reason for the speedup for the dcache was that I also
made the L1 only contain the simple cases (ie no "d_compare" thing
etc), so it sped up dcache lookups in other ways too. But according
to the profiles, it really looked like better cache behavior was one
of the bigger things.
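The shape being described (a small, cache-hot, direct-mapped L1 holding only the simple entries, falling back to the general table for everything else) might be sketched like this. This is not the actual dcache code; the entry layout, the `simple` flag, and the fallback table here are all illustrative stand-ins.

```c
#include <stddef.h>

#define L1_SIZE 64	/* small power of two: the whole array stays cache-hot */

struct entry {
	unsigned int key;
	int value;
	int simple;	/* only "simple" entries are allowed into the L1 */
};

static struct entry *l1_cache[L1_SIZE];

/* Stand-in for the general, allocation-backed table: a tiny linear
 * scan, just so the sketch is self-contained. */
#define FULL_SIZE 256
static struct entry full_table[FULL_SIZE];
static size_t full_count;

static struct entry *full_lookup(unsigned int key)
{
	for (size_t i = 0; i < full_count; i++)
		if (full_table[i].key == key)
			return &full_table[i];
	return NULL;
}

static struct entry *lookup(unsigned int key)
{
	struct entry *e = l1_cache[key & (L1_SIZE - 1)];

	if (e && e->key == key && e->simple)
		return e;		/* fast path: one load from a small array */
	return full_lookup(key);	/* complicated or missing: fall back */
}

static void insert(unsigned int key, int value, int simple)
{
	struct entry *e = &full_table[full_count++];

	e->key = key;
	e->value = value;
	e->simple = simple;
	if (simple)
		l1_cache[key & (L1_SIZE - 1)] = e;	/* direct-mapped: just overwrite */
}
```

Because the L1 is direct-mapped and holds only the easy cases, a hit needs no chain walking and no comparison callbacks, which is where both speedups come from.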
Trust me: every problem in computer science may be solved by an
indirection, but those indirections are *expensive*. Pointer chasing
is just about the most expensive thing you can do on modern CPUs.
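A minimal illustration of why (my framing, not from the thread): walking a linked list is a serial chain of dependent loads, since each hop must complete before the next address is even known, whereas array accesses have addresses computable up front, so the CPU can prefetch and overlap them.

```c
#include <stddef.h>

struct node {
	int val;
	struct node *next;
};

/* Each iteration waits on the previous load before it knows where to
 * go next: a potential cache miss per hop, fully serialized. */
static int sum_list(const struct node *n)
{
	int sum = 0;

	for (; n; n = n->next)
		sum += n->val;
	return sum;
}

/* No dependent loads here: addresses are a function of i, so hardware
 * prefetching and out-of-order execution can overlap the accesses. */
static int sum_array(const int *a, size_t len)
{
	int sum = 0;

	for (size_t i = 0; i < len; i++)
		sum += a[i];
	return sum;
}
```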
Linus