Message-ID: <20101219170646.482.qmail@science.horizon.com>
Date: 19 Dec 2010 12:06:46 -0500
From: "George Spelvin" <linux@...izon.com>
To: linux@...izon.com, npiggin@...il.com
Cc: bharrosh@...asas.com, linux-arch@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: Big git diff speedup by avoiding x86 "fast string" memcmp
> First, a byte-by-byte strcpy_from_user of the whole name string to
> kernel space. Then a byte-by-byte chunking and hashing component
> paths according to '/'. Then a byte-by-byte memcmp against the
> dentry name.
Well, you've put your finger on the obvious place to do the aligned copy,
if you want. Looking into it, I note that __d_lookup() does a 32-bit
hash compare before doing a string compare, so the memcmp should almost
always succeed. (Failures are due to hash collisions and rename races.)
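For the curious, the compare order is roughly the following (names
invented for illustration; this is not the actual fs/dcache.c code):

struct name {
	u32 hash;
	u32 len;
	const unsigned char *str;
};

static int names_equal(const struct name *a, const struct name *b)
{
	if (a->hash != b->hash)		/* cheap 32-bit reject */
		return 0;
	if (a->len != b->len)
		return 0;
	/* Hashes and lengths already match, so this memcmp almost
	 * always succeeds; a mismatch means a hash collision or a
	 * rename race. */
	return memcmp(a->str, b->str, a->len) == 0;
}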
> I'd love to do everything with 8 byte loads, do the component
> separation and hashing at the same time as copy from user, and
> have the padded and aligned component strings and their hash
> available... but complexity.
It actually doesn't seem that difficult to do in fs/namei.c:do_getname
via a heavily hacked strncpy_from_user. Call it strhash_from_user: it
copies the string while accumulating the hash and the length, until it
hits a nul or '/'. It has to return the length, the hash, and the
termination condition: nul, '/', or space exhausted. (Rough sketch
below.)
Then you could arrange for each component to be padded so that it
starts aligned.
(The hash and length can be stored directly in the qstr. The length is
only otherwise needed for components that end in /, so you could return
a magic negative length in those cases.)
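To make that concrete, here is a rough byte-at-a-time sketch
(strhash_from_user is just my name for it; a real version would live
next to strncpy_from_user, with proper error returns, and could use
8-byte loads instead of this byte loop):

enum name_end { END_NUL, END_SLASH, END_NOSPACE, END_FAULT };

static enum name_end
strhash_from_user(char *dst, const char __user *src, unsigned long space,
		  u32 *hashp, u32 *lenp)
{
	u32 hash = 0, len = 0;

	while (len < space) {
		unsigned char c;

		if (__get_user(c, src + len))
			return END_FAULT;	/* real code: -EFAULT */
		if (c == '\0' || c == '/') {
			dst[len] = '\0';
			*hashp = end_name_hash(hash);	/* finalize */
			*lenp = len;
			return c ? END_SLASH : END_NUL;
		}
		dst[len++] = c;
		hash = partial_name_hash(c, hash);	/* per-byte mix */
	}
	return END_NOSPACE;
}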
If it matters, Bob Jenkins wrote a "one-at-a-time" hash function
that is actually slightly less work per character than the current
partial_name_hash. (Two shifts, two adds, and a *11 get turned into
two shifts, two adds, and an XOR.)
/* Bob Jenkins' one-at-a-time hash: per-character mixing step. */
static inline u32 __attribute__((const))
partial_name_hash(unsigned char c, u32 hash)
{
	hash += c;
	hash += hash << 10;
	hash ^= hash >> 6;
	return hash;
}

/* Final avalanche once the whole component has been mixed in. */
static inline u32 end_name_hash(u32 hash)
{
	hash += hash << 3;
	hash ^= hash >> 11;
	hash += hash << 15;
	return hash;
}
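Used per component, it's the same loop as today (hash_component is just
an illustrative name; this is essentially what full_name_hash does):

/* Hash one already-split component, e.g. the "usr" in "/usr/bin". */
static inline u32 hash_component(const unsigned char *name, u32 len)
{
	u32 hash = 0;			/* init_name_hash() */

	while (len--)
		hash = partial_name_hash(*name++, hash);
	return end_name_hash(hash);
}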
(I note that there's zero reason for the current partial_name_hash to
use an unsigned long intermediate value, since the high 32 bits have no
effect on the final result. It just forces unnecessary REX prefixes.)
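For comparison, the current helpers in include/linux/dcache.h are
roughly:

static inline unsigned long
partial_name_hash(unsigned long c, unsigned long prevhash)
{
	return (prevhash + (c << 4) + (c >> 4)) * 11;
}

static inline unsigned long end_name_hash(unsigned long hash)
{
	return (unsigned int) hash;
}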
The main problem with initializing all the qstr structures early is that
it can lead to an oddly dynamic PATH_MAX and/or a kind of kernel
DoS by passing in paths with many short directory names that would need
maximal padding. (E.g. a path built from one-byte names like "a/a/a/..."
roughly quadruples in size once every component is padded to start on an
8-byte boundary.)
> On my Westmere system, time to do a stat is 640 cycles plus 10
> cycles for every byte in the string (this cost holds perfectly
> from 1 byte name up to 32 byte names in my test range).
> `git diff` average path name strings are 31 bytes, although this
> is much less cache friendly, and over several components (my
> test is just a single component).
Thank you for these useful real-world numbers!
I hadn't realized the base cost was so low.