Message-Id: <200810121656.05999.nickpiggin@yahoo.com.au>
Date: Sun, 12 Oct 2008 16:56:05 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Andi Kleen <andi@...stfloor.org>, davem@...emloft.net
Cc: Dave Jones <davej@...hat.com>, x86@...nel.org,
Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: Update cacheline size on X86_GENERIC
On Sunday 12 October 2008 01:01, Andi Kleen wrote:
> > No immediate ideas. Jens probably is a good person to cc. With direct IO
> > workloads, hd_struct should mostly only be touched in partition remapping
> > and IO accounting.
>
> I found it doubtful whether grouping the rw and ro members together was a
> good idea.
If all members touched in the fastpath fit into one cacheline, then it
definitely is: if you put the rw members in another cacheline, that line
is still going to bounce just the same, and you just take up one more
cacheline for the ro members.
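To make that concrete, here is a minimal sketch of what "all fastpath
members in one line" means. The member names are hypothetical and stand in
for hd_struct; it is an illustration of the layout argument, not the real
structure:

	#include <stdint.h>

	/*
	 * Hypothetical sketch, not the real hd_struct: every member touched
	 * in the direct-IO fastpath (partition remapping reads start/nr_sects,
	 * accounting writes the counters) sits in one 64-byte line.  Moving
	 * the rw counters to a second line would not stop that line from
	 * bouncing between CPUs on writes; it would only add a second line of
	 * footprint for the ro members.
	 */
	struct part_sketch {
		/* read-mostly in the fastpath: partition remapping */
		uint64_t start_sect;
		uint64_t nr_sects;

		/* read-write in the fastpath: IO accounting */
		unsigned long reads, read_sectors;
		unsigned long writes, write_sectors;

		/* rarely-touched slowpath members would follow here */
	} __attribute__((aligned(64)));

On 64-bit that is 48 bytes of fastpath data, so ro and rw together still
fit in a single 64-byte line.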
> > At this point, you would want to cacheline align hd_struct. So if you
>
> The problem is probably not false sharing, but simply cache misses because
> it's so big. I think.
If you line up all the commonly touched members on cacheline boundaries,
then presumably you have to assume the start of the struct has, e.g.,
cacheline alignment. There are situations where aligning the items can
almost halve the cacheline footprint of a given data structure (e.g. if we
align struct page to 64 bytes, then random access to the mem_map array has
almost half the cacheline footprint). I've always been interested in whether
"oltp" benefits from that (e.g. defining WANT_PAGE_VIRTUAL for x86-64).