Message-ID: <20081011131115.GB12131@one.firstfloor.org>
Date: Sat, 11 Oct 2008 15:11:15 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: Andi Kleen <andi@...stfloor.org>, Dave Jones <davej@...hat.com>,
x86@...nel.org, Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: Update cacheline size on X86_GENERIC
> > > That would be nice. It would be interesting to know what is causing
> > > the slowdown.
> >
> > At least that test is extremely cache-footprint sensitive. A lot of the
> > cache misses are, surprisingly, in hd_struct, because it runs
> > with hundreds of disks and each needs hd_struct references in the fast path.
> > The recent introduction of fine-grained per-partition statistics
> > caused a large slowdown. But I don't think kernel workloads
> > are normally that cache sensitive.
>
> That's interesting. struct device is pretty big. I wonder if fields
Yes it is (it can actually be shrunk easily -- see willy's recent
patch to remove the struct completion from knodes), but that won't help,
because it will always be larger than a cache line and it sits in the
middle, so accesses to its first part and its last part will still
hit separate cache lines.
> couldn't be rearranged to minimise the fastpath cacheline footprint?
> I guess that's already been looked at?
Yes, but not very intensively. So far I have been looking for more
detailed profiling data to see the exact accesses.
Of course, if you have any immediate ideas, those could be tried too.
-Andi
--
ak@...ux.intel.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/