Message-ID: <20070914093210.GB27479@luba.cern.ch>
Date: Fri, 14 Sep 2007 11:32:10 +0200
From: KELEMEN Peter <Peter.Kelemen@...n.ch>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
Cc: Bruce Allen <ballen@...vity.phys.uwm.edu>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: ECC and DMA to/from disk controllers
* Alan Cox (alan@...rguk.ukuu.org.uk) [20070910 14:54]:
Alan,
Thanks for your interest (and Bruce, for posting).
> - The ECC level on the drive processors and memory cache vary
> by vendor. Good luck getting any information on this although
> maybe if you are Cern sized they will talk
Do you have any contacts? We're in direct contact only with the
system integrators, not with the drive manufacturers.
> The next usual mess is network transfers. [...]
All our data is based on system-local probes (i.e. no network
involved).
> Type III wrong block on PATA fits with the fact the block number
> isn't protected and also the limits on the cache quality of
> drives/drive firmware bugs.
Thanks, that's new information to us. I was planning to extend
fsprobe with locality information inside the buffers so that we
can catch this corruption as it happens.
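One way to add such locality information (a hypothetical sketch,
not the actual fsprobe code; the block size and header layout are
my assumptions) is to stamp each probe block with its own logical
block number and a pass counter, so a wrong-block event reveals
where the misplaced data actually came from:

```python
import struct

BLOCK_SIZE = 4096               # assumed probe block size
HEADER = struct.Struct("<QQ")   # (block_number, pass_id), little-endian

def make_block(block_no: int, pass_id: int) -> bytes:
    """Build a probe block tagged with its own location."""
    header = HEADER.pack(block_no, pass_id)
    # Deterministic filler derived from the block number, so partial
    # corruption inside the payload is also detectable.
    filler = bytes((block_no + i) & 0xFF
                   for i in range(BLOCK_SIZE - HEADER.size))
    return header + filler

def check_block(data: bytes, expected_no: int, expected_pass: int):
    """Return None if the block is intact, else a diagnostic tuple."""
    found_no, found_pass = HEADER.unpack_from(data)
    if (found_no, found_pass) != (expected_no, expected_pass):
        # A wrong block number means the data landed in (or was read
        # from) the wrong place; a stale pass id means old data
        # resurfaced from some cache layer.
        return ("mismatch", found_no, found_pass)
    filler = bytes((expected_no + i) & 0xFF
                   for i in range(BLOCK_SIZE - HEADER.size))
    if data[HEADER.size:] != filler:
        return ("payload-corruption", found_no, found_pass)
    return None
```

If block 7 is read back but check_block reports block 42, that
points at a misdirected write or read rather than plain bit rot.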
> For drivers/ide there are *lots* of problems with error handling
> so that might be implicated (would want to do old v new ide
> tests on the same h/w which would be very intriguing).
We tried to “force” these corruptions out from their hiding
places on targeted systems, but we failed miserably. Currently we
can't reproduce the issue at will, even on the affected systems.
> Stale data from disk cache I've seen reported, also offsets from
> FIFO hardware bugs (The LOTR render farm hit the latter and had
> to avoid UDMA to avoid a hardware bug)
That's interesting; I'll think about how to expose this.
Currently a single pass writes each piece of data only once, so I
don't think any chunk can live for hours in the drives' cache.
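One way to probe for stale cached data (a hypothetical extension,
not current fsprobe behaviour; the function name and pattern
values are made up) is to overwrite a region with a second pass
and check that data from the first pass does not resurface on
re-read:

```python
import os

def stale_read_probe(path: str, size: int = 4096) -> bool:
    """Write a pass-1 pattern, overwrite it with a pass-2 pattern,
    then re-read and check which pass comes back.  Returns True if
    stale (pass-1) data was served.  A real probe would also use
    O_DIRECT where supported, to push the I/O past the page cache
    down to the drive."""
    pass1 = b"\x01" * size
    pass2 = b"\x02" * size
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.write(fd, pass1)
        os.fsync(fd)                    # force pass 1 to the device
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, pass2)
        os.fsync(fd)                    # force the overwrite, too
        os.lseek(fd, 0, os.SEEK_SET)
        got = os.read(fd, size)
    finally:
        os.close(fd)
    return got == pass1                 # stale data from some cache
```

On healthy hardware this should always return False; a True result
would indicate that some layer served back superseded data.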
> Chunks of zero sounds like caches again, would be interesting to
> know what hardware changes occurred at the point they began to
> pop up and what software.
They seem to be popping up more frequently on ARECA-based boxes.
The “software” is a moving target, as we gradually upgrade the
computer center.
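To correlate these zero chunks with hardware, it would help to
record their exact offsets and lengths; here is a minimal sketch
(my own illustration, not fsprobe code) that scans a read-back
buffer for runs of zero bytes, so one can check whether they align
with sector- or cache-sized boundaries:

```python
def find_zero_runs(buf: bytes, min_len: int = 512):
    """Return (offset, length) for every run of zero bytes at least
    min_len long.  Alignment of the offsets (e.g. to 512-byte
    sectors or controller cache lines) can hint at which layer
    dropped the data."""
    runs = []
    start = None
    for i, b in enumerate(buf):
        if b == 0:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i - start))
            start = None
    if start is not None and len(buf) - start >= min_len:
        runs.append((start, len(buf) - start))
    return runs
```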
> We also see chipset bugs under high contention some of which
> are explained and worked around (VIA ones in the past), others
> we see are clear correlations - eg between Nvidia chipsets and
> Silicon Image SATA controllers.
Most of our workhorses are 3ware controllers, the CPU nodes
usually have Intel SATA chips.
The fsprobe utility we run in the background on practically all
our boxes is available at http://cern.ch/Peter.Kelemen/fsprobe/ .
We have it deployed on several thousand machines to gather data.
I know that some other HEP institutes have looked at it, but I
have no information on who's running it or on how many boxes, let
alone what it found. I would be very much interested in whatever
findings people have.
Peter
--
.+'''+. .+'''+. .+'''+. .+'''+. .+''
Kelemen Péter / \ / \ Peter.Kelemen@...n.ch
.+' `+...+' `+...+' `+...+' `+...+'