Message-ID: <Pine.GSO.4.53.0612131422490.5969@compserv1>
Date: Wed, 13 Dec 2006 14:32:17 -0500 (EST)
From: Nikolai Joukov <kolya@...sunysb.edu>
To: Phillip Susi <psusi@....rr.com>
cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [ANNOUNCE] RAIF: Redundant Array of Independent Filesystems
> > Nikolai Joukov wrote:
> > > replication. For RAID4- and RAID5-like configurations, RAIF performed
> > > about two times *better* than software RAID and even better than an
> > > Adaptec 2120S RAID5 controller. This is because RAIF is located above
> > > the file system caches and can cache parity as normal data when needed.
> > > We have more performance details in a technical report, if anyone is
> > > interested.
> >
> > This doesn't make sense to me. You do not want to cache the parity
> > data. It only needs to be used to validate the data blocks when the
> > stripe is read; after that, you only want to cache the data and
> > throw out the parity. Caching the parity as well will pollute the
> > cache and should thus lower performance, since more important data
> > gets thrown out.
>
> This happens automatically: unused parity pages are treated like any
> other unused pages and get reused to cache something else. Also, the
> parity never gets cached if you do not write the data (or recover the
> data). However, if you use the same parity page over and over, you do
> not need to fetch it from the disk again.
To avoid confusion here: data recovery is not the only situation in which
it is necessary to read the parity. The existing parity is also needed for
writes smaller than the page size, which must read the old data and old
parity in order to compute the new parity (a read-modify-write cycle).
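
For illustration, here is a minimal user-space sketch of that
read-modify-write cycle. This is not RAIF code: read_page() and
write_page() are hypothetical helpers standing in for the underlying
I/O, which in RAIF can often be served from the lower file systems'
caches rather than the disk.

#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Hypothetical I/O helpers (assumptions, not RAIF's API). */
extern void read_page(int dev, size_t pgno, unsigned char *buf);
extern void write_page(int dev, size_t pgno, const unsigned char *buf);

/* Write `len' bytes at `offset' within one page of a parity-protected
 * stripe unit.  Because the write covers only part of the page, the
 * old data and the old parity must be read first. */
void small_write(int data_dev, int parity_dev, size_t pgno,
                 size_t offset, const unsigned char *src, size_t len)
{
	unsigned char data[PAGE_SIZE], parity[PAGE_SIZE];
	size_t i;

	assert(offset + len <= PAGE_SIZE);

	read_page(data_dev, pgno, data);	/* old data */
	read_page(parity_dev, pgno, parity);	/* old parity: the read at issue */

	/* new parity = old parity XOR old data XOR new data */
	for (i = 0; i < len; i++) {
		parity[offset + i] ^= data[offset + i] ^ src[i];
		data[offset + i] = src[i];
	}

	write_page(data_dev, pgno, data);
	write_page(parity_dev, pgno, parity);
}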
Nikolai.
---------------------
Nikolai Joukov, Ph.D.
Filesystems and Storage Laboratory
Stony Brook University