Message-ID: <Pine.LNX.4.63.0607312133080.15179@qynat.qvtvafvgr.pbz>
Date:	Mon, 31 Jul 2006 21:41:02 -0700 (PDT)
From:	David Lang <dlang@...italinsight.com>
To:	David Masover <ninja@...phack.com>
cc:	tdwebste2@...oo.com, Theodore Tso <tytso@....edu>,
	Nate Diller <nate.diller@...il.com>,
	Adrian Ulrich <reiser4@...nkenlights.ch>,
	"Horst H. von Brand" <vonbrand@....utfsm.cl>, ipso@...ppymail.ca,
	reiser@...esys.com, lkml@...productions.com, jeff@...zik.org,
	linux-kernel@...r.kernel.org, reiserfs-list@...esys.com
Subject: Re: Solaris ZFS on Linux [Was: Re: the "'official' point of
 view" expressed by kernelnewbies.org regarding reiser4 inclusion]

On Mon, 31 Jul 2006, David Masover wrote:

>> And perhaps a
>> really good clustering filesystem for markets that
>> require NO downtime. 
>
> Thing is, a cluster is about the only FS I can imagine that could reasonably 
> require (and MAYBE provide) absolutely no downtime. Everything else, the more 
> you say it requires no downtime, the more I say it requires redundancy.
>
> Am I missing any more obvious examples where you can't have enough 
> redundancy, but you can't have downtime either?

Just because you have redundancy doesn't mean that your data is idle enough for
you to run a repacker with your spare cycles. To run a repacker you need a window
when the chunk of the filesystem that you are repacking is not being read or
written. It doesn't matter whether that data lives on one disk or on nine disks all
mirroring the same data; you can't just break off one of the copies and repack
it, because by the time you finish it won't match the live drives anymore.

Database servers have a repacker (vacuum), and they are under tremendous
pressure from their users to avoid having to run it because of the performance
hit it generates. (The theory in the past was exactly what was presented in
this thread: make things run faster most of the time and accept the performance
hit when you repack.) The trend now seems to be toward a repacker thread that runs
continuously, causing a small impact all the time (one that can be factored into
capacity planning) instead of a large impact once in a while.
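The tradeoff above can be sketched with a toy simulation (hypothetical, not any real filesystem's or database's code): each tick, writes leave some dead space behind; a periodic repacker reclaims it all at once every N ticks, while an incremental repacker spends at most a fixed work budget per tick. The names and numbers here are made up for illustration.

```python
def simulate(ticks, dead_per_tick, budget=None, period=None):
    """Return (max per-tick repack cost, leftover dead space).

    budget: incremental repacker's per-tick work limit (bytes).
    period: big-bang repacker's interval in ticks (reclaims everything).
    """
    dead = 0       # accumulated dead (reclaimable) space
    max_cost = 0   # worst single-tick repack cost seen
    for t in range(1, ticks + 1):
        dead += dead_per_tick              # writes fragment the store
        if budget is not None:
            work = min(dead, budget)       # bounded work every tick
        elif period is not None and t % period == 0:
            work = dead                    # repack everything at once
        else:
            work = 0
        dead -= work
        max_cost = max(max_cost, work)
    return max_cost, dead

# Periodic repack: huge spike once in a while.
big_max, _ = simulate(1000, dead_per_tick=10, period=100)
# Incremental repack with a budget slightly above the dirtying rate:
# small, predictable cost every tick, and dead space never piles up.
inc_max, inc_left = simulate(1000, dead_per_tick=10, budget=12)
print(big_max, inc_max, inc_left)
```

As long as the per-tick budget at least matches the rate at which writes create dead space, the backlog stays bounded and the worst-case stall is the budget itself, which is exactly what makes it possible to fold into capacity planning.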

The other thing they are seeing as new people start using them is that the
newbies don't realize they need to do something as archaic as running a repacker
periodically; as a result they let things degrade to the point where performance is
really bad, without understanding why.

David Lang
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
