Date:	Tue, 20 Sep 2011 12:00:34 -0400
From:	Ted Ts'o <tytso@....edu>
To:	torn5 <torn5@...ftmail.org>
Cc:	linux-ext4@...r.kernel.org
Subject: Re: What to put for unknown stripe-width?

On Tue, Sep 20, 2011 at 05:29:34PM +0200, torn5 wrote:
> 
> No this is not correct, for MD at least.
> MD uses strips to compute parity, which are always 4k wide for each
> device. The reads in your example would be 32k read from two
> devices, followed by 32k write to two devices.

Then where the heck are these 1MB numbers that you cited originally
coming from?  If by that you mean the LVM PE size, that doesn't matter
to the file system.  It matters as far as your efficiency of doing LVM
snapshots, but what matters from the perspective of the file system's
stripe-width (stride doesn't really matter with the ext4 flex_bg
layout) is the RAID parameters at the MD level, NOT the LVM
parameters.  (This is assuming LVM is properly aligning its PE's with
the RAID stripe sizes, but I'll assume for the purposes of this
discussion that LVM is competently implemented...)
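
To make that concrete: the numbers the file system cares about fall
straight out of the MD parameters.  A minimal sketch (the chunk size,
disk count, and /dev/md0 are illustrative assumptions, not anything
from this thread):

```shell
# Hypothetical 4-disk RAID 5: 3 data disks, 64 KiB MD chunk, 4 KiB fs blocks
chunk_kb=64; block_kb=4; data_disks=3

stride=$((chunk_kb / block_kb))        # fs blocks per chunk on one disk
stripe_width=$((stride * data_disks))  # fs blocks in one full data stripe

# The mkfs invocation this implies (printed, not run):
echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0"
```

The LVM PE size appears nowhere in that arithmetic, which is the point.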

> So, regarding my original problem, the way you use stride-size in
> ext4 is that you begin every new file at the start of a stripe?

The design goal (not completely implemented at the moment) is that
block allocation will try to align files to begin at the start of a
stripe (it may not succeed if the free space is fragmented, etc.), and
that file writes will try to avoid RAID 5 read/modify/write cycles.

> For growing an existing file what do you do, do you continue to
> write it from where it was, without holes, or you put a hole, select
> a new location at the start of a new stripe and start from there?

We will try to arrange things so that subsequent reads and writes are
well aligned.  So the assumption here is that the application will
also be intelligent enough to do the right thing.  (i.e., if you have
a 32k stripe width, writes need to be 32k aligned and in multiples of
32k at the logical block layer, which the file system will translate
into appropriately aligned reads and writes to the RAID array.)
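
A minimal sketch of what "intelligent enough" means for the 32k case
above (the temp file is a stand-in for a file on the array; sizes are
taken from the example, everything else is illustrative):

```shell
# With a 32 KiB stripe width, keep every write a whole multiple of
# 32 KiB so the fs can issue full-stripe writes and skip the RAID 5
# read/modify/write cycle.
stripe=$((32 * 1024))
tmp=$(mktemp)

# Four writes, each exactly one stripe wide, back to back.
dd if=/dev/zero of="$tmp" bs="$stripe" count=4 conv=notrunc status=none

size=$(stat -c %s "$tmp")   # 4 full stripes, no partial tail
echo "$size"
rm -f "$tmp"
```

A 33 KiB write in the same position would force a read/modify/write on
the stripe it only partially covers.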

> Regarding multiple very small files wrote together by pdflush, what
> do they do? They are sticked together on the same stripe without
> holes, or each one goes to a different stripe?

pdflush has no idea about RAID, so for small files it's all up in the
air.

> Is the change of stripe-width with tune2fs supported on a live,
> mounted fs? (I mean maybe with a mount -o remount but no umount)

It's supported, but it's not going to change the layout of the live,
mounted file system.
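
So something like the following is accepted against a mounted file
system, but it only updates the allocation hint for future writes;
already-allocated blocks stay where they are.  (Values and /dev/md0 are
illustrative, e.g. after a reshape from 3 to 4 data disks; note tune2fs
spells the option stripe_width with an underscore.)

```shell
# Recompute the hints for the hypothetical post-reshape geometry:
# 64 KiB chunk / 4 KiB blocks = stride 16, now times 4 data disks.
stride=16
stripe_width=$((stride * 4))

# The tune2fs invocation this implies (printed, not run):
echo "tune2fs -E stride=$stride,stripe_width=$stripe_width /dev/md0"
```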

I'm going to strongly caution you about using fancy-shmancy features
like this MD reshape.  You yourself have said it's not stable.  But do
you really need it?  You haven't said what your application is, but it
may be much better to simply add new disks in multiples of your
existing stripe width, and just keep things simple.

Also, why are you using RAID?  Is it for reliability, or performance,
or both?  Have you seen some of the recent (well, from the last 4-5
years) papers about the reliability of disks and RAID?  There are some
especially interesting comments, such as

    "Protecting online data only via RAID 5 today verges on
    professional malpractice"

http://storagemojo.com/2007/02/26/netapp-weighs-in-on-disks/

Regards,

					- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html