lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Fri, 19 Sep 2014 13:26:38 -0400
From:	TR Reardon <thomas_reardon@...mail.com>
To:	Theodore Ts'o <tytso@....edu>
CC:	"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>
Subject: RE: Reserved GDT inode: blocks vs extents

> Date: Fri, 19 Sep 2014 12:36:49 -0400
> From: tytso@....edu
> To: thomas_reardon@...mail.com
> CC: linux-ext4@...r.kernel.org
> Subject: Re: Reserved GDT inode: blocks vs extents
>
> On Fri, Sep 19, 2014 at 11:54:39AM -0400, TR Reardon wrote:
>> Hello all: there's probably a good reason for this, but I'm wondering why inode#7 (reserved GDT blocks) is always created with a block map rather than extent?
>>
>> [see ext2fs_create_resize_inode()]
>
> It's created using an indirect map because the on-line resizing code
> in the kernel relies on it. It's rather dependent on the structure of
> the indirect block map so that the kernel knows where to fetch the
> necessary blocks in each block group to extend the block group
> descriptor.
>
> So no, we can't change it.
>
> And we do have a solution, namely the meta_bg layout which mostly
> solves the problem, although at the cost of slowing down the mount
> time.
>
> But that may be moot, since one of the things that I've been
> considering is to stop pinning the block group descriptors in memory,
> and just start reading in memory as they are needed. The rationale is
> that for a 4TB disk, we're burning 8 MB of memory. And if you have
> two dozen disks attached to your system, then you're burning 192
> megabytes of memory, which starts to fairly significant amounts of
> memory, especially for bookcase NAS servers.

But I'd argue that in many use cases, in particular bookcase NAS servers, 
ext4+vfs should optimize for avoiding spinups rather than reducing RAM usage. 
Would this change increase spinups when scanning for changes, say via rsync?
For mostly-cold-storage I wish I had the ability to make dentry- and inode-cache 
long lived, and have ext4 prefer to retain directory over file-data cache blocks, 
rather than current non-deterministic behavior via vfs_cache_pressure.  Unfortunately, 
it is precisely the kinds of largefiles on bookcase NAS servers being read linearly 
(and used only once) that blowout the cache of directory blocks (and dentries etc
but it's really the dir blocks that create the problem with spinups on cold-storage)

Of course, it's likelier that I don't actually understand how all these caches work ;)

+Reardon


 		 	   		  --
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Powered by blists - more mailing lists