Message-ID: <20110105202103.GM2959@thunk.org>
Date:	Wed, 5 Jan 2011 15:21:03 -0500
From:	Ted Ts'o <tytso@....edu>
To:	Andreas Dilger <adilger.kernel@...ger.ca>
Cc:	Ext4 Developers List <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH 6/6] ext4: Dynamically allocate the jbd2_inode in
 ext4_inode_info as necessary

On Wed, Jan 05, 2011 at 12:26:33PM -0700, Andreas Dilger wrote:
> 
> How does this change impact the majority of users that are running
> with a journal?  It is clearly a win for a small percentage of users
> with no-journal mode, but it may be a net increase in memory usage
> for the majority of the users (with journal).  There will now be two
> allocations for every inode, and the extra packing these allocations
> into slabs will increase memory usage for an inode, and would
> definitely result in more allocation/freeing overhead.
> 
> The main question is how many files are ever opened for write?

Even if we do two allocations for every inode (not just inodes opened
for write), it's a win simply because moving the jinode out of the
ext4_inode_info structure shrinks it enough that we can now pack
18 inodes into a 16k slab on x86_64.  It turns out that the slab
allocator is pretty inefficient with large data structures, while it
handles smaller ones (such as the jbd2_inode structure) much more
efficiently in terms of wasted memory.

> It
> isn't just the number of currently-open files for write, because the
> jinfo isn't released until the inode is cleared from memory.  While
> I suspect that most inodes in cache are never opened for write, it
> would be worthwhile to compare the ext4_inode_cache object count
> against the jbd2_inode object count, and see how the total memory
> compares to a before-patch system running different workloads (with
> journal).

Sure.  It should be possible to release the jinode when the file is
completely closed, in ext4_release_file.  That would reduce the memory
footprint significantly.  I hadn't worried about it too much because
the jbd2_inode structure is only 48 bytes, and you can fit 85 of them
on a 4k page with only 16 bytes wasted.  But it's fair that we should
release the jinode once the inode is no longer used by any file
descriptors.


I'll make the other changes you suggested; thanks!!

							- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
