Message-ID: <Zkg7OCSYJ7rzv6_D@casper.infradead.org>
Date: Sat, 18 May 2024 06:23:04 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Jeff Layton <jlayton@...nel.org>
Cc: Christian Brauner <brauner@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.cz>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, Amir Goldstein <amir73il@...il.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fs: switch timespec64 fields in inode to discrete
integers
On Fri, May 17, 2024 at 08:08:40PM -0400, Jeff Layton wrote:
> For reference (according to pahole):
>
> Before: /* size: 624, cachelines: 10, members: 53 */
> After: /* size: 616, cachelines: 10, members: 56 */
Smaller is always better, but for a meaningful improvement, we'd want
the change to increase the number of inodes per slab. On my laptop
running a Debian 6.6.15 kernel, I see:
inode_cache 11398 11475 640 25 4 : tunables 0 0 0 : slabdata 459 459 0
so there are 25 inodes per 4 pages. Going down to 632 is still 25 per 4
pages. At 628 bytes, we get 26 per 4 pages. At 604 bytes, we're at 27.
And at 584 bytes, we get 28.
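Rough sketch of that packing math (a userspace toy, not the real slab
calculation, which also has to account for alignment and any debug
metadata):

#include <stdio.h>

#define PAGE_SIZE  4096UL
#define SLAB_ORDER 2		/* 4 pages per slab, as in the inode_cache line above */

int main(void)
{
	unsigned long slab_bytes = PAGE_SIZE << SLAB_ORDER;
	unsigned long sizes[] = { 640, 632, 628, 604, 584 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%lu bytes -> %lu objects per %lu-page slab\n",
		       sizes[i], slab_bytes / sizes[i],
		       slab_bytes / PAGE_SIZE);
	return 0;
}

Compiled and run, that reproduces the 25/25/26/27/28 progression.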
Of course, struct inode gets embedded in a lot of filesystem inodes.
xfs_inode 142562 142720 1024 32 8 : tunables 0 0 0 : slabdata 4460 4460 0
ext4_inode_cache 81 81 1184 27 8 : tunables 0 0 0 : slabdata 3 3 0
sock_inode_cache 2123 2223 832 39 8 : tunables 0 0 0 : slabdata 57 57 0
So any of them might cross a magic boundary where we suddenly get more
objects per slab.
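(For anyone following along who's less familiar with the VFS: each
filesystem wraps struct inode inside its own inode structure, roughly
like the sketch below. The struct and field names here are made up;
the real xfs_inode / ext4_inode_info have many more fs-private fields,
but the pattern is the same, so a byte saved in struct inode is saved
in every one of those slabs too.)

#include <linux/fs.h>	/* struct inode, container_of() via kernel headers */

struct foofs_inode_info {
	unsigned long	fs_private_state;
	/* ... more fs-private fields ... */
	struct inode	vfs_inode;	/* the embedded VFS inode */
};

static inline struct foofs_inode_info *FOOFS_I(struct inode *inode)
{
	return container_of(inode, struct foofs_inode_info, vfs_inode);
}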
Not trying to diss the work you've done here, just pointing out the
limits for anyone who's trying to do something similar. Or maybe
inspire someone to do more reductions ;-)