Date:   Fri, 27 Dec 2019 16:35:36 +0000
From:   Chris Down <chris@...isdown.name>
To:     Amir Goldstein <amir73il@...il.com>
Cc:     linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Al Viro <viro@...iv.linux.org.uk>,
        Matthew Wilcox <willy@...radead.org>,
        Jeff Layton <jlayton@...nel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Tejun Heo <tj@...nel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>, kernel-team@...com
Subject: Re: [PATCH 3/3] shmem: Add support for using full width of ino_t

Amir Goldstein writes:
>On Fri, Dec 27, 2019 at 4:30 PM Chris Down <chris@...isdown.name> wrote:
>>
>> The new inode64 option now uses get_next_ino_full, which always uses the
>> full width of ino_t (as opposed to get_next_ino, which always uses
>> unsigned int).
>>
>> Using inode64 makes inode number wraparound significantly less likely,
>> at the cost of making some features that rely on the underlying
>> filesystem not setting any of the highest 32 bits (eg. overlayfs' xino)
>> not usable.
>
>That's not an accurate statement. overlayfs xino just needs some high
>bits available. Therefore I never had any objection to having tmpfs use
>64bit ino values (from overlayfs perspective). My only objection is to
>use the same pool "irresponsibly" instead of per-sb pool for the heavy
>users.

Per-sb get_next_ino is fine, but seems less important if inode64 is used. Or is 
your point about people who would still be using inode32?
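
For concreteness, here's roughly what I have in mind for a per-sb pool. 
This is only a sketch: the next_ino field is hypothetical and isn't in 
any posted patch, though shmem_sb_info's stat_lock already exists:

	/*
	 * Sketch only: per-sb inode number pool for tmpfs.
	 * sbinfo->next_ino is a hypothetical new field; stat_lock is
	 * the existing shmem_sb_info spinlock.
	 */
	static ino_t shmem_get_next_ino(struct super_block *sb)
	{
		struct shmem_sb_info *sbinfo = SHMEM_SB(sb);
		ino_t ino;

		spin_lock(&sbinfo->stat_lock);
		ino = sbinfo->next_ino++;
		if (unlikely(ino == 0))	/* 0 is never a valid ino */
			ino = sbinfo->next_ino++;
		spin_unlock(&sbinfo->stat_lock);

		return ino;
	}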

I think things have become quite unclear in previous discussions, so I want to 
make sure we're all on the same page here. Are you saying you would 
theoretically ack the following series?

1. Recycle volatile slabs in tmpfs/hugetlbfs
2. Make get_next_ino per-sb
3. Add get_next_ino_full (also per-sb)
4. Add inode{32,64} mount options to tmpfs (option-handling sketch below)
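
For (4), I'd expect the option handling to look something like this in 
shmem's fs_context parser (again only a sketch; Opt_inode32/Opt_inode64 
and the full_inums flag are hypothetical names, not from a posted 
patch):

	/*
	 * Sketch only: reject inode64 where ino_t can't hold 64 bits,
	 * otherwise flip a hypothetical full_inums flag on the shmem
	 * mount context.
	 */
	case Opt_inode32:
		ctx->full_inums = false;
		break;
	case Opt_inode64:
		if (sizeof(ino_t) < 8)
			return invalf(fc,
				      "tmpfs: Cannot use inode64 with <64bit inums");
		ctx->full_inums = true;
		break;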

To keep this thread as high-signal as possible, I'll hold off on sending 
any other patches until I hear back on that :-)

Thanks again,

Chris
