Date:   Mon, 15 Oct 2018 14:20:15 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Christian Brauner <christian@...uner.io>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        "Luis R. Rodriguez" <mcgrof@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Joe Lawrence <joe.lawrence@...hat.com>,
        Waiman Long <longman@...hat.com>,
        Dominik Brodowski <linux@...inikbrodowski.net>,
        Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [PATCH v1 2/2] sysctl: handle overflow for file-max

On Mon, Oct 15, 2018 at 9:28 AM, Christian Brauner <christian@...uner.io> wrote:
> On Mon, Oct 15, 2018 at 09:11:51AM -0700, Kees Cook wrote:
>> On Mon, Oct 15, 2018 at 3:55 AM, Christian Brauner <christian@...uner.io> wrote:
>> > Currently, when writing
>> >
>> > echo 18446744073709551616 > /proc/sys/fs/file-max
>> >
>> > /proc/sys/fs/file-max will overflow and be set to 0. That quickly
>> > crashes the system.
>> > This commit explicitly caps the value for file-max to ULONG_MAX.
>> >
>> > Note, this isn't technically necessary since proc_get_long() will already
>> > return ULONG_MAX. However, there are two reasons why we should still do this:
>> > 1. it makes it explicit what the upper bound of file-max is instead of
>> >    making readers of the code infer it from proc_get_long() themselves
>> > 2. tunables other than file-max may want to set a lower max value than
>> >    ULONG_MAX, and we need to enable __do_proc_doulongvec_minmax() to handle
>> >    such cases too
>> >
>> > Cc: Kees Cook <keescook@...omium.org>
>> > Signed-off-by: Christian Brauner <christian@...uner.io>
>> > ---
>> > v0->v1:
>> > - if max value is less than ULONG_MAX, use max as upper bound
>> > - (Dominik) remove double "the" from commit message
>> > ---
>> >  kernel/sysctl.c | 4 ++++
>> >  1 file changed, 4 insertions(+)
>> >
>> > diff --git a/kernel/sysctl.c b/kernel/sysctl.c
>> > index 97551eb42946..226d4eaf4b0e 100644
>> > --- a/kernel/sysctl.c
>> > +++ b/kernel/sysctl.c
>> > @@ -127,6 +127,7 @@ static int __maybe_unused one = 1;
>> >  static int __maybe_unused two = 2;
>> >  static int __maybe_unused four = 4;
>> >  static unsigned long one_ul = 1;
>> > +static unsigned long ulong_max = ULONG_MAX;
>> >  static int one_hundred = 100;
>> >  static int one_thousand = 1000;
>> >  #ifdef CONFIG_PRINTK
>> > @@ -1696,6 +1697,7 @@ static struct ctl_table fs_table[] = {
>> >                 .maxlen         = sizeof(files_stat.max_files),
>> >                 .mode           = 0644,
>> >                 .proc_handler   = proc_doulongvec_minmax,
>> > +               .extra2         = &ulong_max,
>>
>> Don't we want this capped lower? The percpu comparisons, for example,
>> are all signed long. And there is at least this test, which could
>> overflow:
>>
>>         if (atomic_long_read(&unix_nr_socks) > 2 * get_max_files())
>>                 goto out;
>
> Does that check even make sense?
> Commit 518de9b39e854542de59bfb8b9f61c8f7ecf808b made get_max_files()
> return a long to bump the number of allowed files to more than 2^31.
>
> But assuming a platform where unsigned long is 64-bit (which is what
> get_max_files() returns) and atomic_long_read() is 64-bit too, this is
> guaranteed to overflow, no?  So I'm not clear what this is trying to do.
> Seems this should simply be:
>
> if (atomic_long_read(&unix_nr_socks) > get_max_files())
>         goto out;
>
> or am I missing a crucial point?
>
>>
>> Seems like max-files should be LONG_MAX / 2 or something instead?
>
> Hm. Isn't that a bit low? IIUC, this would mean cutting the maximum
> number of open files in half. If at all, shouldn't it be LONG_MAX?

LONG_MAX would align us with the values in the percpu stuff. I'm
really not sure what's happening in the sock check, but it's prone to
an unsigned multiplication overflow, if I'm reading it right. That
should probably just be a separate bug fix:

-         if (atomic_long_read(&unix_nr_socks) > 2 * get_max_files())
+         if (atomic_long_read(&unix_nr_socks) / 2 > get_max_files())
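
A minimal userspace sketch of the wraparound (assuming 64-bit unsigned
long; max_files and nr_socks here merely stand in for get_max_files()
and unix_nr_socks): doubling the limit can wrap, halving the counter
cannot.

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* anything above ULONG_MAX / 2, e.g. a file-max capped at ULONG_MAX */
	unsigned long max_files = ULONG_MAX / 2 + 1;
	unsigned long nr_socks = 1000;	/* stand-in for unix_nr_socks */

	/* old check: the doubled limit wraps to 0 here, so even 1000
	 * sockets look like they exceed the limit */
	printf("nr_socks > 2 * max_files : %d (2 * max_files = %lu)\n",
	       nr_socks > 2 * max_files, 2 * max_files);

	/* suggested check: halving the counter never overflows */
	printf("nr_socks / 2 > max_files : %d\n",
	       nr_socks / 2 > max_files);

	return 0;
}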

-Kees

>
>>
>> >         },
>> >         {
>> >                 .procname       = "nr_open",
>> > @@ -2795,6 +2797,8 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int
>> >                                 break;
>> >                         if (neg)
>> >                                 continue;
>> > +                       if (max && val > *max)
>> > +                               val = *max;
>> >                         val = convmul * val / convdiv;
>> >                         if ((min && val < *min) || (max && val > *max))
>> >                                 continue;
>> > --
>> > 2.17.1
>> >
>>
>> -Kees
>>
>> --
>> Kees Cook
>> Pixel Security



-- 
Kees Cook
Pixel Security
