Message-ID: <68d2b3afd9a7ee27cdb7ec9ff7eb45342ce23c12.camel@HansenPartnership.com>
Date: Fri, 20 Aug 2021 12:40:26 -0700
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: Jordy Zomer <jordy@...ing.systems>,
Kees Cook <keescook@...omium.org>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
Mike Rapoport <rppt@...ux.ibm.com>
Subject: Re: [PATCH] mm/secretmem: use refcount_t instead of atomic_t
On Fri, 2021-08-20 at 12:38 -0400, Jordy Zomer wrote:
> Hi There!
>
> Because this is a global counter, it appears to be exploitable:
> either we spawn enough processes to wrap the counter, or we raise
> the open file limit with ulimit or sysctl.
> Unless the kernel has a hard restriction on the number of potential
> file descriptors that I'm not aware of.
There's no direct global limit on file descriptors, no; however, there
is an indirect one: the number of processes per user, which now
defaults to around 65535, so even a fork bomb opening the maximum
number of fds won't get you a wrap.
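Back of the envelope (assuming the usual 1024 default soft limit on
open files per process): 65535 processes x 1024 fds is about 67
million increments, roughly 1/64 of the 2^32 you'd need to wrap a
32-bit counter.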
> In any case, it's probably a good idea to patch this to make it
> explicitly secure. If you discover a hard limit in the kernel for
> open file descriptors, please let me know. I'm genuinely interested
> :D!
I didn't disagree it might be a useful thing to update ... I just
didn't think it was currently exploitable.
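
That said, the property refcount_t buys is easy to show. A minimal
sketch of the difference (illustrative names, not the actual
mm/secretmem.c code):

	#include <linux/atomic.h>
	#include <linux/refcount.h>

	static atomic_t users = ATOMIC_INIT(0);		/* unchecked counter */
	static refcount_t users_rc = REFCOUNT_INIT(1);	/* checked counter */

	static void take_ref(void)
	{
		/* unchecked: 2^32 increments wrap this back to zero */
		atomic_inc(&users);

		/*
		 * checked: sticks at REFCOUNT_SATURATED and WARNs rather
		 * than wrapping, so it can never come back around to zero
		 */
		refcount_inc(&users_rc);
	}

(The sketch starts the refcount_t at 1 only because refcount_inc()
also complains about incrementing from zero; a real conversion has to
decide how to handle a counter that legitimately starts at 0.)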
James