Message-ID: <20200216064650.GB22092@hump.haifa.ibm.com>
Date: Sun, 16 Feb 2020 08:46:50 +0200
From: Mike Rapoport <rppt@...nel.org>
To: Jonathan Corbet <corbet@....net>
Cc: linux-kernel@...r.kernel.org, Alan Cox <alan@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Christopher Lameter <cl@...ux.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
James Bottomley <jejb@...ux.ibm.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Matthew Wilcox <willy@...radead.org>,
Peter Zijlstra <peterz@...radead.org>,
"Reshetova, Elena" <elena.reshetova@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Tycho Andersen <tycho@...ho.ws>, linux-api@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [RFC PATCH] mm: extend memfd with ability to create "secret"
memory areas
On Wed, Feb 12, 2020 at 02:10:29PM -0700, Jonathan Corbet wrote:
> On Thu, 30 Jan 2020 18:23:41 +0200
> Mike Rapoport <rppt@...nel.org> wrote:
>
> > Hi,
> >
> > This is essentially a resend of my attempt to implement "secret" mappings
> > using a file descriptor [1].
>
> So one little thing I was curious about as I read through the patch...
>
> > +static int secretmem_check_limits(struct vm_fault *vmf)
> > +{
> > + struct secretmem_state *state = vmf->vma->vm_file->private_data;
> > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > + unsigned long limit;
> > +
> > + if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> > + return -EINVAL;
> > +
> > + limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> > + if (state->nr_pages + 1 >= limit)
> > + return -EPERM;
> > +
> > + return 0;
> > +}
>
> If I'm not mistaken, this means each memfd can be RLIMIT_MEMLOCK in length,
> with no global limit on the number of locked pages. What's keeping me from
> creating 1000 of these things and locking down lots of RAM?
Indeed, with this implementation it's possible to lock down
RLIMIT_MEMLOCK * RLIMIT_NOFILE of RAM; thanks for catching this.
I'll surely update the resource limiting once we've settled on the API
selection :)
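For the record, one way to close that hole would be to charge each
secretmem page against a single system-wide counter in addition to the
per-descriptor check. The snippet below is just a rough userspace model
of that accounting, not the eventual kernel code; the names
(secretmem_pages, secretmem_pages_max, the charge/uncharge helpers) are
made up for illustration, and the real limit would presumably come from
a sysctl or a cgroup controller rather than a hard-coded constant:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical system-wide cap on secretmem pages.
 * In the kernel this would likely be a sysctl knob. */
static atomic_ulong secretmem_pages;             /* pages charged so far */
static unsigned long secretmem_pages_max = 1024; /* illustrative limit   */

/*
 * Try to charge one page against the global limit.  Mirrors the
 * per-state check in secretmem_check_limits(), but across all
 * secretmem file descriptors, so N descriptors can no longer
 * multiply the limit.
 */
static bool secretmem_charge_page(void)
{
	unsigned long old = atomic_load(&secretmem_pages);

	do {
		if (old + 1 > secretmem_pages_max)
			return false;	/* would exceed the global cap */
	} while (!atomic_compare_exchange_weak(&secretmem_pages,
					       &old, old + 1));

	return true;
}

/* Called when a page is released back (e.g. on munmap/close). */
static void secretmem_uncharge_page(void)
{
	atomic_fetch_sub(&secretmem_pages, 1);
}
```

The compare-and-exchange loop keeps the check-and-increment atomic, so
concurrent faults on different descriptors can't race past the cap the
way independent per-state counters can.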
> Thanks,
>
> jon
>
--
Sincerely yours,
Mike.