Message-ID: <DS0PR11MB6373D347AFCDD2A860BC9DEEDCD39@DS0PR11MB6373.namprd11.prod.outlook.com>
Date: Mon, 30 Jan 2023 06:04:33 +0000
From: "Wang, Wei W" <wei.w.wang@...el.com>
To: Ackerley Tng <ackerleytng@...gle.com>,
Chao Peng <chao.p.peng@...ux.intel.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"qemu-devel@...gnu.org" <qemu-devel@...gnu.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"corbet@....net" <corbet@....net>,
"Christopherson,, Sean" <seanjc@...gle.com>,
"vkuznets@...hat.com" <vkuznets@...hat.com>,
"wanpengli@...cent.com" <wanpengli@...cent.com>,
"jmattson@...gle.com" <jmattson@...gle.com>,
"joro@...tes.org" <joro@...tes.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "arnd@...db.de" <arnd@...db.de>,
"naoya.horiguchi@....com" <naoya.horiguchi@....com>,
"linmiaohe@...wei.com" <linmiaohe@...wei.com>,
"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
"hughd@...gle.com" <hughd@...gle.com>,
"jlayton@...nel.org" <jlayton@...nel.org>,
"bfields@...ldses.org" <bfields@...ldses.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"shuah@...nel.org" <shuah@...nel.org>,
"rppt@...nel.org" <rppt@...nel.org>,
"steven.price@....com" <steven.price@....com>,
"mail@...iej.szmigiero.name" <mail@...iej.szmigiero.name>,
"vbabka@...e.cz" <vbabka@...e.cz>,
"Annapurve, Vishal" <vannapurve@...gle.com>,
"yu.c.zhang@...ux.intel.com" <yu.c.zhang@...ux.intel.com>,
"chao.p.peng@...ux.intel.com" <chao.p.peng@...ux.intel.com>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"Lutomirski, Andy" <luto@...nel.org>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
"Hansen, Dave" <dave.hansen@...el.com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"david@...hat.com" <david@...hat.com>,
"aarcange@...hat.com" <aarcange@...hat.com>,
"ddutile@...hat.com" <ddutile@...hat.com>,
"dhildenb@...hat.com" <dhildenb@...hat.com>,
"qperret@...gle.com" <qperret@...gle.com>,
"tabba@...gle.com" <tabba@...gle.com>,
"michael.roth@....com" <michael.roth@....com>,
"Hocko, Michal" <mhocko@...e.com>
Subject: RE: [PATCH v10 1/9] mm: Introduce memfd_restricted system call to
create restricted user memory
On Monday, January 30, 2023 1:26 PM, Ackerley Tng wrote:
>
> > +static int restrictedmem_getattr(struct user_namespace *mnt_userns,
> > +				     const struct path *path, struct kstat *stat,
> > +				     u32 request_mask, unsigned int query_flags)
> > +{
> > +	struct inode *inode = d_inode(path->dentry);
> > +	struct restrictedmem_data *data = inode->i_mapping->private_data;
> > +	struct file *memfd = data->memfd;
> > +
> > +	return memfd->f_inode->i_op->getattr(mnt_userns, path, stat,
> > +					     request_mask, query_flags);
>
> Instead of calling shmem's getattr() with path, we should be using the
> memfd's path.
>
> Otherwise, shmem's getattr() will use restrictedmem's inode instead of
> shmem's inode. The private fields will be of the wrong type, and the host
> will crash when shmem_is_huge() dereferences SHMEM_SB(inode->i_sb)->huge,
> since inode->i_sb->s_fs_info is NULL for restrictedmem's superblock.
>
> Here's the patch:
>
> diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c
> index 37191cd9eed1..06b72d593bd8 100644
> --- a/mm/restrictedmem.c
> +++ b/mm/restrictedmem.c
> @@ -84,7 +84,7 @@ static int restrictedmem_getattr(struct user_namespace *mnt_userns,
>  	struct restrictedmem *rm = inode->i_mapping->private_data;
>  	struct file *memfd = rm->memfd;
>  
> -	return memfd->f_inode->i_op->getattr(mnt_userns, path, stat,
> +	return memfd->f_inode->i_op->getattr(mnt_userns, &memfd->f_path, stat,
>  					     request_mask, query_flags);
>  }
>
Nice catch. I also encountered this issue during my work.
The fix can be further hardened on the shmem side:
diff --git a/mm/shmem.c b/mm/shmem.c
index c301487be5fb..d850c0190359 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -472,8 +472,9 @@ bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode,
 		   pgoff_t index, bool shmem_huge_force)
 {
 	loff_t i_size;
+	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
 
-	if (!S_ISREG(inode->i_mode))
+	if (!sbinfo || !S_ISREG(inode->i_mode))
 		return false;
 	if (vma && ((vma->vm_flags & VM_NOHUGEPAGE) ||
 	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
@@ -485,7 +486,7 @@ bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode,
 	if (shmem_huge == SHMEM_HUGE_DENY)
 		return false;
 
-	switch (SHMEM_SB(inode->i_sb)->huge) {
+	switch (sbinfo->huge) {
 	case SHMEM_HUGE_ALWAYS:
 		return true;
 	case SHMEM_HUGE_WITHIN_SIZE: