Date: Mon, 29 Aug 2022 23:18:30 +0800
From: Chao Peng <chao.p.peng@...ux.intel.com>
To: Fuad Tabba <tabba@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-api@...r.kernel.org,
	linux-doc@...r.kernel.org, qemu-devel@...gnu.org,
	linux-kselftest@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
	Jonathan Corbet <corbet@....net>, Sean Christopherson <seanjc@...gle.com>,
	Vitaly Kuznetsov <vkuznets@...hat.com>, Wanpeng Li <wanpengli@...cent.com>,
	Jim Mattson <jmattson@...gle.com>, Joerg Roedel <joro@...tes.org>,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Borislav Petkov <bp@...en8.de>, x86@...nel.org,
	"H . Peter Anvin" <hpa@...or.com>, Hugh Dickins <hughd@...gle.com>,
	Jeff Layton <jlayton@...nel.org>, "J . Bruce Fields" <bfields@...ldses.org>,
	Andrew Morton <akpm@...ux-foundation.org>, Shuah Khan <shuah@...nel.org>,
	Mike Rapoport <rppt@...nel.org>, Steven Price <steven.price@....com>,
	"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
	Vlastimil Babka <vbabka@...e.cz>, Vishal Annapurve <vannapurve@...gle.com>,
	Yu Zhang <yu.c.zhang@...ux.intel.com>,
	"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
	luto@...nel.org, jun.nakajima@...el.com, dave.hansen@...el.com,
	ak@...ux.intel.com, david@...hat.com, aarcange@...hat.com,
	ddutile@...hat.com, dhildenb@...hat.com, Quentin Perret <qperret@...gle.com>,
	Michael Roth <michael.roth@....com>, mhocko@...e.com,
	Muchun Song <songmuchun@...edance.com>
Subject: Re: [PATCH v7 01/14] mm: Add F_SEAL_AUTO_ALLOCATE seal to memfd

On Fri, Aug 26, 2022 at 04:19:32PM +0100, Fuad Tabba wrote:
> Hi Chao,
>
> On Wed, Jul 6, 2022 at 9:25 AM Chao Peng <chao.p.peng@...ux.intel.com> wrote:
> >
> > Normally, a write to unallocated space of a file or to a hole of a sparse
> > file automatically causes space allocation; for a memfd, this equals
> > memory allocation. This new seal prevents such automatic allocation,
> > whether it comes from a direct write() or a write to a previously
> > mmap-ed area. The seal does not prevent fallocate(), so an explicit
> > fallocate() can still cause allocation and can be used to reserve
> > memory.
> >
> > This is used to prevent unintentional allocation from userspace on a
> > stray or careless write; any intentional allocation should use an
> > explicit fallocate(). One of the main use cases is to avoid double
> > memory allocation for confidential computing, where we use two memfds
> > to back guest memory and at any given point only one memfd is alive,
> > and we want to prevent memory allocation for the other memfd, which
> > may have been mmap-ed previously. More discussion can be found at:
> >
> >   https://lkml.org/lkml/2022/6/14/1255
> >
> > Suggested-by: Sean Christopherson <seanjc@...gle.com>
> > Signed-off-by: Chao Peng <chao.p.peng@...ux.intel.com>
> > ---
> >  include/uapi/linux/fcntl.h |  1 +
> >  mm/memfd.c                 |  3 ++-
> >  mm/shmem.c                 | 16 ++++++++++++++--
> >  3 files changed, 17 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/uapi/linux/fcntl.h b/include/uapi/linux/fcntl.h
> > index 2f86b2ad6d7e..98bdabc8e309 100644
> > --- a/include/uapi/linux/fcntl.h
> > +++ b/include/uapi/linux/fcntl.h
> > @@ -43,6 +43,7 @@
> >  #define F_SEAL_GROW	0x0004	/* prevent file from growing */
> >  #define F_SEAL_WRITE	0x0008	/* prevent writes */
> >  #define F_SEAL_FUTURE_WRITE	0x0010  /* prevent future writes while mapped */
> > +#define F_SEAL_AUTO_ALLOCATE	0x0020  /* prevent allocation for writes */
>
> I think this should also be added to tools/include/uapi/linux/fcntl.h

Yes, thanks.

Chao

> Cheers,
> /fuad
>
> >
> >  /* (1U << 31) is reserved for signed error codes */
> >
> >  /*
> > diff --git a/mm/memfd.c b/mm/memfd.c
> > index 08f5f8304746..2afd898798e4 100644
> > --- a/mm/memfd.c
> > +++ b/mm/memfd.c
> > @@ -150,7 +150,8 @@ static unsigned int *memfd_file_seals_ptr(struct file *file)
> >  		     F_SEAL_SHRINK | \
> >  		     F_SEAL_GROW | \
> >  		     F_SEAL_WRITE | \
> > -		     F_SEAL_FUTURE_WRITE)
> > +		     F_SEAL_FUTURE_WRITE | \
> > +		     F_SEAL_AUTO_ALLOCATE)
> >
> >  static int memfd_add_seals(struct file *file, unsigned int seals)
> >  {
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index a6f565308133..6c8aef15a17d 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2051,6 +2051,8 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
> >  	struct vm_area_struct *vma = vmf->vma;
> >  	struct inode *inode = file_inode(vma->vm_file);
> >  	gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
> > +	struct shmem_inode_info *info = SHMEM_I(inode);
> > +	enum sgp_type sgp;
> >  	int err;
> >  	vm_fault_t ret = VM_FAULT_LOCKED;
> >
> > @@ -2113,7 +2115,12 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
> >  		spin_unlock(&inode->i_lock);
> >  	}
> >
> > -	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, SGP_CACHE,
> > +	if (unlikely(info->seals & F_SEAL_AUTO_ALLOCATE))
> > +		sgp = SGP_NOALLOC;
> > +	else
> > +		sgp = SGP_CACHE;
> > +
> > +	err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
> >  				  gfp, vma, vmf, &ret);
> >  	if (err)
> >  		return vmf_error(err);
> > @@ -2459,6 +2466,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> >  	struct inode *inode = mapping->host;
> >  	struct shmem_inode_info *info = SHMEM_I(inode);
> >  	pgoff_t index = pos >> PAGE_SHIFT;
> > +	enum sgp_type sgp;
> >  	int ret = 0;
> >
> >  	/* i_rwsem is held by caller */
> > @@ -2470,7 +2478,11 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> >  		return -EPERM;
> >  	}
> >
> > -	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> > +	if (unlikely(info->seals & F_SEAL_AUTO_ALLOCATE))
> > +		sgp = SGP_NOALLOC;
> > +	else
> > +		sgp = SGP_WRITE;
> > +	ret = shmem_getpage(inode, index, pagep, sgp);
> >
> >  	if (ret)
> >  		return ret;
> > --
> > 2.25.1
> >