Message-ID: <20180208213743.GC3424@bombadil.infradead.org>
Date:   Thu, 8 Feb 2018 13:37:43 -0800
From:   Matthew Wilcox <willy@...radead.org>
To:     Daniel Micay <danielmicay@...il.com>
Cc:     Jann Horn <jannh@...gle.com>, linux-mm@...ck.org,
        Kernel Hardening <kernel-hardening@...ts.openwall.com>,
        kernel list <linux-kernel@...r.kernel.org>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: [RFC] Limit mappings to ten per page per process

On Thu, Feb 08, 2018 at 12:21:00PM -0800, Matthew Wilcox wrote:
> Now that I think about it, though, perhaps the simplest solution is not
> to worry about checking whether _mapcount has saturated, and instead when
> adding a new mmap, check whether this task already has it mapped 10 times.
> If so, refuse the mapping.

That turns out to be quite easy.  Comments on this approach?

diff --git a/mm/mmap.c b/mm/mmap.c
index 9efdc021ad22..fd64ff662117 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1615,6 +1615,34 @@ static inline int accountable_mapping(struct file *file, vm_flags_t vm_flags)
 	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
 }
 
+/**
+ * mmap_max_overlaps - Check whether the process already has too many mappings of this range.
+ * @mm: The memory map for the process creating the mapping.
+ * @file: The file the mapping is coming from.
+ * @pgoff: The start of the mapping in the file.
+ * @count: The number of pages to map.
+ *
+ * Return: %true if this process already has too many overlapping mappings
+ *         of this region of the file.
+ */
+bool mmap_max_overlaps(struct mm_struct *mm, struct file *file,
+			pgoff_t pgoff, pgoff_t count)
+{
+	unsigned int overlaps = 0;
+	struct vm_area_struct *vma;
+
+	if (!file)
+		return false;
+
+	vma_interval_tree_foreach(vma, &file->f_mapping->i_mmap,
+				  pgoff, pgoff + count) {
+		if (vma->vm_mm == mm)
+			overlaps++;
+	}
+
+	return overlaps > 9;
+}
+
 unsigned long mmap_region(struct file *file, unsigned long addr,
 		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
 		struct list_head *uf)
@@ -1640,6 +1668,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 			return -ENOMEM;
 	}
 
+	if (mmap_max_overlaps(mm, file, pgoff, len >> PAGE_SHIFT))
+		return -ENOMEM;
+
 	/* Clear old maps */
 	while (find_vma_links(mm, addr, addr + len, &prev, &rb_link,
 			      &rb_parent)) {
diff --git a/mm/mremap.c b/mm/mremap.c
index 049470aa1e3e..27cf5cf9fc0f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -430,6 +430,10 @@ static struct vm_area_struct *vma_to_resize(unsigned long addr,
 				(new_len - old_len) >> PAGE_SHIFT))
 		return ERR_PTR(-ENOMEM);
 
+	if (mmap_max_overlaps(mm, vma->vm_file, pgoff,
+				(new_len - old_len) >> PAGE_SHIFT))
+		return ERR_PTR(-ENOMEM);
+
 	if (vma->vm_flags & VM_ACCOUNT) {
 		unsigned long charged = (new_len - old_len) >> PAGE_SHIFT;
 		if (security_vm_enough_memory_mm(mm, charged))
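
For illustration, here's a rough user-space sketch of what this should look
like from a process's point of view.  The file path, page size and the exact
failure point are my assumptions based on the code above, not something the
patch spells out; the key expectation is that the eleventh overlapping
MAP_SHARED mapping of the same page by one process comes back -ENOMEM:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Any file-backed mapping will do; anonymous mappings are not
	 * affected because mmap_max_overlaps() ignores a NULL file. */
	int fd = open("/tmp/overlap-test", O_RDWR | O_CREAT, 0600);
	if (fd < 0 || ftruncate(fd, 4096) < 0) {
		perror("setup");
		return 1;
	}

	for (int i = 1; i <= 12; i++) {
		void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			/* Expected once ten mappings of this page already
			 * exist in this process. */
			printf("mapping %d refused: ", i);
			perror("mmap");
			break;
		}
		printf("mapping %d at %p\n", i, p);
	}
	return 0;
}

With the patch applied, the first ten mmap() calls should succeed and the
eleventh should fail with ENOMEM; without it, all twelve succeed.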
