Message-ID: <1d293d2fb0df99fdb0048825b4e39640840bfabb.camel@intel.com>
Date: Fri, 12 Oct 2018 14:49:28 -0700
From: Yu-cheng Yu <yu-cheng.yu@...el.com>
To: Andy Lutomirski <luto@...capital.net>, Jann Horn <jannh@...gle.com>
Cc: X86 ML <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, linux-doc@...r.kernel.org,
Linux-MM <linux-mm@...ck.org>,
linux-arch <linux-arch@...r.kernel.org>,
Linux API <linux-api@...r.kernel.org>,
Arnd Bergmann <arnd@...db.de>,
Balbir Singh <bsingharora@...il.com>,
Cyrill Gorcunov <gorcunov@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Eugene Syromiatnikov <esyr@...hat.com>,
Florian Weimer <fweimer@...hat.com>,
"H. J. Lu" <hjl.tools@...il.com>, Jonathan Corbet <corbet@....net>,
Kees Cook <keescook@...omium.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Nadav Amit <nadav.amit@...il.com>,
Oleg Nesterov <oleg@...hat.com>, Pavel Machek <pavel@....cz>,
Peter Zijlstra <peterz@...radead.org>,
Randy Dunlap <rdunlap@...radead.org>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
"Shanbhogue, Vedvyas" <vedvyas.shanbhogue@...el.com>,
Daniel Micay <danielmicay@...il.com>
Subject: Re: [PATCH v5 07/27] mm/mmap: Create a guard area between VMAs
On Thu, 2018-10-11 at 13:55 -0700, Andy Lutomirski wrote:
> On Thu, Oct 11, 2018 at 1:39 PM Jann Horn <jannh@...gle.com> wrote:
> >
> > On Thu, Oct 11, 2018 at 5:20 PM Yu-cheng Yu <yu-cheng.yu@...el.com> wrote:
> > > Create a guard area between VMAs to detect memory corruption.
> >
> > [...]
> > > +config VM_AREA_GUARD
> > > + bool "VM area guard"
> > > + default n
> > > + help
> > > + Create a guard area between VM areas so that access beyond
> > > + limit can be detected.
> > > +
> > > endmenu
> >
> > Sorry to bring this up so late, but Daniel Micay pointed out to me
> > that, given that VMA guards will raise the number of VMAs by
> > inhibiting vma_merge(), people are more likely to run into
> > /proc/sys/vm/max_map_count (which limits the number of VMAs to ~65k by
> > default, and can't easily be raised without risking an overflow of
> > page->_mapcount on systems with over ~800GiB of RAM, see
> > https://lore.kernel.org/lkml/20180208021112.GB14918@bombadil.infradead.org/
> > and replies) with this change.
> >
> > Playing with glibc's memory allocator, it looks like glibc will use
> > mmap() for 128KB allocations; so at 65530*128KB=8GB of memory usage in
> > 128KB chunks, an application could run out of VMAs.
>
> Ugh.
>
> Do we have a free VM flag so we could do VM_GUARD to force a guard
> page? (And to make sure that, when a new VMA is allocated, it won't
> be directly adjacent to a VM_GUARD VMA.)
Maybe something like the following?  These vm_start_gap()/vm_end_gap() helpers
are used by many architectures.  Do we want to put them in a separate series?
Comments?

Yu-cheng
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0416a7204be3..92b580542411 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -224,11 +224,13 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_HIGH_ARCH_BIT_2	34	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_BIT_3	35	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_BIT_4	36	/* bit only usable on 64-bit architectures */
+#define VM_HIGH_ARCH_BIT_5	37	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_0	BIT(VM_HIGH_ARCH_BIT_0)
 #define VM_HIGH_ARCH_1	BIT(VM_HIGH_ARCH_BIT_1)
 #define VM_HIGH_ARCH_2	BIT(VM_HIGH_ARCH_BIT_2)
 #define VM_HIGH_ARCH_3	BIT(VM_HIGH_ARCH_BIT_3)
 #define VM_HIGH_ARCH_4	BIT(VM_HIGH_ARCH_BIT_4)
+#define VM_HIGH_ARCH_5	BIT(VM_HIGH_ARCH_BIT_5)
 #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
 
 #ifdef CONFIG_ARCH_HAS_PKEYS
@@ -266,6 +268,12 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_MPX	VM_NONE
 #endif
 
+#ifdef CONFIG_ARCH_USES_HIGH_VMA_FLAGS
+#define VM_GUARD	VM_HIGH_ARCH_5
+#else
+#define VM_GUARD	VM_NONE
+#endif
+
 #ifndef VM_GROWSUP
 # define VM_GROWSUP	VM_NONE
 #endif
@@ -2417,24 +2425,34 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
-static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_start_gap(struct vm_area_struct *vma, vm_flags_t flags)
 {
 	unsigned long vm_start = vma->vm_start;
+	unsigned long gap = 0;
+
+	if (vma->vm_flags & VM_GROWSDOWN)
+		gap = stack_guard_gap;
+	else if ((vma->vm_flags & VM_GUARD) || (flags & VM_GUARD))
+		gap = PAGE_SIZE;
+
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
 	return vm_start;
 }
 
-static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_end_gap(struct vm_area_struct *vma, vm_flags_t flags)
 {
 	unsigned long vm_end = vma->vm_end;
+	unsigned long gap = 0;
+
+	if (vma->vm_flags & VM_GROWSUP)
+		gap = stack_guard_gap;
+	else if ((vma->vm_flags & VM_GUARD) || (flags & VM_GUARD))
+		gap = PAGE_SIZE;
+
+	vm_end += gap;
+	if (vm_end < vma->vm_end)
+		vm_end = -PAGE_SIZE;
 
-	if (vma->vm_flags & VM_GROWSUP) {
-		vm_end += stack_guard_gap;
-		if (vm_end < vma->vm_end)
-			vm_end = -PAGE_SIZE;
-	}
 	return vm_end;
 }