Message-ID: <20111130173739.GI14515@moon>
Date: Wed, 30 Nov 2011 21:37:39 +0400
From: Cyrill Gorcunov <gorcunov@...il.com>
To: Kees Cook <keescook@...omium.org>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>, Andrew Vagin <avagin@...nvz.org>,
Serge Hallyn <serge.hallyn@...onical.com>,
Pavel Emelyanov <xemul@...allels.com>,
Vasiliy Kulikov <segoon@...nwall.com>
Subject: Re: [rfc 3/3] prctl: Add PR_SET_MM codes to tune up mm_struct entries
On Tue, Nov 29, 2011 at 12:40:57PM -0800, Kees Cook wrote:
> >
> > On the other hand, these fields are set up by the ELF handler code,
> > which does mmap these areas, so we have to check that the particular
> > member belongs to an existing VMA and never crosses the user-space
> > boundary; together with the root-only approach, wouldn't that be
> > enough? I'm surely missing something, that is why I'm asking.
>
> Right, if you verify that the addresses are actually inside valid
> userspace vmas, that is likely to be right, though there are probably
> other things I haven't thought of. The trouble is avoiding vdso, stack
> guard page, vsyscall, and anything else that isn't meant for the mm to
> have direct access to.
>
Hi Kees,

What about this one? Note that these mm_struct members don't affect the
kernel much (at least as far as I can see, except maybe the brk, start_brk
and start_stack values), so I've added some sanity checks here; I hope they
fit. Still, the main protection remains root-only access. The kernel itself
uses the vm_area_struct vm_start/vm_end members for its overflow tests
internally, so I think even passing crazy data here won't crash the kernel
itself.
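
Just to illustrate a direction for the vdso/vsyscall concern you raised --
this is an untested sketch only, not part of the patch below, and it assumes
in_gate_area() and arch_vma_name() (both declared via <linux/mm.h>, which
kernel/sys.c already pulls in) are enough to identify the vsyscall page and
the vdso -- an additional filter could look something like:

/*
 * Illustrative only: reject special mappings these fields should
 * never point into. Assumes the vdso is reported as "[vdso]" by
 * arch_vma_name() and that in_gate_area() covers the vsyscall page.
 */
static bool vma_is_special_mapping(struct mm_struct *mm,
				   struct vm_area_struct *vma,
				   unsigned long addr)
{
	const char *name = arch_vma_name(vma);

	if (in_gate_area(mm, addr))		/* vsyscall page */
		return true;
	if (name && !strcmp(name, "[vdso]"))	/* vdso mapping */
		return true;

	return false;
}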
What do you think?
Cyrill
---
prctl: Add PR_SET_MM codes to tune up mm_struct entries v2

A few members of mm_struct, such as start_code, end_code,
start_data, end_data, start_stack, start_brk and brk, are
provided by the kernel via /proc/$pid/stat, and we use them
at checkpoint time.

At restore time we need a mechanism to set those values back,
and for this sake the PR_SET_MM prctl code is introduced.

Note that, being a dangerous operation, this interface is
allowed for CAP_SYS_ADMIN only.
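
For example, a restore tool would replay the saved values roughly as in
the sketch below (illustrative user-space code; the PR_SET_MM_* constants
mirror this patch rather than an installed <linux/prctl.h>, and the
saved_* arguments are assumed to have been read from /proc/$pid/stat at
checkpoint time):

#include <sys/prctl.h>
#include <stdio.h>

#ifndef PR_SET_MM
# define PR_SET_MM		35
# define PR_SET_MM_START_CODE	1
# define PR_SET_MM_END_CODE	2
# define PR_SET_MM_START_BRK	6
# define PR_SET_MM_BRK		7
#endif

/* saved_* come from the checkpoint image; the caller needs CAP_SYS_ADMIN. */
static int restore_mm_fields(unsigned long saved_start_code,
			     unsigned long saved_end_code,
			     unsigned long saved_start_brk,
			     unsigned long saved_brk)
{
	if (prctl(PR_SET_MM, PR_SET_MM_START_CODE, saved_start_code, 0, 0) ||
	    prctl(PR_SET_MM, PR_SET_MM_END_CODE,   saved_end_code,   0, 0) ||
	    prctl(PR_SET_MM, PR_SET_MM_START_BRK,  saved_start_brk,  0, 0) ||
	    prctl(PR_SET_MM, PR_SET_MM_BRK,        saved_brk,        0, 0)) {
		perror("prctl(PR_SET_MM)");
		return -1;
	}
	return 0;
}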
v2:
- Add a check for the vma start address; testing the vma end
  address alone is not enough. Pointed out by Kees Cook.
- Add some sanity tests for assigned addresses.
Signed-off-by: Cyrill Gorcunov <gorcunov@...nvz.org>
CC: Kees Cook <keescook@...omium.org>
---
include/linux/prctl.h | 12 +++++
kernel/sys.c | 118 ++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 130 insertions(+)
Index: linux-2.6.git/include/linux/prctl.h
===================================================================
--- linux-2.6.git.orig/include/linux/prctl.h
+++ linux-2.6.git/include/linux/prctl.h
@@ -102,4 +102,16 @@
 
 #define PR_MCE_KILL_GET	34
 
+/*
+ * Tune up process memory map specifics.
+ */
+#define PR_SET_MM		35
+# define PR_SET_MM_START_CODE	1
+# define PR_SET_MM_END_CODE	2
+# define PR_SET_MM_START_DATA	3
+# define PR_SET_MM_END_DATA	4
+# define PR_SET_MM_START_STACK	5
+# define PR_SET_MM_START_BRK	6
+# define PR_SET_MM_BRK		7
+
 #endif /* _LINUX_PRCTL_H */
Index: linux-2.6.git/kernel/sys.c
===================================================================
--- linux-2.6.git.orig/kernel/sys.c
+++ linux-2.6.git/kernel/sys.c
@@ -1692,6 +1692,118 @@ SYSCALL_DEFINE1(umask, int, mask)
 	return mask;
 }
 
+static int prctl_set_mm(int opt, unsigned long addr)
+{
+	unsigned long rlim = rlimit(RLIMIT_DATA);
+	unsigned long vm_req_flags;
+	unsigned long vm_bad_flags;
+	struct vm_area_struct *vma;
+	struct mm_struct *mm;
+	int error = 0;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	if (addr >= TASK_SIZE)
+		return -EINVAL;
+
+	mm = get_task_mm(current);
+	if (!mm)
+		return -ENOENT;
+
+	down_read(&mm->mmap_sem);
+	vma = find_vma(mm, addr);
+
+	error = -EINVAL;
+	if (opt != PR_SET_MM_START_BRK &&
+	    opt != PR_SET_MM_BRK) {
+		/* It must be an existing VMA */
+		if (!vma || vma->vm_start > addr)
+			goto out;
+	}
+
+	switch (opt) {
+	case PR_SET_MM_START_CODE:
+	case PR_SET_MM_END_CODE:
+
+		vm_req_flags = VM_READ | VM_EXEC;
+		vm_bad_flags = VM_WRITE | VM_MAYSHARE;
+
+		if ((vma->vm_flags & vm_req_flags) != vm_req_flags ||
+		    (vma->vm_flags & vm_bad_flags))
+			goto out;
+
+		if (opt == PR_SET_MM_START_CODE)
+			current->mm->start_code = addr;
+		else
+			current->mm->end_code = addr;
+		break;
+
+	case PR_SET_MM_START_DATA:
+	case PR_SET_MM_END_DATA:
+
+		vm_req_flags = VM_READ | VM_WRITE;
+		vm_bad_flags = VM_EXEC | VM_MAYSHARE;
+
+		if ((vma->vm_flags & vm_req_flags) != vm_req_flags ||
+		    (vma->vm_flags & vm_bad_flags))
+			goto out;
+
+		if (opt == PR_SET_MM_START_DATA)
+			current->mm->start_data = addr;
+		else
+			current->mm->end_data = addr;
+		break;
+
+	case PR_SET_MM_START_STACK:
+
+#ifdef CONFIG_STACK_GROWSUP
+		vm_req_flags = VM_READ | VM_WRITE | VM_GROWSUP;
+#else
+		vm_req_flags = VM_READ | VM_WRITE | VM_GROWSDOWN;
+#endif
+		if ((vma->vm_flags & vm_req_flags) != vm_req_flags)
+			goto out;
+
+		current->mm->start_stack = addr;
+		break;
+
+	case PR_SET_MM_START_BRK:
+		if (addr <= mm->end_data)
+			goto out;
+
+		if (rlim < RLIM_INFINITY &&
+		    (mm->brk - addr) + (mm->end_data - mm->start_data) > rlim)
+			goto out;
+
+		current->mm->start_brk = addr;
+		break;
+
+	case PR_SET_MM_BRK:
+		if (addr <= mm->end_data)
+			goto out;
+
+		if (rlim < RLIM_INFINITY &&
+		    (addr - mm->start_brk) + (mm->end_data - mm->start_data) > rlim)
+			goto out;
+
+		current->mm->brk = addr;
+		break;
+
+	default:
+		error = -EINVAL;
+		goto out;
+	}
+
+	error = 0;
+
+out:
+	up_read(&mm->mmap_sem);
+	mmput(mm);
+
+	return error;
+}
+
 SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 		unsigned long, arg4, unsigned long, arg5)
 {
@@ -1841,6 +1953,12 @@ SYSCALL_DEFINE5(prctl, int, option, unsi
 		else
 			error = PR_MCE_KILL_DEFAULT;
 		break;
+	case PR_SET_MM: {
+		if (arg4 | arg5)
+			return -EINVAL;
+		error = prctl_set_mm(arg2, arg3);
+		break;
+	}
 	default:
 		error = -EINVAL;
 		break;
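
For completeness, here is an untested user-space snippet that pokes the new
error paths (again, the PR_SET_MM constants mirror this patch rather than an
installed <linux/prctl.h>):

#include <sys/prctl.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

#ifndef PR_SET_MM
# define PR_SET_MM	35
# define PR_SET_MM_BRK	7
#endif

int main(void)
{
	unsigned long cur_brk = (unsigned long)sbrk(0);

	/* Non-zero arg4/arg5 must be rejected with -EINVAL. */
	if (prctl(PR_SET_MM, PR_SET_MM_BRK, cur_brk, 1, 1) == 0 || errno != EINVAL)
		fprintf(stderr, "arg4/arg5 check: expected EINVAL, errno=%d\n", errno);

	/* Without CAP_SYS_ADMIN the call must fail with -EPERM. */
	if (prctl(PR_SET_MM, PR_SET_MM_BRK, cur_brk, 0, 0) == 0 || errno != EPERM)
		fprintf(stderr, "capability check: errno=%d (running privileged?)\n", errno);

	return 0;
}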
--