Message-ID: <4EA82926.8050502@gmail.com>
Date: Wed, 26 Oct 2011 11:37:10 -0400
From: KOSAKI Motohiro <kosaki.motohiro@...il.com>
To: dledford@...hat.com
CC: akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
linux-kernel@...r.kernel.org, joe.korty@...r.com, amwang@...hat.com
Subject: [PATCH 5/4] ipc/mqueue: revert bump up DFLT_*MAX
Mqueue limits, like the other ipc limits, are sensitive parameters:
an unprivileged user can consume kernel memory through them, so
raising them too aggressively creates a security issue. For example,
the current settings allow a malicious unprivileged user to pin
256GB (= 256 queues * 1024 messages * 1MB per message) of kernel
memory, which is more than enough to make the system unresponsive.
Don't do that.
Instead, every admin should adjust the knobs for their own systems.
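For admins who genuinely need larger queues, the limits can be raised
explicitly at runtime; a sketch (the values here are illustrative, not
recommendations, and the fs.mqueue sysctls assume CONFIG_POSIX_MQUEUE):

```shell
# Raise the mqueue limits for this ipc namespace (requires root).
sysctl -w fs.mqueue.queues_max=256
sysctl -w fs.mqueue.msg_max=1024
sysctl -w fs.mqueue.msgsize_max=1048576

# Equivalently via /proc:
echo 1024 > /proc/sys/fs/mqueue/msg_max
```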
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...il.com>
Cc: Doug Ledford <dledford@...hat.com>
Cc: Amerigo Wang <amwang@...hat.com>
Cc: Serge E. Hallyn <serue@...ibm.com>
Cc: Jiri Slaby <jslaby@...e.cz>
Cc: Joe Korty <joe.korty@...r.com>
---
include/linux/ipc_namespace.h | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/ipc_namespace.h b/include/linux/ipc_namespace.h
index e2bac00..2d7c5e0 100644
--- a/include/linux/ipc_namespace.h
+++ b/include/linux/ipc_namespace.h
@@ -118,12 +118,12 @@ extern int mq_init_ns(struct ipc_namespace *ns);
#define DFLT_QUEUESMAX 256
#define HARD_QUEUESMAX 1024
#define MIN_MSGMAX 1
-#define DFLT_MSG 64U
-#define DFLT_MSGMAX 1024
+#define DFLT_MSG 10U
+#define DFLT_MSGMAX 10
#define HARD_MSGMAX 65536
#define MIN_MSGSIZEMAX 128
#define DFLT_MSGSIZE 8192U
-#define DFLT_MSGSIZEMAX (1024*1024)
+#define DFLT_MSGSIZEMAX 8192
#define HARD_MSGSIZEMAX (16*1024*1024)
#else
static inline int mq_init_ns(struct ipc_namespace *ns) { return 0; }
--
1.7.5.2