Message-Id: <1551379645-819-4-git-send-email-longman@redhat.com>
Date:   Thu, 28 Feb 2019 13:47:25 -0500
From:   Waiman Long <longman@...hat.com>
To:     "Luis R. Rodriguez" <mcgrof@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jonathan Corbet <corbet@....net>
Cc:     linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-doc@...r.kernel.org, Al Viro <viro@...iv.linux.org.uk>,
        Matthew Wilcox <willy@...radead.org>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Takashi Iwai <tiwai@...e.de>, Davidlohr Bueso <dbueso@...e.de>,
        Manfred Spraul <manfred@...orfullife.com>,
        Waiman Long <longman@...hat.com>
Subject: [PATCH v12 3/3] ipc: Do cyclic id allocation with ipcmni_extend mode

In ipcmni_extend mode, the sequence number space is only 7 bits, so the
chance of id reuse is relatively high compared with the non-extended
mode.
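
For reference, the id is built from the sequence number and the idr
index roughly as follows (an illustrative helper with a made-up name,
build_id(), using the shift values implied above; not a verbatim copy
of the kernel's code):

  /*
   * id = (seq << seq_shift) + idx
   *
   * default:        seq_shift = 15 -> 15-bit idx, 16-bit seq
   * ipcmni_extend:  seq_shift = 24 -> 24-bit idx,  7-bit seq
   *
   * (ids are positive ints, so 31 usable bits in total)
   */
  static inline int build_id(int seq, int idx, int seq_shift)
  {
          return (seq << seq_shift) + idx;
  }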

To alleviate this id reuse problem, ids are allocated cyclically in
ipcmni_extend mode, cycling through the entire 24-bit id space before
wrapping around. This may use more memory, both in terms of the number
of xa_nodes allocated and potentially in the number of cachelines
touched, as the xa_nodes may be spread more sparsely in this case.

There is probably a slight memory and performance cost in doing cyclic
id allocation. For applications that really need more than 32k unique IPC
identifiers, this is a small price to pay to avoid the id reuse problem.
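
To see why cyclic allocation delays reuse, the two search policies can
be contrasted directly. Below is a stand-alone user-space sketch with
made-up helper names (alloc_lowest(), alloc_cyclic()); the kernel
itself uses idr_alloc() and idr_alloc_cyclic() as in the ipc/util.c
hunk below:

  #include <stdbool.h>
  #include <stdio.h>

  #define NSLOTS 8                /* stand-in for ipc_mni */

  static bool used[NSLOTS];
  static int next;                /* where the cyclic policy resumes */

  /* lowest-free search, like idr_alloc(..., 0, 0, ...) */
  static int alloc_lowest(void)
  {
          for (int i = 0; i < NSLOTS; i++) {
                  if (!used[i]) {
                          used[i] = true;
                          return i;
                  }
          }
          return -1;
  }

  /* resume-after-last search, like idr_alloc_cyclic(..., 0, NSLOTS, ...) */
  static int alloc_cyclic(void)
  {
          for (int n = 0; n < NSLOTS; n++) {
                  int i = (next + n) % NSLOTS;

                  if (!used[i]) {
                          used[i] = true;
                          next = i + 1;
                          return i;
                  }
          }
          return -1;
  }

  int main(void)
  {
          int id = alloc_cyclic();        /* idx 0 */

          used[id] = false;               /* free it */
          printf("lowest-free reuses idx %d\n", alloc_lowest()); /* 0 */

          used[0] = false;                /* free it again */
          printf("cyclic moves on to idx %d\n", alloc_cyclic()); /* 1 */
          return 0;
  }

A freed idx is therefore not handed out again by the cyclic policy
until the allocator has walked through the whole [0, ipc_mni) range,
which is what makes the 7-bit seq wrap far less likely to produce a
duplicate id.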

As a result, the chance of id reuse should actually be even smaller in
ipcmni_extend mode than in the default mode. Users who worry about id
reuse can therefore turn on ipcmni_extend mode even if they don't need
more than 32k IPC identifiers.

Signed-off-by: Waiman Long <longman@...hat.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 5 ++++-
 ipc/ipc_sysctl.c                                | 2 ++
 ipc/util.c                                      | 7 ++++++-
 ipc/util.h                                      | 2 ++
 4 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 074b775..bb851d0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1813,7 +1813,10 @@
 			See Documentation/filesystems/nfs/nfsroot.txt.
 
 	ipcmni_extend	[KNL] Extend the maximum number of unique System V
-			IPC identifiers from 32,768 to 16,777,216.
+			IPC identifiers from 32,768 to 16,777,216. Also do
+			cyclical identifier allocation through the entire
+			24-bit identifier space to reduce the chance of
+			identifier reuse.
 
 	irqaffinity=	[SMP] Set the default irq affinity mask
 			The argument is a cpu list, as described above.
diff --git a/ipc/ipc_sysctl.c b/ipc/ipc_sysctl.c
index 73b7782..d9ac6ca 100644
--- a/ipc/ipc_sysctl.c
+++ b/ipc/ipc_sysctl.c
@@ -122,6 +122,7 @@ static int proc_ipc_sem_dointvec(struct ctl_table *table, int write,
 static int int_max = INT_MAX;
 int ipc_mni = IPCMNI;
 int ipc_mni_shift = IPCMNI_SHIFT;
+bool ipc_mni_extended;
 
 static struct ctl_table ipc_kern_table[] = {
 	{
@@ -252,6 +253,7 @@ static int __init ipc_mni_extend(char *str)
 {
 	ipc_mni = IPCMNI_EXTEND;
 	ipc_mni_shift = IPCMNI_EXTEND_SHIFT;
+	ipc_mni_extended = true;
 	pr_info("IPCMNI extended to %d.\n", ipc_mni);
 	return 0;
 }
diff --git a/ipc/util.c b/ipc/util.c
index 0a835a4..78e14ac 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -221,7 +221,12 @@ static inline int ipc_idr_alloc(struct ipc_ids *ids, struct kern_ipc_perm *new)
 	 */
 
 	if (next_id < 0) { /* !CHECKPOINT_RESTORE or next_id is unset */
-		idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
+		if (ipc_mni_extended)
+			idx = idr_alloc_cyclic(&ids->ipcs_idr, new, 0, ipc_mni,
+						GFP_NOWAIT);
+		else
+			idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
+
 		if ((idx <= ids->last_idx) && (++ids->seq > IPCID_SEQ_MAX))
 			ids->seq = 0;
 		new->seq = ids->seq;
diff --git a/ipc/util.h b/ipc/util.h
index 6a88d51..9f0dd79 100644
--- a/ipc/util.h
+++ b/ipc/util.h
@@ -33,6 +33,7 @@
 #ifdef CONFIG_SYSVIPC_SYSCTL
 extern int ipc_mni;
 extern int ipc_mni_shift;
+extern bool ipc_mni_extended;
 
 #define IPCMNI_SEQ_SHIFT	ipc_mni_shift
 #define IPCMNI_IDX_MASK		((1 << ipc_mni_shift) - 1)
@@ -40,6 +41,7 @@
 #else /* CONFIG_SYSVIPC_SYSCTL */
 
 #define ipc_mni			IPCMNI
+#define ipc_mni_extended	false
 #define IPCMNI_SEQ_SHIFT	IPCMNI_SHIFT
 #define IPCMNI_IDX_MASK		((1 << IPCMNI_SHIFT) - 1)
 #endif /* CONFIG_SYSVIPC_SYSCTL */
-- 
1.8.3.1
