Message-Id: <20181002161942.1037-1-manfred@colorfullife.com>
Date: Tue, 2 Oct 2018 18:19:42 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: LKML <linux-kernel@...r.kernel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Waiman Long <longman@...hat.com>
Cc: 1vier1@....de, Andrew Morton <akpm@...ux-foundation.org>,
Kees Cook <keescook@...omium.org>,
"Luis R . Rodriguez" <mcgrof@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Manfred Spraul <manfred@...orfullife.com>
Subject: [RFC, PATCH] ipc/util.c: use idr_alloc_cyclic() for ipc allocations
A bit related to the patch series that increases IPCMNI:

(User space) id reuse creates a risk of data corruption:

Process A: calls an ipc syscall
Process A: sleeps right at the beginning of the syscall
Process B: frees the ipc object (i.e. calls ...ctl(IPC_RMID))
Process B: creates a new ipc object (i.e. calls ...get())
	<if the new and the old object have the same id>
Process A: wakes up and accesses the new object
To reduce the probability that the new and the old object get the
same user space id, the current implementation appends a sequence
number to the object's index in the idr tree. To further reduce the
reuse probability, switch from idr_alloc() to idr_alloc_cyclic().
The patch cycles over at least RADIX_TREE_MAP_SIZE ids, i.e. as long
as only a few objects exist, lookups still go through a single radix
tree node.

As an option, this could be made dependent on the extended mode: in
extended mode, cycle over e.g. at least 16k ids.
Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
---
Open questions:
- Is there a significant performance advantage, especially when
there are many ipc ids?
- Over how many ids should the code always cycle?
- Further review remarks?
ipc/util.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/ipc/util.c b/ipc/util.c
index 0af05752969f..6f83841f6761 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -216,10 +216,30 @@ static inline int ipc_idr_alloc(struct ipc_ids *ids, struct kern_ipc_perm *new)
*/
if (next_id < 0) { /* !CHECKPOINT_RESTORE or next_id is unset */
+ int idr_max;
+
new->seq = ids->seq++;
if (ids->seq > IPCID_SEQ_MAX)
ids->seq = 0;
- idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
+
+ /*
+ * If a user space visible id is reused, then this creates a
+ * risk of data corruption. To reduce the probability that
+ * an id is reused, two approaches are used:
+ * 1) the idr index is allocated cyclically.
+ * 2) the user space id is built by concatenating the
+ * internal idr index with a sequence number.
+ * To avoid that both numbers have the same cycle time, try
+ * to set the size for the cyclic alloc to an odd number.
+ */
+ idr_max = ids->in_use*2+1;
+ if (idr_max < RADIX_TREE_MAP_SIZE-1)
+ idr_max = RADIX_TREE_MAP_SIZE-1;
+ if (idr_max > IPCMNI)
+ idr_max = IPCMNI;
+
+ idx = idr_alloc_cyclic(&ids->ipcs_idr, new, 0, idr_max,
+ GFP_NOWAIT);
} else {
new->seq = ipcid_to_seqx(next_id);
idx = idr_alloc(&ids->ipcs_idr, new, ipcid_to_idx(next_id),
--
2.17.1