Message-Id: <20161028181129.7311-1-colin.king@canonical.com>
Date: Fri, 28 Oct 2016 19:11:29 +0100
From: Colin King <colin.king@...onical.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Davidlohr Bueso <dave@...olabs.net>,
Manfred Spraul <manfred@...orfullife.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Nikolay Borisov <kernel@...p.com>
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH] ipc/sem: ensure we left shift a ULL rather than a 32 bit integer
From: Colin Ian King <colin.king@...onical.com>
The left shift amount is sop->sem_num % 64, which can be as large as 63.
Shifting a plain 32 bit integer by 32 or more bits is undefined behaviour,
so make sure we shift a ULL rather than a 32 bit value.
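As a standalone illustration (not part of the patch; the values below are
made up), the difference can be seen in a small user space program:

#include <stdio.h>

int main(void)
{
	unsigned int sem_num = 40;	/* any value >= 32 shows the problem */

	/* 1 is a plain 32 bit int, so shifting it left by 32 or more
	 * bits is undefined behaviour; the resulting mask cannot be
	 * relied on. */
	unsigned long bad = 1 << (sem_num % 64);

	/* 1ULL is at least 64 bits wide, so shifts of up to 63 bits
	 * are well defined and set the intended bit. */
	unsigned long good = 1ULL << (sem_num % 64);

	printf("bad=%#lx good=%#lx\n", bad, good);
	return 0;
}

Building this with -fsanitize=undefined flags the first shift at run time.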
CoverityScan CID#1372862 "Bad bit shift operation"
Fixes: 7c24530cb4e3c0ae ("ipc/sem: optimize perform_atomic_semop()")
Signed-off-by: Colin Ian King <colin.king@...onical.com>
---
ipc/sem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index ebd18a7..ca4aa23 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1839,7 +1839,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	max = 0;
 	for (sop = sops; sop < sops + nsops; sop++) {
-		unsigned long mask = 1 << ((sop->sem_num) % BITS_PER_LONG);
+		unsigned long mask = 1ULL << ((sop->sem_num) % BITS_PER_LONG);
 
 		if (sop->sem_num >= max)
 			max = sop->sem_num;
--
2.9.3