Message-Id: <20170803184136.13855-2-dave@stgolabs.net>
Date: Thu, 3 Aug 2017 11:41:36 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: akpm@...ux-foundation.org
Cc: manfred@...orfullife.com, dave@...olabs.net,
linux-kernel@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH 2/2] ipc/sem: play nicer with large nsops allocations
Replacing semop()'s kmalloc with kvmalloc was originally proposed by
Manfred on the premise that the call can be made with larger-than-order-1
sizes. For example, while Oracle recommends setting SEMOPM to a _minimum_
of 100, some distros[1] encourage scaling it with the number of db tasks
(PROCESSES), which can get fishy for large systems (easily going
beyond 1000).
[1] An Example of Semaphore Settings
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/sect-Oracle_9i_and_10g_Tuning_Guide-Setting_Semaphores-An_Example_of_Semaphore_Settings.html
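(For scale, assuming struct sembuf stays at 6 bytes: it takes roughly
1300+ sembufs in one call before the sops array outgrows an order-1
kmalloc bucket (8 KiB with 4 KiB pages), a threshold the tuning guidance
above can plausibly reach.)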
So let's just convert this to kvmalloc, just like the rest of the allocations
we do in ipc. While the vmalloc fallback obviously involves more overhead,
it is by far the uncommon path, and it's better for the user than simply
erroring out when kmalloc fails.
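For reference, the resulting allocation pattern is sketched below (a
minimal, simplified illustration of the kvmalloc()/kvfree() pairing this
patch adopts, not the literal semtimedop() code; the fast_sops on-stack
array and SEMOPM_FAST cutoff mirror what ipc/sem.c already does):

	struct sembuf fast_sops[SEMOPM_FAST];
	struct sembuf *sops = fast_sops;

	if (nsops > SEMOPM_FAST) {
		/*
		 * kvmalloc() attempts a regular kmalloc() first and only
		 * falls back to vmalloc() if the physically contiguous
		 * allocation fails, so the common case stays as before.
		 */
		sops = kvmalloc(sizeof(*sops) * nsops, GFP_KERNEL);
		if (sops == NULL)
			return -ENOMEM;
	}

	/* ... copy the user sembufs and perform the operations ... */

	if (sops != fast_sops)
		kvfree(sops);	/* frees kmalloc()ed and vmalloc()ed memory alike */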
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
ipc/sem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index e9980cba07fd..30d80bfc1ec8 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1784,7 +1784,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
if (nsops > ns->sc_semopm)
return -E2BIG;
if (nsops > SEMOPM_FAST) {
- sops = kmalloc(sizeof(*sops)*nsops, GFP_KERNEL);
+ sops = kvmalloc(sizeof(*sops)*nsops, GFP_KERNEL);
if (sops == NULL)
return -ENOMEM;
}
@@ -2016,7 +2016,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
rcu_read_unlock();
out_free:
if (sops != fast_sops)
- kfree(sops);
+ kvfree(sops);
return error;
}
--
2.12.0