Date:	Fri, 21 Nov 2014 19:08:18 +0300
From:	Andrey Ryabinin <a.ryabinin@...sung.com>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	linux-kernel@...r.kernel.org, David Rientjes <rientjes@...gle.com>,
	Christoph Lameter <cl@...ux.com>,
	Andrey Ryabinin <a.ryabinin@...sung.com>
Subject: [PATCH v2 1/2] kernel: irq: use kmem_cache for allocating struct
 irq_desc

After enabling alignment checks in UBSan, I noticed a lot of
reports like this:

    UBSan: Undefined behaviour in ../kernel/irq/chip.c:195:14
    member access within misaligned address ffff88003e80d6f8
    for type 'struct irq_desc' which requires 64 byte alignment

struct irq_desc is declared with the ____cacheline_internodealigned_in_smp
attribute. However, in some cases it is allocated dynamically via kmalloc(),
and in the general case kmalloc() guarantees only sizeof(void *) alignment.
Use a separate slab cache to make struct irq_desc properly aligned on SMP
configurations.
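
For reference (not part of the patch): KMEM_CACHE() boils down to a
kmem_cache_create() call that passes the structure's natural alignment,
which is what gives irq_desc the alignment that kmalloc() does not
promise. A rough sketch of what the cache creation below amounts to:

    /*
     * Sketch only: KMEM_CACHE(irq_desc, SLAB_PANIC) expands to roughly
     * the following, creating a cache whose objects are aligned to
     * __alignof__(struct irq_desc), i.e. the cacheline alignment
     * requested by ____cacheline_internodealigned_in_smp.
     */
    irq_desc_cachep = kmem_cache_create("irq_desc",
                                        sizeof(struct irq_desc),
                                        __alignof__(struct irq_desc),
                                        SLAB_PANIC, NULL);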

This could also slightly reduce memory usage on some configurations.
E.g. in my setup sizeof(struct irq_desc) == 320, which means the
kmalloc-512 cache would be used when allocating irq_desc via kmalloc().
In that case a separate slab cache saves 512 - 320 = 192 bytes per
irq_desc.
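
To illustrate the size-class effect, a hypothetical debug snippet
(not part of the patch): ksize() reports the rounded-up kmalloc
bucket, while the dedicated cache sizes objects to the (aligned)
struct size:

    /*
     * Hypothetical check: with sizeof(struct irq_desc) == 320 the
     * allocation falls into the kmalloc-512 cache, so ksize() reports
     * 512 here, i.e. 192 bytes of padding per object.
     */
    struct irq_desc *d = kmalloc(sizeof(*d), GFP_KERNEL);

    if (d) {
        pr_info("irq_desc: %zu bytes requested, %zu allocated\n",
                sizeof(*d), ksize(d));
        kfree(d);
    }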

Signed-off-by: Andrey Ryabinin <a.ryabinin@...sung.com>
Acked-by: David Rientjes <rientjes@...gle.com>
---

Changes since v1:
  - Drop kmem_cache_zalloc_node() and use kmem_cache_alloc_node() with the __GFP_ZERO flag.

 kernel/irq/irqdesc.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index a1782f8..c7a812c 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -23,6 +23,8 @@
  */
 static struct lock_class_key irq_desc_lock_class;
 
+static struct kmem_cache *irq_desc_cachep;
+
 #if defined(CONFIG_SMP)
 static void __init init_irq_default_affinity(void)
 {
@@ -137,7 +139,7 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 	struct irq_desc *desc;
 	gfp_t gfp = GFP_KERNEL;
 
-	desc = kzalloc_node(sizeof(*desc), gfp, node);
+	desc = kmem_cache_alloc_node(irq_desc_cachep, gfp | __GFP_ZERO, node);
 	if (!desc)
 		return NULL;
 	/* allocate based on nr_cpu_ids */
@@ -158,7 +160,7 @@ static struct irq_desc *alloc_desc(int irq, int node, struct module *owner)
 err_kstat:
 	free_percpu(desc->kstat_irqs);
 err_desc:
-	kfree(desc);
+	kmem_cache_free(irq_desc_cachep, desc);
 	return NULL;
 }
 
@@ -174,7 +176,7 @@ static void free_desc(unsigned int irq)
 
 	free_masks(desc);
 	free_percpu(desc->kstat_irqs);
-	kfree(desc);
+	kmem_cache_free(irq_desc_cachep, desc);
 }
 
 static int alloc_descs(unsigned int start, unsigned int cnt, int node,
@@ -218,6 +220,8 @@ int __init early_irq_init(void)
 
 	init_irq_default_affinity();
 
+	irq_desc_cachep = KMEM_CACHE(irq_desc, SLAB_PANIC);
+
 	/* Let arch update nr_irqs and return the nr of preallocated irqs */
 	initcnt = arch_probe_nr_irqs();
 	printk(KERN_INFO "NR_IRQS:%d nr_irqs:%d %d\n", NR_IRQS, nr_irqs, initcnt);
-- 
2.1.3
