Date:	Tue, 20 Mar 2007 01:36:11 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Davide Libenzi <davidel@...ilserver.org>
CC:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch 6/13] signal/timer/event fds v7 - timerfd core ...

Davide Libenzi wrote:

> +struct timerfd_ctx {
> +	struct hrtimer tmr;
> +	ktime_t tintv;
> +	spinlock_t lock;
> +	wait_queue_head_t wqh;
> +	unsigned long ticks;
> +};

> +static struct kmem_cache *timerfd_ctx_cachep;

> +	timerfd_ctx_cachep = kmem_cache_create("timerfd_ctx_cache",
> +						sizeof(struct timerfd_ctx),
> +						0, SLAB_PANIC, NULL, NULL);


Do we really expect thousands of active timerfd_ctx?

If not, using kmalloc()/kfree() would be fine, because sizeof(struct 
timerfd_ctx) is so small.
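
Something like this untested sketch is what I mean (the helper names
timerfd_alloc_ctx()/timerfd_free_ctx() are made up, they are not from
your patch):

	/* needs <linux/slab.h> for kmalloc()/kfree() */
	static struct timerfd_ctx *timerfd_alloc_ctx(void)
	{
		/* kmalloc() is served from the matching general size
		 * cache, so no dedicated kmem_cache is needed for a
		 * struct this small. */
		return kmalloc(sizeof(struct timerfd_ctx), GFP_KERNEL);
	}

	static void timerfd_free_ctx(struct timerfd_ctx *ctx)
	{
		kfree(ctx);
	}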

On SMP/NUMA platforms, each new kmem_cache is rather expensive: memory is
allocated at kmem_cache_create() time, and more memory is used whenever the
cache is not empty, with slabs sitting on a freelist for each cpu/node.

Using a general cache might be cheaper: no memory overhead for yet another
kmem_cache.

I know individual caches are good for spotting memory leaks, but in the
timerfd case you don't have memory leaks, do you? :)


