Date:	Thu, 26 Nov 2009 23:25:27 +0100
From:	Andi Kleen <andi@...stfloor.org>
To:	Andrew Grover <andy.grover@...il.com>
Cc:	netdev@...r.kernel.org
Subject: Re: how expensive are mallocs?

Andrew Grover <andy.grover@...il.com> writes:

> How much effort generally makes sense to avoid mallocs? For example,

The slab allocators are heavily optimized, with a fast path for
allocation and freeing that's essentially "disable interrupts;
unlink object from a list; reenable". That's not expensive
(unless you're running on a CPU where disabling interrupts is).

The main cost comes when you free objects on a different CPU
(or, worse, a different NUMA node) than the one that allocated them.
In that case you can end up with bounced cache lines, which are slow.
If you can avoid that, you're in good shape. Even if you can't,
it would take a major effort to do better yourself.

> Also, RDS has its own per-cpu page remainder allocator (see
> net/rds/page.c) for kernel send buffers. Would cutting this code and
> just using kmalloc be recommended? Doesn't SL?B already do per-cpu
> pools?

Slab is all per cpu in the fast path, but see above.

> Does this stuff even matter enough to rise above the noise in benchmarks?

Yes it does in some circumstances, but it's hard to do better.

One example of doing better in special circumstances is Eric's
RPS work, but doing these things is not easy and only worth it
for really critical cases.

-Andi

-- 
ak@...ux.intel.com -- Speaking for myself only.