Date:	Tue, 26 Oct 2010 10:25:16 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Jean-Michel Hautbois <jhautbois@...il.com>
Cc:	netdev@...r.kernel.org
Subject: Re: dev_alloc_skb and latency issues

On Tuesday, 26 October 2010 at 10:04 +0200, Jean-Michel Hautbois wrote:
> Hi Everyone !
> 
> I am new to this mailing list, and I hope this kind of question hasn't
> already been answered before (I didn't find anything in the archives...).
> I am facing some latency issues in the network layer (I am using a
> bridge in order to transmit data from one interface to another).
> 
> I am focusing on the allocation of memory using alloc_skb for *every* new
> packet, and the freeing of each packet before receiving a new one.
> My use case is quite simple: I always have similar packets (some bytes
> change, but the size stays the same).
> I don't think I am the only one with such a use case, and am thinking
> about an optimization for this case (probably useful for others too):
> why do we have to allocate from kmem_cache for every new packet?
> 
> We could probably use a little piece of code which would reuse the
> buffer if it isn't needed by anyone else.
> I am thinking that if the buffer is ready to be freed (in kfree_skb or
> skb_release_all, for instance), we could mark the skb as "free" but not
> actually free the memory.
> On the next dev_alloc_skb, check this mark, and if it is set, do not
> allocate; just "memset" the skb.
> 
> In my view this would be really efficient when packets are similar.
> Anyway, you probably have ideas about this stuff, and I welcome your
> advice on it :).

Once you add all the necessary code to handle a new cache layer, you end
up in a situation that brings nothing but extra cost and bugs (see the
recent discussion about the rx_recycle stuff in the gianfar driver).

Really, kmem_cache is pretty fast and scalable. If it is not, it is
better to work on that instead of adding yet another layer.



--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
