Message-ID: <1f808b4a0906202241l1d74364ft52d99ecb6b0274f5@mail.gmail.com>
Date:	Sun, 21 Jun 2009 11:11:16 +0530
From:	Peter Chacko <peterchacko35@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	rick.jones2@...com, radhamohan_ch@...oo.com, netdev@...r.kernel.org
Subject: Re: can we reuse an skb

Hi Dave,

Here I am considering a special case where the Linux stack is not used in
a host environment: the box is a dedicated packet processor. Application
data is not expected (and is discarded if it arrives).

In the case of a host, yes, we cannot pre-allocate buffers, as that
would require us to create another copy at the L4 junction. The current
memory allocation for packet buffers is only meant for that case, not
for the special case where the Linux box is a 100% packet processor. I
wish to argue that in this special case we don't need an skb_alloc()
or skb_free() sort of interface.
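To make the idea concrete, here is a minimal userspace sketch of what
"no alloc/free pair" means: a fixed buffer descriptor that is reset in
place and handed straight back to the receive path after processing.
All names (pkt_buf, pkt_claim, pkt_recycle) and the struct layout are
my own illustration, not any Linux kernel interface.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical fixed packet buffer; sizes and names are assumptions. */
enum { PKT_MTU = 1500 };

struct pkt_buf {
    unsigned char data[PKT_MTU];
    size_t len;     /* bytes currently valid */
    int in_use;     /* owned by the processing pipeline? */
};

/* Claim a recycled buffer for the next incoming frame. */
static int pkt_claim(struct pkt_buf *b, const void *frame, size_t len)
{
    if (b->in_use || len > PKT_MTU)
        return -1;
    memcpy(b->data, frame, len);
    b->len = len;
    b->in_use = 1;
    return 0;
}

/* Hand a processed buffer straight back to the receive path:
 * no free/alloc pair, just reset the metadata in place. */
static void pkt_recycle(struct pkt_buf *b)
{
    b->len = 0;
    b->in_use = 0;
}
```

The point is that the buffer's lifetime is the lifetime of the
appliance, so "allocation" degenerates to flipping an ownership flag.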

What I am considering here is aggressive optimization of memory buffers
for a multi-layer packet processor, without needing to move packets
into user space. In that case, I am optimizing my custom network stack
with pre-allocated MTU-sized and a few jumbo-sized buffers, and no
interrupts, since I run NAPI polling at all times on this dedicated
appliance. I keep these buffers in the L1 cache and hence maintain
different sets of pools for different cores. I am currently guiding
my engineers through the code changes now.
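A rough sketch of the per-core pool layout I have in mind follows.
Everything here is illustrative (NCORES, POOL_SIZE, the struct and
function names are assumptions, not kernel interfaces); the two
properties it demonstrates are that each core only ever touches its
own pool (so no locking) and that the LIFO free list tends to hand
back the most recently used, cache-hot buffer.

```c
#include <stddef.h>

/* Illustrative sizes; a real appliance would tune these. */
enum { NCORES = 4, POOL_SIZE = 64, BUF_SIZE = 1514 };

struct buf {
    unsigned char data[BUF_SIZE];
    struct buf *next;
};

struct core_pool {
    struct buf bufs[POOL_SIZE];  /* backing storage, allocated once at boot */
    struct buf *free_list;       /* LIFO keeps recently used bufs cache-hot */
};

static struct core_pool pools[NCORES];

static void pool_init(struct core_pool *p)
{
    p->free_list = NULL;
    for (int i = 0; i < POOL_SIZE; i++) {
        p->bufs[i].next = p->free_list;
        p->free_list = &p->bufs[i];
    }
}

/* Lock-free by construction: each core calls this only on its own pool. */
static struct buf *pool_get(int core)
{
    struct buf *b = pools[core].free_list;
    if (b)
        pools[core].free_list = b->next;
    return b;   /* NULL means the fixed pool is exhausted */
}

static void pool_put(int core, struct buf *b)
{
    b->next = pools[core].free_list;
    pools[core].free_list = b;
}
```

Note the fixed POOL_SIZE is exactly the property Dave objects to below:
when pool_get() returns NULL the only options are to drop the frame or
grow the pool dynamically.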

I am seeking your advice on whether anybody has done this already,
whether a patch is available, or whether there is any advice against
it.

I would greatly appreciate it if any of you could share your
experience with related work.



Best regards,
Peter Chacko.
On Sat, Jun 20, 2009 at 4:59 AM, David Miller <davem@...emloft.net> wrote:
> From: Rick Jones <rick.jones2@...com>
> Date: Fri, 19 Jun 2009 09:56:06 -0700
>
>> Assuming a driver did have its own "pool" and didn't rely on the
>> pool(s) from which skbs are drawn, doesn't that mean you have to now
>> have another configurable?  There are no good guarantees on when the
>> upper layers will be finished with the skb, right?  Which means you
>> would be requiring the admin(s) to have an idea of how long their
>> applications wait to pull data from their sockets and configure your
>> driver accordingly.
>>
>> It would seem there would have to be a considerable performance gain
>> demonstrated for that kind of thing?
>
> Applications can hold onto such data "forever" if they want to.
>
> Any scheme which doesn't allow dynamically increasing the pool
> is prone to trivial DoS.
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>