Date:	Sat, 20 Jun 2009 09:24:52 +0530
From:	Peter Chacko <peterchacko35@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	rick.jones2@...com, radhamohan_ch@...oo.com, netdev@...r.kernel.org
Subject: Re: can we reuse an skb

But if a network application is holding on to NIC-driver-level pooled
buffers, we also have an architectural issue: it violates layered
software design. The application operates at a stateful protocol level,
while the driver should stay stateless and flow-unaware.

Another thought along the lines of Radha's original idea:

Assume that we have n cores capable of processing packets at the same
time. If we trade memory for computation, why don't we pre-allocate "n"
dedicated skb buffers regardless of the size of each packet, each as big
as the MTU itself (forget jumbo frames for now)? Today, dev_alloc_skb()
allocates based on the packet length, which is optimized for memory
usage.

Each dedicated memory buffer then becomes a per-CPU/per-thread data structure.
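To make it concrete, here is a rough sketch of what I have in mind
(purely hypothetical code for illustration; rx_pool_skb, rx_pool_init()
and rx_pool_get() are made-up names, not anything in the tree):

#include <linux/skbuff.h>
#include <linux/percpu.h>

#define POOL_SKB_SIZE	1536	/* enough for a 1500-byte MTU frame */

/* One pre-allocated, MTU-sized skb per CPU (hypothetical). */
static DEFINE_PER_CPU(struct sk_buff *, rx_pool_skb);

static int rx_pool_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct sk_buff *skb = dev_alloc_skb(POOL_SKB_SIZE);

		if (!skb)
			return -ENOMEM;	/* error unwinding omitted for brevity */
		per_cpu(rx_pool_skb, cpu) = skb;
	}
	return 0;
}

/*
 * RX path: hand out this CPU's pre-allocated skb if it is free,
 * otherwise fall back to the normal allocator so an upper layer
 * sitting on the buffer can never exhaust the pool.
 */
static struct sk_buff *rx_pool_get(void)
{
	struct sk_buff **slot;
	struct sk_buff *skb;

	slot = &get_cpu_var(rx_pool_skb);
	skb = *slot;
	*slot = NULL;
	put_cpu_var(rx_pool_skb);

	if (!skb)
		skb = dev_alloc_skb(POOL_SKB_SIZE);
	return skb;
}

The fallback to dev_alloc_skb() is deliberate: without it, any scheme
with a fixed number of buffers runs straight into the DoS problem David
mentions below, since upper layers may hold an skb for as long as they
like.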

In the I/O stack world we have the buffer cache, identified by buffer
headers. That design was conceived early on, when each block-level write
was typically 512 bytes and the same size for every block. Why don't we
adapt that into the network stack?

Could you share the problems you see with this approach?


thanks
peter.
On Sat, Jun 20, 2009 at 4:59 AM, David Miller <davem@...emloft.net> wrote:
> From: Rick Jones <rick.jones2@...com>
> Date: Fri, 19 Jun 2009 09:56:06 -0700
>
>> Assuming a driver did have its own "pool" and didn't rely on the
>> pool(s) from which skbs are drawn, doesn't that mean you now have to
>> have another configurable?  There are no good guarantees on when the
>> upper layers will be finished with the skb, right?  Which means you
>> would be requiring the admin(s) to have an idea of how long their
>> applications wait to pull data from their sockets, and to configure
>> your driver accordingly.
>>
>> It would seem there would have to be a considerable performance gain
>> demonstrated for that kind of thing?
>
> Applications can hold onto such data "forever" if they want to.
>
> Any scheme which doesn't allow dynamically increasing the pool
> is prone to trivial DoS.
>
