Message-Id: <20140923.122420.1216927815526255624.davem@davemloft.net>
Date: Tue, 23 Sep 2014 12:24:20 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: david.stevens@...cle.com
Cc: Raghuram.Kothakota@...cle.com, netdev@...r.kernel.org
Subject: Re: [PATCHv6 net-next 1/3] sunvnet: upgrade to VIO protocol
version 1.6
From: David L Stevens <david.stevens@...cle.com>
Date: Thu, 18 Sep 2014 09:03:52 -0400
> So, if this is actually too much memory, I was more inclined to reduce the ring
> size rather than either add complicating code to handle active-ring reallocation
> that would typically run once per boot, or add module parameters to specify
> the buffer size. TSO/GSO will need 64K to perform well, regardless of the
> device MTU.
The only reason we are having this discussion is because of how we
handle TX packets.
I think we really should directly map the SKBs in vnet_start_xmit()
instead of using these preallocated TX buffers.
The only thing to accommodate is the VNET_PACKET_SKIP, but that
shouldn't be hard at all.
And I am rather certain that an LDC map call will be cheaper than
copying the entire packet.
Then the MTU will have no material impact on per-vnet_port memory
costs, and bulk sending performance should also increase.
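A rough sketch of what that direct mapping could look like, assuming the
ldc_map_single() interface from arch/sparc/include/asm/ldc.h; the helper
name, the port->vio.lp field, and the headroom handling here are
illustrative assumptions, not the actual driver code:

```c
/* Sketch only: map the skb data for the hypervisor instead of copying
 * it into a preallocated ring buffer.  VNET_PACKET_SKIP bytes must
 * precede the packet data, which sufficient skb headroom can provide
 * (skb_cow_head() if not).  Alignment fixups, fragment handling, and
 * error paths are omitted.
 */
static int vnet_tx_map_skb(struct vnet_port *port, struct sk_buff *skb,
			   struct ldc_trans_cookie *cookies, int ncookies)
{
	void *start = skb->data - VNET_PACKET_SKIP;	/* assumes headroom */
	unsigned int len = skb->len + VNET_PACKET_SKIP;

	/* One LDC map call per packet replaces the memcpy into tx_bufs[]. */
	return ldc_map_single(port->vio.lp, start, len,
			      cookies, ncookies,
			      LDC_MAP_SHADOW | LDC_MAP_DIRECT | LDC_MAP_RW);
}
```

The corresponding unmap would then happen when the remote side acks the
descriptor, instead of the buffer being reused in place.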
David, I know you've worked hard on this patch set, but I'm going to
defer on this series for now.  There are several implementation-level
issues that still seem to be up in the air.
I'm almost completely sold on your PMTU scheme, however if we do
direct mapping of SKBs in vnet_start_xmit() then the performance
characteristics with larger MTUs might be different.
Thanks.