Message-ID: <20100715085917.6a9cdd88@nehalam>
Date:	Thu, 15 Jul 2010 08:59:17 -0700
From:	Stephen Hemminger <shemminger@...tta.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
Cc:	Junchang Wang <junchangwang@...il.com>, romieu@...zoreil.com,
	netdev@...r.kernel.org
Subject: Re: Question about way that NICs deliver packets to the kernel

On Thu, 15 Jul 2010 15:33:37 +0100
Ben Hutchings <bhutchings@...arflare.com> wrote:

> On Thu, 2010-07-15 at 22:24 +0800, Junchang Wang wrote:
> > Hi list,
> > My understanding of the way that NICs deliver packets to the kernel is
> > as follows. Please correct me if any of this is wrong. Thanks.
> > 
> > 1) The device receive buffer is fixed. When the kernel is notified of the
> > arrival of a new packet, it dynamically allocates a new skb and copies the
> > packet into it. For example, 8139too.
> > 
> > 2) The device buffer is mapped with streaming DMA. When the kernel is
> > notified of the arrival of a new packet, it unmaps the previously mapped
> > region and hands the buffer up directly. Obviously, there is NO memcpy
> > operation; the additional cost is the streaming DMA map/unmap operations.
> > For example, e100 and e1000.
> > 
> > Here comes my question:
> > 1) Is there a principle indicating which one is better? Are streaming DMA
> > map/unmap operations more expensive than a memcpy operation?
> 
> DMA should result in lower CPU usage and higher maximum performance.
> 
> > 2) Why does r8169 favor the first approach even though it supports both? I
> > converted r8169 to the second one and got a 5% performance boost. Below are
> > results from a netperf TCP_STREAM test with a 1.6K-byte packet length.
> >         scheme 1    scheme 2    Imp.
> > r8169     683M        718M       5%
> [...]
> 
> You should also compare the CPU usage.
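The two schemes discussed above can be contrasted in a simplified userspace sketch. This is illustrative only: `malloc()` stands in for `dev_alloc_skb()` and ring refill, and the streaming DMA map/unmap calls (`dma_map_single()`/`dma_unmap_single()`) are elided; the function names are hypothetical, not driver code.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PKT_MAX 2048  /* illustrative maximum frame buffer size */

/* Scheme 1 (8139too style): the device DMAs into a fixed ring buffer;
 * on receive the driver allocates a fresh skb-like buffer and copies
 * the frame out, so the ring slot can be reused immediately. */
static unsigned char *rx_copy(const unsigned char *ring_slot, size_t len)
{
    unsigned char *skb = malloc(len);   /* dev_alloc_skb() analogue */
    if (skb)
        memcpy(skb, ring_slot, len);    /* the extra per-packet copy */
    return skb;
}

/* Scheme 2 (e1000 style): the driver hands the DMA buffer itself up
 * the stack and refills the ring slot with a freshly allocated buffer.
 * No memcpy; the cost moves to allocation plus map/unmap (elided). */
static unsigned char *rx_flip(unsigned char **ring_slot, size_t len)
{
    unsigned char *skb = *ring_slot;    /* pass the buffer up as-is */
    (void)len;
    *ring_slot = malloc(PKT_MAX);       /* replacement for the ring */
    return skb;
}
```

In scheme 1 the per-packet cost scales with frame length (the memcpy); in scheme 2 it is roughly constant (allocation plus cache/IOMMU work for map/unmap), which is why the crossover depends on typical packet size and CPU.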

Also, many drivers copy small receives into a new buffer,
which saves memory and often gives better performance.
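The small-receive copy is commonly gated by a "copybreak" threshold (several drivers expose a tunable along the lines of `rx_copybreak`). A minimal userspace sketch of that decision, with a hypothetical threshold and `malloc()` standing in for skb allocation and ring refill:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define RX_COPYBREAK 256  /* hypothetical threshold, not a real default */
#define PKT_MAX 2048      /* illustrative ring-buffer size */

/* Below the threshold, copy the frame into a right-sized buffer and
 * keep the large DMA buffer in the ring (cheap copy, no refill).
 * At or above it, hand the DMA buffer up and allocate a replacement. */
static unsigned char *rx_one(unsigned char **dma_buf, size_t len, int *copied)
{
    unsigned char *skb;

    if (len < RX_COPYBREAK) {
        skb = malloc(len);              /* small, right-sized buffer */
        if (skb)
            memcpy(skb, *dma_buf, len);
        *copied = 1;                    /* DMA buffer stays in the ring */
        return skb;
    }
    skb = *dma_buf;                     /* zero-copy hand-off */
    *dma_buf = malloc(PKT_MAX);         /* refill the ring slot */
    *copied = 0;
    return skb;
}
```

For small frames the copy touches only a few cache lines and avoids tying a full-size buffer to each queued packet, which is the space and performance win described above.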
