Message-Id: <20100904042841.2655.69990.sendpatchset@jupiter1-ltc-lp2.austin.ibm.com>
Date: Fri, 03 Sep 2010 23:28:41 -0500
From: Santiago Leon <santil@...ux.vnet.ibm.com>
To: netdev@...r.kernel.org
Cc: brking@...ux.vnet.ibm.com, Santiago Leon <santil@...ux.vnet.ibm.com>, anton@...ba.org
Subject: [patch 08/21] ibmveth: Add optional flush of rx buffer

On some machines we can improve the bandwidth by ensuring rx buffers
are not in the cache. Add a module option that is disabled by default
that flushes rx buffers on insertion.

Signed-off-by: Anton Blanchard <anton@...ba.org>
Signed-off-by: Santiago Leon <santil@...ux.vnet.ibm.com>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig//drivers/net/ibmveth.c	2010-09-03 22:18:54.000000000 -0500
+++ net-next-2.6/drivers/net/ibmveth.c	2010-09-03 22:18:54.000000000 -0500
@@ -127,6 +127,10 @@ module_param(rx_copybreak, uint, 0644);
 MODULE_PARM_DESC(rx_copybreak,
 	"Maximum size of packet that is copied to a new buffer on receive");
 
+static unsigned int rx_flush __read_mostly = 0;
+module_param(rx_flush, uint, 0644);
+MODULE_PARM_DESC(rx_flush, "Flush receive buffers before use");
+
 struct ibmveth_stat {
 	char name[ETH_GSTRING_LEN];
 	int offset;
@@ -234,6 +238,14 @@ static int ibmveth_alloc_buffer_pool(str
 	return 0;
 }
 
+static inline void ibmveth_flush_buffer(void *addr, unsigned long length)
+{
+	unsigned long offset;
+
+	for (offset = 0; offset < length; offset += SMP_CACHE_BYTES)
+		asm("dcbfl %0,%1" :: "b" (addr), "r" (offset));
+}
+
 /* replenish the buffers for a pool. note that we don't need to
  * skb_reserve these since they are used for incoming...
  */
@@ -286,6 +298,12 @@ static void ibmveth_replenish_buffer_poo
 		desc.fields.flags_len = IBMVETH_BUF_VALID | pool->buff_size;
 		desc.fields.address = dma_addr;
 
+		if (rx_flush) {
+			unsigned int len = min(pool->buff_size,
+					adapter->netdev->mtu +
+					IBMVETH_BUFF_OH);
+			ibmveth_flush_buffer(skb->data, len);
+		}
 		lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address, desc.desc);
 
 		if (lpar_rc != H_SUCCESS)
@@ -1095,6 +1113,9 @@ static int ibmveth_poll(struct napi_stru
 			skb_copy_to_linear_data(new_skb,
 						skb->data + offset,
 						length);
+			if (rx_flush)
+				ibmveth_flush_buffer(skb->data,
+						length + offset);
 			skb = new_skb;
 			ibmveth_rxq_recycle_buffer(adapter);
 		} else {
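
For context: the heart of the patch is ibmveth_flush_buffer(), which walks
the receive buffer in SMP_CACHE_BYTES-sized steps and issues a PowerPC data
cache block flush ("dcbfl", a variant of the dcbf instruction) for each
cache line, so the buffer handed back to the hypervisor is not left sitting
in the CPU cache. The following is a minimal, hypothetical user-space sketch
of the same loop, for illustration only: flush_buffer() and CACHE_LINE_BYTES
are made-up names standing in for the driver's ibmveth_flush_buffer() and
the kernel's SMP_CACHE_BYTES, the 128-byte line size is an assumption not
taken from the patch, and the plain "dcbf" mnemonic is used in place of
"dcbfl". It builds only for a PowerPC target with gcc-style inline asm.

/* Hypothetical sketch of the flush loop (PowerPC only, gcc inline asm). */
#define CACHE_LINE_BYTES 128	/* stand-in for SMP_CACHE_BYTES; assumed value */

static inline void flush_buffer(void *addr, unsigned long length)
{
	unsigned long offset;

	/* Flush one cache line at a time over [addr, addr + length). */
	for (offset = 0; offset < length; offset += CACHE_LINE_BYTES)
		asm volatile("dcbf %0,%1" : : "b" (addr), "r" (offset) : "memory");
}

Since rx_flush is registered with module_param(rx_flush, uint, 0644), it can
be enabled when loading the driver (e.g. "modprobe ibmveth rx_flush=1") or
toggled afterwards through /sys/module/ibmveth/parameters/rx_flush; with the
default of 0 the flush path is never taken, so existing behaviour is
unchanged.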