Message-Id: <20150219.142531.1581772172061951693.davem@davemloft.net>
Date:	Thu, 19 Feb 2015 14:25:31 -0500 (EST)
From:	David Miller <davem@...emloft.net>
To:	_govind@....com
Cc:	netdev@...r.kernel.org, benve@...co.com, ssujith@...co.com
Subject: Re: [PATCH net-next v2 1/2] enic: implement frag allocator

From: Govindarajulu Varadarajan <_govind@....com>
Date: Wed, 11 Feb 2015 18:29:17 +0530

> This patch implements a frag allocator for the rq buffer. It is based
> on the __alloc_page_frag & __page_frag_refill implementation in
> net/core/skbuff.c.
> 
> In addition to allocating frags from an order-3 page as
> __alloc_page_frag does, we also maintain the dma address of the page.
> When allocating a frag for an rx buffer we return va + offset as the
> virtual address of the frag and pa + offset as its dma address. This
> cuts the number of dma_map() calls to roughly 1/3 for 9k MTU and 1/20
> for 1500 MTU (an order-3 page is 32k: it holds three 9k buffers, or on
> the order of twenty 1500-byte ones).
> 
> __alloc_page_frag is limited to a max buffer size of PAGE_SIZE, i.e.
> 4096 in most cases. So a 9k buffer allocation goes through kmalloc,
> which returns an order-2 (16k) allocation; we waste 7k for every 9k
> buffer.
> 
> We maintain a dma_count variable which is incremented when we allocate
> a frag. enic_unmap_dma() decrements dma_count and unmaps the page once
> the rx ring has no remaining users of it.
> 
> This reduces memory utilization for 9k MTU by 33%.
> 
> Signed-off-by: Govindarajulu Varadarajan <_govind@....com>

This is a nice optimization, but it is definitely useful for other
drivers, not just yours.  And there isn't anything that really keeps it
from being implemented somewhere generic.
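
For other readers, the idea in the changelog boils down to roughly this
(an illustrative sketch, not the exact patch code; names like buf->va
and buf->dma are made up here):

	/* One order-3 page is dma-mapped once; every rx buffer is then
	 * handed a frag (an offset) within that page.
	 */
	buf->va  = ec->va + offset;	/* frag virtual address */
	buf->dma = ec->pa + offset;	/* frag dma address */
	ec->dma_count++;		/* one more user of the page */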

> +#define ENIC_ALLOC_ORDER	PAGE_ALLOC_COSTLY_ORDER

You talk about order-3 but then use PAGE_ALLOC_COSTLY_ORDER, which in
theory could change in the future.

But, in any event, there is no reason not to use NETDEV_FRAG_PAGE_MAX_ORDER,
just like __alloc_page_frag() does.
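
In other words (assuming the definition is made visible outside
net/core/skbuff.c, where it currently lives):

	/* Match __alloc_page_frag(): try order-3 (32k) pages first. */
	#define ENIC_ALLOC_ORDER	NETDEV_FRAG_PAGE_MAX_ORDER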

> +struct enic_alloc_cache {
> +	struct page_frag	frag;
> +	unsigned int		pagecnt_bias;
> +	int			dma_count;
> +	void			*va;
> +	dma_addr_t		pa;
> +};

Make this a generic structure, perhaps named something like
"netdev_dma_alloc_cache".

'pa' is not a good name for a DMA address, because it is not (necessarily)
a physical address.  It could be a virtual address translated by an IOMMU.
"dma_addr" is probably therefore a better member name.

In the generic version the driver will have to pass in a pointer to the
"netdev_dma_alloc_cache".  I would suggest having this embedded in the
driver per-queue structure rather than being allocated dynamically.

Then you can provide a netdev_dma_alloc_cache_init() the driver can
call which initializes this embedded object.
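
Roughly something like this (a sketch only; the exact layout and
signature are up to you):

	struct netdev_dma_alloc_cache {
		struct page_frag	frag;
		unsigned int		pagecnt_bias;
		int			dma_count;
		void			*va;
		dma_addr_t		dma_addr;	/* rather than "pa" */
	};

	/* Embedded in the driver's per-queue structure and initialized
	 * once at queue setup time.
	 */
	void netdev_dma_alloc_cache_init(struct netdev_dma_alloc_cache *nc);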

> +	ec->pa = pci_map_single(enic->pdev, ec->va, ec->frag.size,
> +				PCI_DMA_FROMDEVICE);

Next, these need to be converted to dma_*() calls, and the interface
for netdev_dma_alloc_cache() will need to have a "struct device *"
argument for these calls.
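
I.e. something like this, where "dev" is whatever struct device the
driver handed to the cache at init time (sketch only):

	nc->dma_addr = dma_map_single(dev, nc->va, nc->frag.size,
				      DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, nc->dma_addr))
		goto refill_failed;	/* illustrative error path */

And dma_mapping_error() gives you a mapping-failure check, which the
quoted pci_map_single() call doesn't appear to have.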

> @@ -199,6 +200,18 @@ void vnic_rq_clean(struct vnic_rq *rq,
>  		rq->ring.desc_avail++;
>  	}
>  
> +	if (rq->ec) {
> +		struct enic *enic = vnic_dev_priv(rq->vdev);
> +		struct enic_alloc_cache *ec = rq->ec;
> +
> +		WARN_ON(ec->dma_count);
> +		pci_unmap_single(enic->pdev, ec->pa, ec->frag.size,
> +				 PCI_DMA_FROMDEVICE);
> +		atomic_sub(ec->pagecnt_bias - 1, &ec->frag.page->_count);
> +		__free_pages(ec->frag.page, get_order(ec->frag.size));
> +		kfree(ec);
> +		rq->ec = NULL;
> +	}
>  	/* Use current fetch_index as the ring starting point */
>  	fetch_index = ioread32(&rq->ctrl->fetch_index);
>  

Finally, you'll need to define a "netdev_dma_alloc_cache_destroy()"
function which you'll call from here.
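Something along these lines; it is just the teardown above moved behind
a generic helper (sketch only):

	void netdev_dma_alloc_cache_destroy(struct device *dev,
					    struct netdev_dma_alloc_cache *nc)
	{
		if (!nc->frag.page)
			return;

		WARN_ON(nc->dma_count);
		dma_unmap_single(dev, nc->dma_addr, nc->frag.size,
				 DMA_FROM_DEVICE);
		/* Drop the references we still hold on the page. */
		atomic_sub(nc->pagecnt_bias - 1, &nc->frag.page->_count);
		__free_pages(nc->frag.page, get_order(nc->frag.size));
		nc->frag.page = NULL;
	}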