Message-ID: <20200123062017.3cbefe70@cakuba>
Date: Thu, 23 Jan 2020 06:20:17 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Sunil Kovvuri <sunil.kovvuri@...il.com>
Cc: Linux Netdev List <netdev@...r.kernel.org>,
"David S. Miller" <davem@...emloft.net>,
Michal Kubecek <mkubecek@...e.cz>,
Sunil Goutham <sgoutham@...vell.com>,
Geetha sowjanya <gakula@...vell.com>
Subject: Re: [PATCH v4 04/17] octeontx2-pf: Initialize and config queues
On Thu, 23 Jan 2020 00:59:54 +0530, Sunil Kovvuri wrote:
> On Tue, Jan 21, 2020 at 9:31 PM Jakub Kicinski <kuba@...nel.org> wrote:
> > On Tue, 21 Jan 2020 18:51:38 +0530, sunil.kovvuri@...il.com wrote:
> > > +dma_addr_t otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
> > > + gfp_t gfp)
> > > +{
> > > + dma_addr_t iova;
> > > +
> > > + /* Check if request can be accommodated in previous allocated page */
> > > + if (pool->page &&
> > > + ((pool->page_offset + pool->rbsize) <= PAGE_SIZE)) {
You use straight PAGE_SIZE here
> > > + pool->pageref++;
> > > + goto ret;
> > > + }
> > > +
> > > + otx2_get_page(pool);
> > > +
> > > + /* Allocate a new page */
> > > + pool->page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN,
> > > + pool->rbpage_order);
but allocate with pool->rbpage_order (a sketch of a consistent check follows the quoted function)
> > > + if (unlikely(!pool->page))
> > > + return -ENOMEM;
> > > +
> > > + pool->page_offset = 0;
> > > +ret:
> > > + iova = (u64)otx2_dma_map_page(pfvf, pool->page, pool->page_offset,
> > > + pool->rbsize, DMA_FROM_DEVICE);
> > > + if (!iova) {
> > > + if (!pool->page_offset)
> > > + __free_pages(pool->page, pool->rbpage_order);
> > > + pool->page = NULL;
> > > + return -ENOMEM;
> > > + }
> > > + pool->page_offset += pool->rbsize;
> > > + return iova;
> > > +}
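To make the mismatch concrete, here is a minimal sketch of a capacity
check that stays consistent with the order-based allocation. The field
names are the ones from the patch; otx2_pool_page_has_room() is a
made-up helper, not something the patch defines:

static bool otx2_pool_page_has_room(struct otx2_pool *pool)
{
	/* Size of the compound page that was actually allocated */
	size_t page_bytes = PAGE_SIZE << pool->rbpage_order;

	return pool->page &&
	       pool->page_offset + pool->rbsize <= page_bytes;
}

With a check like this the higher-order pages you allocate would also be
packed with receive buffers beyond the first PAGE_SIZE bytes.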
> >
> > You don't seem to be doing any page recycling if I'm reading this right.
> > Can't you use the standard in-kernel page frag allocator
> > (netdev_alloc_frag/napi_alloc_frag)?
>
> netdev_alloc_frag() is costly.
> e.g. it updates the page's refcount on every frag allocation.
It would be nice to see it improved rather than have each driver
implement a slight variation of the scheme :/
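For comparison, a rough sketch of a refill helper built on
napi_alloc_frag(); otx2_refill_rbuf() is a made-up name, pfvf->dev is
assumed to be the backing struct device, and 0 doubles as the error
value to match the !iova check in the patch:

static dma_addr_t otx2_refill_rbuf(struct otx2_nic *pfvf, size_t rbsize)
{
	dma_addr_t iova;
	void *buf;

	/* Grab an rbsize slice of a page managed by the core frag allocator */
	buf = napi_alloc_frag(rbsize);
	if (unlikely(!buf))
		return 0;

	iova = dma_map_single(pfvf->dev, buf, rbsize, DMA_FROM_DEVICE);
	if (unlikely(dma_mapping_error(pfvf->dev, iova))) {
		skb_free_frag(buf);
		return 0;
	}
	return iova;
}

Page lifetime (refcounting, reuse across allocations) would then be the
core's problem rather than per-driver code.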