Message-ID: <CACGkMEsDB+tMFEvRtyDAz83dkd9fpuh51u=KyUZkgh+gizmK7g@mail.gmail.com>
Date: Fri, 19 Apr 2024 13:46:25 +0800
From: Jason Wang <jasowang@...hat.com>
To: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Cc: virtualization@...ts.linux.dev, "Michael S. Tsirkin" <mst@...hat.com>, 
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, 
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org
Subject: Re: [PATCH vhost 4/6] virtio_net: big mode support premapped

On Fri, Apr 19, 2024 at 12:23 PM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
>
> On Fri, 19 Apr 2024 08:43:43 +0800, Jason Wang <jasowang@...hat.com> wrote:
> > On Thu, Apr 18, 2024 at 4:35 PM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
> > >
> > > On Thu, 18 Apr 2024 14:25:06 +0800, Jason Wang <jasowang@...hat.com> wrote:
> > > > On Thu, Apr 11, 2024 at 10:51 AM Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
> > > > >
> > > > > In big mode, pre-mapping DMA is beneficial because if the pages are not
> > > > > used, we can reuse them without needing to unmap and remap.
> > > > >
> > > > > We require space to store the DMA address. I use page.dma_addr, from the
> > > > > pp structure inside the page, to store the DMA address.
> > > > >
> > > > > Every page retrieved from get_a_page() is mapped, and its DMA address is
> > > > > stored in page.dma_addr. When a page is returned to the chain, we check
> > > > > the DMA status; if it is not mapped (potentially having been unmapped),
> > > > > we remap it before returning it to the chain.
> > > > >
> > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
> > > > > ---
> > > > >  drivers/net/virtio_net.c | 98 +++++++++++++++++++++++++++++++++-------
> > > > >  1 file changed, 81 insertions(+), 17 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > index 4446fb54de6d..7ea7e9bcd5d7 100644
> > > > > --- a/drivers/net/virtio_net.c
> > > > > +++ b/drivers/net/virtio_net.c
> > > > > @@ -50,6 +50,7 @@ module_param(napi_tx, bool, 0644);
> > > > >
> > > > >  #define page_chain_next(p)     ((struct page *)((p)->pp))
> > > > >  #define page_chain_add(p, n)   ((p)->pp = (void *)n)
> > > > > +#define page_dma_addr(p)       ((p)->dma_addr)
> > > > >
> > > > >  /* RX packet size EWMA. The average packet size is used to determine the packet
> > > > >   * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > > > > @@ -434,6 +435,46 @@ skb_vnet_common_hdr(struct sk_buff *skb)
> > > > >         return (struct virtio_net_common_hdr *)skb->cb;
> > > > >  }
> > > > >
> > > > > +static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
> > > > > +{
> > > > > +       sg->dma_address = addr;
> > > > > +       sg->length = len;
> > > > > +}
> > > > > +
> > > > > +static void page_chain_unmap(struct receive_queue *rq, struct page *p)
> > > > > +{
> > > > > +       virtqueue_dma_unmap_page_attrs(rq->vq, page_dma_addr(p), PAGE_SIZE,
> > > > > +                                      DMA_FROM_DEVICE, 0);
> > > > > +
> > > > > +       page_dma_addr(p) = DMA_MAPPING_ERROR;
> > > > > +}
> > > > > +
> > > > > +static int page_chain_map(struct receive_queue *rq, struct page *p)
> > > > > +{
> > > > > +       dma_addr_t addr;
> > > > > +
> > > > > +       addr = virtqueue_dma_map_page_attrs(rq->vq, p, 0, PAGE_SIZE, DMA_FROM_DEVICE, 0);
> > > > > +       if (virtqueue_dma_mapping_error(rq->vq, addr))
> > > > > +               return -ENOMEM;
> > > > > +
> > > > > +       page_dma_addr(p) = addr;
> > > > > +       return 0;
> > > > > +}
> > > > > +
> > > > > +static void page_chain_release(struct receive_queue *rq)
> > > > > +{
> > > > > +       struct page *p, *n;
> > > > > +
> > > > > +       for (p = rq->pages; p; p = n) {
> > > > > +               n = page_chain_next(p);
> > > > > +
> > > > > +               page_chain_unmap(rq, p);
> > > > > +               __free_pages(p, 0);
> > > > > +       }
> > > > > +
> > > > > +       rq->pages = NULL;
> > > > > +}
> > > > > +
> > > > >  /*
> > > > >   * put the whole most recent used list in the beginning for reuse
> > > > >   */
> > > > > @@ -441,6 +482,13 @@ static void give_pages(struct receive_queue *rq, struct page *page)
> > > > >  {
> > > > >         struct page *end;
> > > > >
> > > > > +       if (page_dma_addr(page) == DMA_MAPPING_ERROR) {
> > > >
> > > > This looks strange, the map should be done during allocation. Under
> > > > which condition could we hit this?
> > >
> > > The first page is unmapped before we call page_to_skb().
> > > The page can be put back to the chain in case of failure.
> >
> > See below.
> >
> > >
> > >
> > > >
> > > > > +               if (page_chain_map(rq, page)) {
> > > > > +                       __free_pages(page, 0);
> > > > > +                       return;
> > > > > +               }
> > > > > +       }
> > > > > +
> > > > >         /* Find end of list, sew whole thing into vi->rq.pages. */
> > > > >         for (end = page; page_chain_next(end); end = page_chain_next(end));
> > > > >
> > > > > @@ -456,8 +504,15 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
> > > > >                 rq->pages = page_chain_next(p);
> > > > >                 /* clear chain here, it is used to chain pages */
> > > > >                 page_chain_add(p, NULL);
> > > > > -       } else
> > > > > +       } else {
> > > > >                 p = alloc_page(gfp_mask);
> > > > > +
> > > > > +               if (page_chain_map(rq, p)) {
> > > > > +                       __free_pages(p, 0);
> > > > > +                       return NULL;
> > > > > +               }
> > > > > +       }
> > > > > +
> > > > >         return p;
> > > > >  }
> > > > >
> > > > > @@ -613,8 +668,6 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> > > > >                         return NULL;
> > > > >
> > > > >                 page = page_chain_next(page);
> > > > > -               if (page)
> > > > > -                       give_pages(rq, page);
> > > > >                 goto ok;
> > > > >         }
> > > > >
> > > > > @@ -640,6 +693,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> > > > >                         skb_add_rx_frag(skb, 0, page, offset, len, truesize);
> > > > >                 else
> > > > >                         page_to_free = page;
> > > > > +               page = NULL;
> > > > >                 goto ok;
> > > > >         }
> > > > >
> > > > > @@ -657,6 +711,11 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> > > > >         BUG_ON(offset >= PAGE_SIZE);
> > > > >         while (len) {
> > > > >                 unsigned int frag_size = min((unsigned)PAGE_SIZE - offset, len);
> > > > > +
> > > > > +               /* unmap the page before using it. */
> > > > > +               if (!offset)
> > > > > +                       page_chain_unmap(rq, page);
> > > > > +
> > > >
> > > > This sounds strange, do we need a virtqueue_sync_for_cpu() helper here?
> > >
> > > I think we do not need that, because the unmap API does the sync.
> > > We do not use DMA_SKIP_SYNC.
> >
> > Well, the problem is that unmap is too heavyweight, and it defeats the
> > effort of avoiding map/unmaps as much as possible.
> >
> > For example, in most cases a DMA sync is just a nop. And such an
> > unmap() causes the strange code in give_pages() that we discussed above?
>
> Yes, you are right. For the first page, we just need to sync for CPU,
> and we do not need to check the DMA status.
> But here (in page_to_skb), we need to call unmap, because this page is
> put into the skb.

Right, but the issue remains:

The only case where we may hit

        if (page_dma_addr(page) == DMA_MAPPING_ERROR)

is when the packet is smaller than GOOD_COPY_LEN.

So if we sync_for_cpu for the head page, could we:

1) skip the unmap in receive_big()
2) do sync_for_cpu() just before skb_put_data(), so the page could be
recycled to the pool without unmapping?
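
Something like this, just a rough sketch to show the idea (not even
compile tested, and it assumes virtqueue_dma_sync_single_range_for_cpu()
can be used with the addresses mapped by this series):

/* rough sketch: sync the head page for the CPU instead of unmapping it,
 * so it can go back to rq->pages while still mapped
 */
static void page_chain_sync_for_cpu(struct receive_queue *rq, struct page *p)
{
        virtqueue_dma_sync_single_range_for_cpu(rq->vq, page_dma_addr(p), 0,
                                                PAGE_SIZE, DMA_FROM_DEVICE);
}

Then receive_big()/page_to_skb() could call this instead of
page_chain_unmap() before skb_put_data(), and give_pages() would no
longer need the DMA_MAPPING_ERROR check.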

And I think we should do something similar for the mergeable case?

Btw, I found a misleading comment introduced by f80bd740cb7c9:

        /* copy small packet so we can reuse these pages */
        if (!NET_IP_ALIGN && len > GOOD_COPY_LEN && tailroom >= shinfo_size) {

We're not copying but building an skb around the head page.
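
Maybe just reword it, something like (only a suggestion):

        /* Build the skb around the head page instead of copying it. */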

Thanks

>
> Thanks.
>
>
> >
> > Thanks
> >
>

