Message-ID: <1675407676.377156-1-xuanzhuo@linux.alibaba.com>
Date:   Fri, 3 Feb 2023 15:01:16 +0800
From:   Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
To:     Magnus Karlsson <magnus.karlsson@...il.com>
Cc:     netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Björn Töpel <bjorn@...nel.org>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Menglong Dong <imagedong@...cent.com>,
        Kuniyuki Iwashima <kuniyu@...zon.com>,
        Petr Machata <petrm@...dia.com>,
        virtualization@...ts.linux-foundation.org, bpf@...r.kernel.org
Subject: Re: [PATCH 09/33] xsk: xsk_buff_pool add callback for dma_sync

On Thu, 2 Feb 2023 13:51:20 +0100, Magnus Karlsson <magnus.karlsson@...il.com> wrote:
> On Thu, 2 Feb 2023 at 12:05, Xuan Zhuo <xuanzhuo@...ux.alibaba.com> wrote:
> >
> > Use callbacks to implement dma sync, to simplify subsequent support
> > for virtio dma sync.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
> > ---
> >  include/net/xsk_buff_pool.h |  6 ++++++
> >  net/xdp/xsk_buff_pool.c     | 24 ++++++++++++++++++++----
> >  2 files changed, 26 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
> > index 3e952e569418..53b681120354 100644
> > --- a/include/net/xsk_buff_pool.h
> > +++ b/include/net/xsk_buff_pool.h
> > @@ -75,6 +75,12 @@ struct xsk_buff_pool {
> >         u32 chunk_size;
> >         u32 chunk_shift;
> >         u32 frame_len;
> > +       void (*dma_sync_for_cpu)(struct device *dev, dma_addr_t addr,
> > +                                unsigned long offset, size_t size,
> > +                                enum dma_data_direction dir);
> > +       void (*dma_sync_for_device)(struct device *dev, dma_addr_t addr,
> > +                                   unsigned long offset, size_t size,
> > +                                   enum dma_data_direction dir);
>
> If we put these two pointers here, the number of cache lines required
> in the data path for this struct will increase from 2 to 3, which
> will likely affect performance negatively. These sync operations are
> also not used on most systems. So how about we put them in the first
> section of this struct, labeled "Members only used in the control
> path first.", instead? There is a 26-byte hole at the end of it that
> can be used.


Will fix.

Thanks.
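
Something like the following on top of this patch, assuming the
control-path section still ends with heads_cnt/queue_id as in current
mainline (hunk position elided; context lines are illustrative):

diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ ... @@ struct xsk_buff_pool {
 	u32 heads_cnt;
 	u16 queue_id;
 
+	void (*dma_sync_for_cpu)(struct device *dev, dma_addr_t addr,
+				 unsigned long offset, size_t size,
+				 enum dma_data_direction dir);
+	void (*dma_sync_for_device)(struct device *dev, dma_addr_t addr,
+				    unsigned long offset, size_t size,
+				    enum dma_data_direction dir);
+
 	/* Data path members as close to free_heads at the end as possible. */
 	struct xsk_queue *fq ____cacheline_aligned_in_smp;

That keeps the pointers out of the hot cache lines, since they are only
read on the xp_dma_sync_*_slow() paths anyway.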


>
> >         u8 cached_need_wakeup;
> >         bool uses_need_wakeup;
> >         bool dma_need_sync;
> > diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> > index ed6c71826d31..78e325e195fa 100644
> > --- a/net/xdp/xsk_buff_pool.c
> > +++ b/net/xdp/xsk_buff_pool.c
> > @@ -403,6 +403,20 @@ static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_
> >         return 0;
> >  }
> >
> > +static void dma_sync_for_cpu(struct device *dev, dma_addr_t addr,
> > +                            unsigned long offset, size_t size,
> > +                            enum dma_data_direction dir)
> > +{
> > +       dma_sync_single_range_for_cpu(dev, addr, offset, size, dir);
> > +}
> > +
> > +static void dma_sync_for_device(struct device *dev, dma_addr_t addr,
> > +                               unsigned long offset, size_t size,
> > +                               enum dma_data_direction dir)
> > +{
> > +       dma_sync_single_range_for_device(dev, addr, offset, size, dir);
> > +}
> > +
> >  int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
> >                unsigned long attrs, struct page **pages, u32 nr_pages)
> >  {
> > @@ -421,6 +435,9 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
> >                 return 0;
> >         }
> >
> > +       pool->dma_sync_for_cpu = dma_sync_for_cpu;
> > +       pool->dma_sync_for_device = dma_sync_for_device;
> > +
> >         dma_map = xp_create_dma_map(dev, pool->netdev, nr_pages, pool->umem);
> >         if (!dma_map)
> >                 return -ENOMEM;
> > @@ -667,15 +684,14 @@ EXPORT_SYMBOL(xp_raw_get_dma);
> >
> >  void xp_dma_sync_for_cpu_slow(struct xdp_buff_xsk *xskb)
> >  {
> > -       dma_sync_single_range_for_cpu(xskb->pool->dev, xskb->dma, 0,
> > -                                     xskb->pool->frame_len, DMA_BIDIRECTIONAL);
> > +       xskb->pool->dma_sync_for_cpu(xskb->pool->dev, xskb->dma, 0,
> > +                                    xskb->pool->frame_len, DMA_BIDIRECTIONAL);
> >  }
> >  EXPORT_SYMBOL(xp_dma_sync_for_cpu_slow);
> >
> >  void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma,
> >                                  size_t size)
> >  {
> > -       dma_sync_single_range_for_device(pool->dev, dma, 0,
> > -                                        size, DMA_BIDIRECTIONAL);
> > +       pool->dma_sync_for_device(pool->dev, dma, 0, size, DMA_BIDIRECTIONAL);
> >  }
> >  EXPORT_SYMBOL(xp_dma_sync_for_device_slow);
> > --
> > 2.32.0.3.g01195cf9f
> >
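For context on the "subsequent support for virtio dma sync" that the
commit message refers to: the idea, as I understand it, is that virtio
cannot always go through the generic DMA API, so a virtio pool would
install its own callbacks after xp_dma_map() has set the
dma_sync_single_range_for_*() defaults. A minimal hypothetical sketch;
the virtnet_xsk_* names below are assumed for illustration and are not
existing kernel API:

#include <linux/dma-mapping.h>
#include <net/xsk_buff_pool.h>

static void virtnet_xsk_dma_sync_for_cpu(struct device *dev, dma_addr_t addr,
					 unsigned long offset, size_t size,
					 enum dma_data_direction dir)
{
	/* Would resolve the virtio mapping for this address range and
	 * sync it for CPU access via a virtio-aware helper; a no-op if
	 * the device bypasses the platform DMA API.
	 */
}

static void virtnet_xsk_dma_sync_for_device(struct device *dev, dma_addr_t addr,
					    unsigned long offset, size_t size,
					    enum dma_data_direction dir)
{
	/* Mirror of the above for device access. */
}

/* Called from the driver's XSK pool setup path, after xp_dma_map(). */
static void virtnet_xsk_pool_setup(struct xsk_buff_pool *pool)
{
	pool->dma_sync_for_cpu = virtnet_xsk_dma_sync_for_cpu;
	pool->dma_sync_for_device = virtnet_xsk_dma_sync_for_device;
}

With that in place, xp_dma_sync_for_cpu_slow() and
xp_dma_sync_for_device_slow() would transparently dispatch to the
virtio-specific helpers.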
