Date:   Tue, 4 Apr 2023 10:20:21 +0200
From:   Kal Cutter Conley <kal.conley@...tris.com>
To:     Magnus Karlsson <magnus.karlsson@...il.com>
Cc:     Björn Töpel <bjorn@...nel.org>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        netdev@...r.kernel.org, bpf@...r.kernel.org,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 08/10] xsk: Support UMEM chunk_size > PAGE_SIZE

> Isn't the max 64K, since you test against XDP_UMEM_MAX_CHUNK_SIZE in
> xdp_umem_reg()?

The absolute max is 64K. In the case where HPAGE_SIZE < 64K, the max
would be HPAGE_SIZE.
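
In other words (a rough sketch, not the patch code; the helper name is
made up, and the no-hugetlb fallback assumes chunks stay within one
page as before):

static u32 xdp_umem_chunk_size_max(void)
{
#ifdef CONFIG_HUGETLB_PAGE
	/* Chunks above PAGE_SIZE are backed by huge pages, so the cap
	 * is 64K or HPAGE_SIZE, whichever is smaller.
	 */
	return min_t(u32, XDP_UMEM_MAX_CHUNK_SIZE, HPAGE_SIZE);
#else
	/* Assumption: without hugetlb, chunks cannot exceed one page. */
	return PAGE_SIZE;
#endif
}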

> > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > index e96a1151ec75..ed88880d4b68 100644
> > --- a/include/net/xdp_sock.h
> > +++ b/include/net/xdp_sock.h
> > @@ -28,6 +28,9 @@ struct xdp_umem {
> >         struct user_struct *user;
> >         refcount_t users;
> >         u8 flags;
> > +#ifdef CONFIG_HUGETLB_PAGE
>
> Sanity check: have you tried compiling your code without this config set?

Yes. The CI also compiles without it on one of the platforms (hence
some of the bot errors in v1).

> >  static int xdp_umem_pin_pages(struct xdp_umem *umem, unsigned long address)
> >  {
> > +#ifdef CONFIG_HUGETLB_PAGE
>
> Let us try to get rid of most of these #ifdefs sprinkled around the
> code. How about hiding this inside xdp_umem_is_hugetlb() and getting
> rid of these #ifdefs below? Since I believe it is quite uncommon not
> to have this config enabled, we could simplify things by always using
> the page_size in the pool, for example. And ditto for the one in
> struct xdp_umem. What do you think?

I used #ifdef for `page_size` in the pool to get maximum performance
when huge pages are disabled. Alternatively, we could skip optimizing
this uncommon case, since the performance impact is very small. That
said, I don't find the #ifdefs excessive either.
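
For illustration, a helper along those lines could confine the #ifdef
to a single place (a sketch of the suggestion above; the `hugetlb`
field name is my assumption, not necessarily what the patch uses):

static inline bool xdp_umem_is_hugetlb(const struct xdp_umem *umem)
{
#ifdef CONFIG_HUGETLB_PAGE
	/* Assumed field recording whether the umem is hugetlb-backed. */
	return umem->hugetlb;
#else
	return false;
#endif
}

Callers then just test xdp_umem_is_hugetlb(umem), and the compiler
folds the branch away when the config is off.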

> > +static void xp_check_dma_contiguity(struct xsk_dma_map *dma_map, u32 page_size)
> >  {
> > -       u32 i;
> > +       u32 stride = page_size >> PAGE_SHIFT; /* in order-0 pages */
> > +       u32 i, j;
> >
> > -       for (i = 0; i < dma_map->dma_pages_cnt - 1; i++) {
> > -               if (dma_map->dma_pages[i] + PAGE_SIZE == dma_map->dma_pages[i + 1])
> > -                       dma_map->dma_pages[i] |= XSK_NEXT_PG_CONTIG_MASK;
> > -               else
> > -                       dma_map->dma_pages[i] &= ~XSK_NEXT_PG_CONTIG_MASK;
> > +       for (i = 0; i + stride < dma_map->dma_pages_cnt;) {
> > +               if (dma_map->dma_pages[i] + page_size == dma_map->dma_pages[i + stride]) {
> > +                       for (j = 0; j < stride; i++, j++)
> > +                               dma_map->dma_pages[i] |= XSK_NEXT_PG_CONTIG_MASK;
> > +               } else {
> > +                       for (j = 0; j < stride; i++, j++)
> > +                               dma_map->dma_pages[i] &= ~XSK_NEXT_PG_CONTIG_MASK;
> > +               }
>
> Still somewhat too conservative :-). If your page size is large, you
> will waste a lot of the umem. For the last page, mark all the 4K
> "pages" that cannot cross the end of the umem (due to the max size of
> a packet) with the XSK_NEXT_PG_CONTIG_MASK bit. Then you only need to
> add one more for-loop here to mark this, and adjust the last for-loop
> below so it only marks the last bunch of 4K pages at the end of the
> umem as not contiguous.

I don't understand the issue. The XSK_NEXT_PG_CONTIG_MASK bit is only
looked at if the descriptor actually crosses a page boundary. I don't
think the current implementation wastes any UMEM.
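
For reference, the consumer-side check looks roughly like this in
mainline (simplified from xp_desc_crosses_non_contig_pg() in
include/net/xsk_buff_pool.h); note the contiguity bit is only read
once we know the descriptor spans a 4K boundary:

static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
						 u64 addr, u32 len)
{
	/* Fast path: the descriptor fits inside one order-0 page. */
	bool cross_pg = (addr & (PAGE_SIZE - 1)) + len > PAGE_SIZE;

	if (likely(!cross_pg))
		return false;

	/* Only now is XSK_NEXT_PG_CONTIG_MASK consulted. */
	return pool->dma_pages_cnt &&
	       !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK);
}

So pages whose bit is left clear cost nothing unless a descriptor
actually straddles them.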
