Date: Mon, 24 Jul 2023 17:44:50 +0800
From: Liang Chen <liangchen.linux@...il.com>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, 
	pabeni@...hat.com, hawk@...nel.org, ilias.apalodimas@...aro.org, 
	daniel@...earbox.net, ast@...nel.org, netdev@...r.kernel.org
Subject: Re: [RFC PATCH net-next 2/2] net: veth: Improving page pool recycling

On Fri, Jul 21, 2023 at 8:18 PM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>
> On 2023/7/19 15:29, Liang Chen wrote:
>
> ...
>
> >
> > The reason is that some skbs received from the veth peer are not page
> > pool pages, and remain so after conversion to an xdp frame. In order not
> > to confuse __xdp_return with a mix of regular pages and page pool pages,
> > they are all converted to regular pages, so registering the xdp memory
> > model as MEM_TYPE_PAGE_SHARED is sufficient.
> >
> > If we replace the above code with kfree_skb_partial, directly releasing
> > the skb data structure, we can retain the original page pool page
> > behavior. However, simply changing the xdp memory model to
> > MEM_TYPE_PAGE_POOL is not a solution, as explained above. Therefore, we
> > introduced an additional MEM_TYPE_PAGE_POOL model for each rq.
> >
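> > For illustration, the per-rq registration could look roughly like
> > the following untested sketch (xdp_reg_mem_model() is the existing
> > helper in net/core/xdp.c; rq->xdp_mem_pp is the field added by this
> > patch, and rq->page_pool is assumed to be the rq's own pool):
> >
> >     /* Register a second, per-rq memory model so frames built from
> >      * page pool pages can carry MEM_TYPE_PAGE_POOL, while everything
> >      * else keeps using MEM_TYPE_PAGE_SHARED.
> >      */
> >     err = xdp_reg_mem_model(&rq->xdp_mem_pp, MEM_TYPE_PAGE_POOL,
> >                             rq->page_pool);
> >     if (err)
> >         return err;
> >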
>
> ...
>
> > @@ -874,9 +862,9 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
> >               rcu_read_unlock();
> >               goto xdp_xmit;
> >       case XDP_REDIRECT:
> > -             veth_xdp_get(xdp);
> > -             consume_skb(skb);
> > -             xdp->rxq->mem = rq->xdp_mem;
> > +             xdp->rxq->mem = skb->pp_recycle ? rq->xdp_mem_pp : rq->xdp_mem;
>
> I am not really familiar with veth, so a couple of questions here:
> Is it possible that skbs received from the veth peer are also page pool pages?
> Does using the local rq->xdp_mem_pp for a page allocated from the veth peer
> cause problems here? As there are a type and an id for a specific page_pool
> instance, the type may be the same, but I suppose the id is not the same for
> veth and its veth peer.
>

Yeah, I understand your concern. If an skb uses a page pool page whose
pool has previously been registered with an xdp memory model, veth will
compose an xdp frame whose xdp_mem_info.id field claims the buffer comes
from the local xdp_mem_pp pool, while the page structure itself refers
to the other pool from which it was originally allocated. This mismatch
may cause problems for things like xdp_return_frame_bulk. We will
address it in V2. Thank you for bringing up this issue.
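
To make the failure mode concrete, here is a simplified, untested
sketch of the bulk-return logic (loosely modeled on
xdp_return_frame_bulk() in net/core/xdp.c, with details and error
handling elided):

    static void return_frame_bulk_sketch(struct xdp_frame *xdpf,
                                         struct xdp_frame_bulk *bq)
    {
        struct xdp_mem_info *mem = &xdpf->mem;
        struct xdp_mem_allocator *xa = bq->xa;

        /* The page_pool allocator is looked up by mem->id alone. */
        if (!xa || xa->mem.id != mem->id)
            xa = rhashtable_lookup(mem_id_ht, &mem->id,
                                   mem_id_rht_params);

        /* Every page in the frame is then assumed to belong to that
         * pool, so a page that was actually allocated from the veth
         * peer's pool would be recycled into the wrong pool.
         */
        bq->xa = xa;
        bq->q[bq->count++] = xdpf->data;
    }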


Thanks,
Liang

> > +             kfree_skb_partial(skb, true);
> > +
