Message-ID: <20250925191219.13a29106@kernel.org>
Date: Thu, 25 Sep 2025 19:12:19 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: Octavian Purdila <tavip@...gle.com>, <davem@...emloft.net>,
 <edumazet@...gle.com>, <pabeni@...hat.com>, <horms@...nel.org>,
 <ast@...nel.org>, <daniel@...earbox.net>, <hawk@...nel.org>,
 <john.fastabend@...il.com>, <sdf@...ichev.me>, <ahmed.zaki@...el.com>,
 <aleksander.lobakin@...el.com>, <toke@...hat.com>, <lorenzo@...nel.org>,
 <netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
 <syzbot+ff145014d6b0ce64a173@...kaller.appspotmail.com>, Kuniyuki Iwashima
 <kuniyu@...gle.com>
Subject: Re: [PATCH net] xdp: use multi-buff only if receive queue supports
 page pool

On Thu, 25 Sep 2025 11:42:04 +0200 Maciej Fijalkowski wrote:
> On Thu, Sep 25, 2025 at 12:53:53AM -0700, Octavian Purdila wrote:
> > On Wed, Sep 24, 2025 at 5:09 PM Jakub Kicinski <kuba@...nel.org> wrote:  
> > >
> > > On Wed, 24 Sep 2025 06:08:42 +0000 Octavian Purdila wrote:  
>  [...]  
> > >
> > > This can also happen on veth, right? And veth re-stamps the Rx queues.  
> 
> What do you mean by 're-stamps' in this case?
> 
> > 
> > I am not sure if re-stamping will have ill effects.
> > 
> > The allocation and deallocation for this issue happen while
> > processing the same packet (receive skb -> skb_pp_cow_data ->
> > page_pool alloc ... __bpf_prog_run -> bpf_xdp_adjust_tail).
> > 
> > IIUC, if veth re-stamps the RX queue to MEM_TYPE_PAGE_POOL,
> > skb_pp_cow_data will proceed to allocate from page_pool and
> > bpf_xdp_adjust_tail will correctly free from page_pool.  
> 
> netif_get_rxqueue() gives you a pointer to the netstack queue, not the
> driver one. Then you take the xdp_rxq from there. Do we even register a
> memory model for these queues? Or am I missing something here?
> 
> We're in the generic XDP hook, where driver specifics should not matter,
> IMHO.
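
For context on the "register a memory model" question, here is a rough,
from-memory sketch (not code from this thread or its patch; the function
name is made up) of how a driver-owned rx queue normally gets a page_pool
memory model attached. The open question above is whether the
netstack-side queues returned by netif_get_rxqueue() ever get the
equivalent registration.

#include <net/xdp.h>
#include <net/page_pool/types.h>

/* Sketch only: register a per-queue xdp_rxq_info and tie it to a
 * page_pool, so later frag frees (e.g. from bpf_xdp_adjust_tail())
 * know to recycle pages back into that pool.
 */
static int example_reg_rxq_mem_model(struct net_device *dev, u32 qid,
				     struct xdp_rxq_info *xdp_rxq,
				     struct page_pool *pp)
{
	int err;

	err = xdp_rxq_info_reg(xdp_rxq, dev, qid, 0);
	if (err)
		return err;

	return xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pp);
}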

Well, IDK how helpful the flow below would be, but:

veth_xdp_xmit() -> [ptr ring] -> veth_xdp_rcv() -> veth_xdp_rcv_one() 
                                                               |
                            | xdp_convert_frame_to_buff()   <-'
    ( "re-stamps" ;) ->     | xdp->rxq = &rq->xdp_rxq;
  can eat frags but now rxq | bpf_prog_run_xdp()
         is veth's          |

I just glanced at the code so there's a >50% chance I'm wrong, but that's
what I meant.
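
To make the "re-stamp" step concrete, a from-memory sketch of its shape
(see veth_xdp_rcv_one() in drivers/net/veth.c for the real code; the
helper name and the standalone rxq parameter below are mine, not veth's):

#include <linux/filter.h>
#include <net/xdp.h>

/* Sketch only: the frame pulled off veth's ptr ring is converted back
 * into an xdp_buff, and its rxq pointer is set to veth's own per-queue
 * xdp_rxq_info before the program runs. Any frag shrinking the program
 * does is therefore judged against veth's memory model, not the
 * original driver's.
 */
static u32 veth_restamp_sketch(struct xdp_frame *frame,
			       struct xdp_rxq_info *veth_rxq,
			       struct bpf_prog *xdp_prog)
{
	struct xdp_buff xdp;

	xdp_convert_frame_to_buff(frame, &xdp);
	xdp.rxq = veth_rxq;	/* the "re-stamp" */

	return bpf_prog_run_xdp(xdp_prog, &xdp);
}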
