Message-ID:
<PAXPR04MB8510A5B9BF41C51BE8945D158893A@PAXPR04MB8510.eurprd04.prod.outlook.com>
Date: Mon, 26 Jan 2026 02:22:57 +0000
From: Wei Fang <wei.fang@....com>
To: Jakub Kicinski <kuba@...nel.org>
CC: "john.fastabend@...il.com" <john.fastabend@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Shenwei Wang
<shenwei.wang@....com>, "daniel@...earbox.net" <daniel@...earbox.net>, Frank
Li <frank.li@....com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Clark Wang <xiaoning.wang@....com>, "ast@...nel.org" <ast@...nel.org>,
"sdf@...ichev.me" <sdf@...ichev.me>, "imx@...ts.linux.dev"
<imx@...ts.linux.dev>, "davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>, "hawk@...nel.org"
<hawk@...nel.org>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, "andrew+netdev@...n.ch"
<andrew+netdev@...n.ch>
Subject: RE: [v4,net-next,12/15] net: fec: move xdp_rxq_info* APIs out of
fec_enet_create_page_pool()
> > +static int
> > +fec_enet_alloc_rxq_buffers(struct net_device *ndev, unsigned int queue)
> > +{
> > + struct fec_enet_private *fep = netdev_priv(ndev);
> > + struct fec_enet_priv_rx_q *rxq;
> > + int err;
> > +
> > + rxq = fep->rx_queue[queue];
> > + err = fec_alloc_rxq_buffers_pp(fep, rxq);
> > + if (err)
> > + return err;
> > +
> > + err = fec_xdp_rxq_info_reg(fep, rxq);
> > + if (err) {
> > + fec_free_rxq_buffers(rxq);
> > +
> > + return err;
> > + }
> > +
> > + return 0;
> > +}
>
> Could there be a memory leak here on systems with multiple RX queues? The
> old code had:
>
> err_alloc:
> fec_enet_free_buffers(ndev);
> return -ENOMEM;
>
> which freed all queues (both RX and TX). The new code calls
> fec_free_rxq_buffers(rxq), which only frees the current queue.
>
> If fec_enet_alloc_buffers() calls this function in a loop for multiple
> queues and queue N fails after queues 0 through N-1 succeeded, only queue
> N gets cleaned up. The caller fec_enet_alloc_buffers() just returns
> -ENOMEM without freeing the previously allocated queues:
>
> for (i = 0; i < fep->num_rx_queues; i++)
> if (fec_enet_alloc_rxq_buffers(ndev, i))
> return -ENOMEM;
>
> And fec_enet_open() jumps to err_enet_alloc which does not call
> fec_enet_free_buffers(), leaving the page pools and pages from the
> successfully allocated queues leaked.
Yes, this is indeed a potential bug, I will fix it.