Open Source and information security mailing list archives
Message-ID: <CAKgT0UfeAus3GtVqBAtPA6mFC=kJ8YEq-Sdr6+A6O83JjyutAA@mail.gmail.com>
Date:   Tue, 10 Jan 2023 14:27:51 -0800
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Gerhard Engleder <gerhard@...leder-embedded.com>
Cc:     netdev@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
        edumazet@...gle.com, pabeni@...hat.com,
        Saeed Mahameed <saeed@...nel.org>
Subject: Re: [PATCH net-next v4 08/10] tsnep: Add RX queue info for XDP support

On Tue, Jan 10, 2023 at 1:21 PM Gerhard Engleder
<gerhard@...leder-embedded.com> wrote:
>
> On 10.01.23 18:29, Alexander H Duyck wrote:
> > On Mon, 2023-01-09 at 20:15 +0100, Gerhard Engleder wrote:
> >> Register xdp_rxq_info with page_pool memory model. This is needed for
> >> XDP buffer handling.
> >>

<...>

> >> @@ -1317,6 +1333,8 @@ static int tsnep_netdev_open(struct net_device *netdev)
> >>                      tsnep_rx_close(adapter->queue[i].rx);
> >>              if (adapter->queue[i].tx)
> >>                      tsnep_tx_close(adapter->queue[i].tx);
> >> +
> >> +            netif_napi_del(&adapter->queue[i].napi);
> >>      }
> >>      return retval;
> >>   }
> >> @@ -1335,7 +1353,6 @@ static int tsnep_netdev_close(struct net_device *netdev)
> >>              tsnep_disable_irq(adapter, adapter->queue[i].irq_mask);
> >>
> >>              napi_disable(&adapter->queue[i].napi);
> >> -            netif_napi_del(&adapter->queue[i].napi);
> >>
> >>              tsnep_free_irq(&adapter->queue[i], i == 0);
> >>
> >
> > Likewise here you could take care of all the same items with the page
> > pool being freed after you have already unregistered and freed the napi
> > instance.
>
> I'm not sure if I understand it right. According to your suggestion
> above napi and xdp_rxq_info should be freed here?

Right. Between the napi_disable and the netif_napi_del you could
unregister the mem model and the xdp_rxq_info. Basically the two are
tied more closely to the NAPI instance than to the Rx queue itself, so
it would make sense to just take care of them here. You might look at
putting together a function to handle all of it, since then you just
pass &adapter->queue[i] once and use a local variable inside the
function.
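For illustration, such a helper could look roughly like this (an
untested sketch; the function name, the bool argument for the first
queue's IRQ, and the struct layout are assumptions based on the driver
code quoted above — note that xdp_rxq_info_unreg() also unregisters
the attached memory model, so no separate mem-model call is needed):

```c
/* Untested sketch: per-queue teardown kept next to the NAPI instance.
 * Names and fields are assumed from the quoted tsnep driver code.
 */
static void tsnep_queue_close(struct tsnep_queue *queue, bool first)
{
	struct tsnep_rx *rx = queue->rx;

	napi_disable(&queue->napi);

	/* xdp_rxq_info_unreg() unregisters the mem model as well */
	if (rx && xdp_rxq_info_is_reg(&rx->xdp_rxq))
		xdp_rxq_info_unreg(&rx->xdp_rxq);

	netif_napi_del(&queue->napi);

	tsnep_free_irq(queue, first);
}
```

Then both the open-path error unwind and tsnep_netdev_close() could
call the same helper per queue.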
