Date:   Tue, 10 Jan 2023 15:00:42 -0800
From:   Alexander H Duyck <alexander.duyck@...il.com>
To:     Gerhard Engleder <gerhard@...leder-embedded.com>,
        netdev@...r.kernel.org
Cc:     davem@...emloft.net, kuba@...nel.org, edumazet@...gle.com,
        pabeni@...hat.com
Subject: Re: [PATCH net-next v4 10/10] tsnep: Support XDP BPF program setup

On Tue, 2023-01-10 at 22:38 +0100, Gerhard Engleder wrote:
> On 10.01.23 18:33, Alexander H Duyck wrote:
> > On Mon, 2023-01-09 at 20:15 +0100, Gerhard Engleder wrote:
> > > Implement setup of BPF programs for XDP RX path with command
> > > XDP_SETUP_PROG of ndo_bpf(). This is the final step for XDP RX path
> > > support.
> > > 
> > > tsnep_netdev_close() is called directly during BPF program setup. Add
> > > netif_carrier_off() and netif_tx_stop_all_queues() calls to signal to
> > > the network stack that the device is down. Otherwise the network stack
> > > would continue transmitting packets.
> > > 
> > > Return value of tsnep_netdev_open() is not checked during BPF program
> > > setup like in other drivers. Forwarding the return value would result in
> > > a bpf_prog_put() call in dev_xdp_install(), which would make removal of
> > > BPF program necessary.
> > > 
> > > If tsnep_netdev_open() fails during BPF program setup, then the network
> > > stack would call tsnep_netdev_close() anyway. Thus, tsnep_netdev_close()
> > > now checks whether the device is already down.
> > > 
> > > Additionally remove $(tsnep-y) from $(tsnep-objs) because it is added
> > > automatically.
> > > 
> > > Test results with A53 1.2GHz:
> > > 
> > > XDP_DROP (samples/bpf/xdp1)
> > > proto 17:     883878 pkt/s
> > > 
> > > XDP_TX (samples/bpf/xdp2)
> > > proto 17:     255693 pkt/s
> > > 
> > > XDP_REDIRECT (samples/bpf/xdpsock)
> > >   sock0@...2:0 rxdrop xdp-drv
> > >                     pps            pkts           1.00
> > > rx                 855,582        5,404,523
> > > tx                 0              0
> > > 
> > > XDP_REDIRECT (samples/bpf/xdp_redirect)
> > > eth2->eth1         613,267 rx/s   0 err,drop/s   613,272 xmit/s
> > > 
> > > Signed-off-by: Gerhard Engleder <gerhard@...leder-embedded.com>
> > > ---
> > >   drivers/net/ethernet/engleder/Makefile     |  2 +-
> > >   drivers/net/ethernet/engleder/tsnep.h      |  6 +++++
> > >   drivers/net/ethernet/engleder/tsnep_main.c | 25 ++++++++++++++++---
> > >   drivers/net/ethernet/engleder/tsnep_xdp.c  | 29 ++++++++++++++++++++++
> > >   4 files changed, 58 insertions(+), 4 deletions(-)
> > >   create mode 100644 drivers/net/ethernet/engleder/tsnep_xdp.c
> > > 
> > > 
> > 
> > <...>
> > 
> > > --- a/drivers/net/ethernet/engleder/tsnep_main.c
> > > +++ b/drivers/net/ethernet/engleder/tsnep_main.c
> > > @@ -1373,7 +1373,7 @@ static void tsnep_free_irq(struct tsnep_queue *queue, bool first)
> > >   	memset(queue->name, 0, sizeof(queue->name));
> > >   }
> > >   
> > > -static int tsnep_netdev_open(struct net_device *netdev)
> > > +int tsnep_netdev_open(struct net_device *netdev)
> > >   {
> > >   	struct tsnep_adapter *adapter = netdev_priv(netdev);
> > >   	int tx_queue_index = 0;
> > > @@ -1436,6 +1436,8 @@ static int tsnep_netdev_open(struct net_device *netdev)
> > >   		tsnep_enable_irq(adapter, adapter->queue[i].irq_mask);
> > >   	}
> > >   
> > > +	netif_tx_start_all_queues(adapter->netdev);
> > > +
> > >   	clear_bit(__TSNEP_DOWN, &adapter->state);
> > >   
> > >   	return 0;
> > > @@ -1457,12 +1459,16 @@ static int tsnep_netdev_open(struct net_device *netdev)
> > >   	return retval;
> > >   }
> > >   
> > > -static int tsnep_netdev_close(struct net_device *netdev)
> > > +int tsnep_netdev_close(struct net_device *netdev)
> > >   {
> > >   	struct tsnep_adapter *adapter = netdev_priv(netdev);
> > >   	int i;
> > >   
> > > -	set_bit(__TSNEP_DOWN, &adapter->state);
> > > +	if (test_and_set_bit(__TSNEP_DOWN, &adapter->state))
> > > +		return 0;
> > > +
> > > +	netif_carrier_off(netdev);
> > > +	netif_tx_stop_all_queues(netdev);
> > >   
> > 
> > As I called out earlier, __TSNEP_DOWN is just !IFF_UP, so you don't
> > need that bit.
> > 
> > The fact that netif_carrier_off is here also points out that the
> > __TSNEP_DOWN check in the Tx path isn't needed, and you can probably
> > just check netif_carrier_ok if you need the check.
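
A minimal sketch of that idea (the xmit handler name and body below are
illustrative assumptions, not taken from this patch):

	/* Tx path sketch: rely on carrier state instead of a
	 * driver-private __TSNEP_DOWN bit.  tsnep_netdev_close() calls
	 * netif_carrier_off() before stopping the queues, so carrier
	 * state already encodes "device is going down".
	 */
	static netdev_tx_t tsnep_xmit_frame(struct sk_buff *skb,
					    struct net_device *netdev)
	{
		if (!netif_carrier_ok(netdev)) {
			/* drop rather than touch rings being torn down */
			dev_kfree_skb_any(skb);
			return NETDEV_TX_OK;
		}

		/* ... normal descriptor setup and doorbell ... */
		return NETDEV_TX_OK;
	}
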
> 
> tsnep_netdev_close() is called directly during bpf prog setup (see
> tsnep_xdp_setup_prog() in this commit). If the following
> tsnep_netdev_open() call fails, then this flag signals that the device
> is already down and nothing needs to be cleaned up if
> tsnep_netdev_close() is called later (because IFF_UP is still set).

If the call to close was fouled up you should probably be blocking
access to the device via at least a netif_device_detach. I suppose you
could use the __LINK_STATE_PRESENT bit as the inverse of the
__TSNEP_DOWN bit. If your open fails you clean up, detach the device,
and in the close path you only run through it if the device is present.
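
A minimal sketch of that flow, reusing the function names from this
patch (the exact signatures and cleanup details are assumptions):

	/* Sketch: use the standard netif_device_{detach,present}()
	 * helpers (__LINK_STATE_PRESENT) instead of __TSNEP_DOWN.
	 * tsnep_netdev_open() would mark the device present again via
	 * netif_device_attach() on success.
	 */
	static int tsnep_xdp_setup_prog(struct tsnep_adapter *adapter,
					struct bpf_prog *prog)
	{
		struct net_device *netdev = adapter->netdev;

		tsnep_netdev_close(netdev);

		/* ... swap in the new BPF program here ... */

		if (tsnep_netdev_open(netdev) < 0) {
			/* open failed: everything is already torn down,
			 * so mark the device not present; a later close
			 * from the stack (IFF_UP is still set) then
			 * becomes a no-op
			 */
			netif_device_detach(netdev);
		}

		return 0;
	}

	int tsnep_netdev_close(struct net_device *netdev)
	{
		/* only tear down a device that is still present */
		if (!netif_device_present(netdev))
			return 0;

		/* ... stop queues, free IRQs, disable hardware ... */

		return 0;
	}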

Basically what we want to avoid is adding a bunch of extra state, since
what we tend to see is that it starts to create a snarl as you add more
and more layers.

