Date:   Wed, 11 Jan 2023 19:58:25 +0100
From:   Gerhard Engleder <gerhard@...leder-embedded.com>
To:     Alexander H Duyck <alexander.duyck@...il.com>,
        netdev@...r.kernel.org
Cc:     davem@...emloft.net, kuba@...nel.org, edumazet@...gle.com,
        pabeni@...hat.com
Subject: Re: [PATCH net-next v4 10/10] tsnep: Support XDP BPF program setup

On 11.01.23 00:00, Alexander H Duyck wrote:
> On Tue, 2023-01-10 at 22:38 +0100, Gerhard Engleder wrote:
>> On 10.01.23 18:33, Alexander H Duyck wrote:
>>> On Mon, 2023-01-09 at 20:15 +0100, Gerhard Engleder wrote:
>>>> Implement setup of BPF programs for XDP RX path with command
>>>> XDP_SETUP_PROG of ndo_bpf(). This is the final step for XDP RX path
>>>> support.
>>>>
>>>> tsnep_netdev_close() is called directly during BPF program setup. Add
>>>> netif_carrier_off() and netif_tx_stop_all_queues() calls to signal to
>>>> network stack that device is down. Otherwise network stack would
>>>> continue transmitting packets.
>>>>
>>>> Return value of tsnep_netdev_open() is not checked during BPF program
>>>> setup like in other drivers. Forwarding the return value would result in
>>>> a bpf_prog_put() call in dev_xdp_install(), which would make removal of
>>>> BPF program necessary.
>>>>
>>>> If tsnep_netdev_open() fails during BPF program setup, then the network
>>>> stack would call tsnep_netdev_close() anyway. Thus, tsnep_netdev_close()
>>>> now checks whether the device is already down.
>>>>
>>>> Additionally remove $(tsnep-y) from $(tsnep-objs) because it is added
>>>> automatically.

<...>

>>>> +	netif_carrier_off(netdev);
>>>> +	netif_tx_stop_all_queues(netdev);
>>>>    
>>>
>>> As I called out earlier, __TSNEP_DOWN is just !IFF_UP, so you don't
>>> need that bit.
>>>
>>> The fact that netif_carrier_off is here also shows that the
>>> __TSNEP_DOWN check in the Tx path isn't needed; you can probably just
>>> check netif_carrier_ok() instead, if you need the check at all.
>>
>> tsnep_netdev_close() is called directly during bpf prog setup (see
>> tsnep_xdp_setup_prog() in this commit). If the following
>> tsnep_netdev_open() call fails, then this flag signals that the device
>> is already down and nothing needs to be cleaned up if
>> tsnep_netdev_close() is called later (because IFF_UP is still set).
> 
> If the call to close was fouled up you should probably be blocking
> access to the device via at least a netif_device_detach. I suppose you
> could use the __LINK_STATE_PRESENT bit as the inverse of the
> __TSNEP_DOWN bit. If your open fails you clean up, detach the device,
> and in the close path you only run through it if the device is present.
> 
> Basically what we want to avoid is adding a bunch of extra state as
> what we tend to see is that it will start to create a snarl as you add
> more and more layers.

To be honest, I cannot argue that __TSNEP_DOWN is a great solution. It
is more about fighting symptoms of the

	close()
	change config
	open()

style, which according to Jakub should be prevented anyway. I will
suggest a different solution as a reply to Jakub's comment.

Gerhard
