Message-ID: <05dae177-ff31-4ad8-98f2-c93e14ea37ce@amd.com>
Date: Mon, 3 Jun 2024 17:33:21 -0700
From: "Nelson, Shannon" <shannon.nelson@....com>
To: Joshua Hay <joshua.a.hay@...el.com>, intel-wired-lan@...ts.osuosl.org
Cc: netdev@...r.kernel.org, Sridhar Samudrala <sridhar.samudrala@...el.com>
Subject: Re: [PATCH iwl-net] idpf: extend tx watchdog timeout

On 6/3/2024 11:47 AM, Joshua Hay wrote:
> 
> There are several reasons why a Tx completion can take longer than
> usual to be written back by HW. For example, the completion for a
> packet that misses a rule will have increased latency. The side effect
> of these variable latencies is out-of-order completions. Suppose the
> stack sends packets X and Y. If packet X takes longer because of the
> rule miss above while packet Y hits, Y can go on the wire immediately,
> which also means it can be completed first. The driver then receives a
> completion for packet Y before packet X. It stashes the buffers for
> packet X in a hash table so that the Tx send queue descriptors for
> both X and Y can be reused. When the completion for packet X arrives
> some time later, the driver must search the hash table for the
> associated packet.
> 
> The driver first cleans packets directly on the ring, i.e. in-order
> completions, since out-of-order completions are to some extent
> considered a "slow(er) path". However, certain workloads can increase
> the frequency of out-of-order completions, introducing even more
> latency into the cleaning path. Bump up the timeout value to account
> for these workloads.
> 
> Fixes: 0fe45467a104 ("idpf: add create vport and netdev configuration")
> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@...el.com>
> Signed-off-by: Joshua Hay <joshua.a.hay@...el.com>
> ---
>   drivers/net/ethernet/intel/idpf/idpf_lib.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> index f1ee5584e8fa..3d4ae2ed9b96 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
> @@ -770,8 +770,8 @@ static int idpf_cfg_netdev(struct idpf_vport *vport)
>          else
>                  netdev->netdev_ops = &idpf_netdev_ops_singleq;
> 
> -       /* setup watchdog timeout value to be 5 second */
> -       netdev->watchdog_timeo = 5 * HZ;
> +       /* setup watchdog timeout value to be 30 seconds */
> +       netdev->watchdog_timeo = 30 * HZ;

Huh... that's a pretty big number.  If it really needs to be that big I 
wonder if there's something else that needs attention.

sln


> 
>          netdev->dev_port = idx;
> 
> --
> 2.39.2
> 
> 
