Message-ID: <20240229174914.3a9cb61e@kernel.org>
Date: Thu, 29 Feb 2024 17:49:14 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Vadim Fedorenko <vadim.fedorenko@...ux.dev>
Cc: Jiri Pirko <jiri@...nulli.us>, Michael Chan <michael.chan@...adcom.com>,
 davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
 pabeni@...hat.com, pavan.chebbi@...adcom.com,
 andrew.gospodarek@...adcom.com, richardcochran@...il.com
Subject: Re: [PATCH net-next 1/2] bnxt_en: Introduce devlink runtime driver
 param to set ptp tx timeout

On Thu, 29 Feb 2024 21:22:19 +0000 Vadim Fedorenko wrote:
> > Perhaps, but also I think it's fairly impractical. Specialized users may
> > be able to tune this, but in a DC environment PTP is handled at the host
> 
> That's correct; only one app is actually doing synchronization
> 
> > level, and the applications come and go. So all the poor admin can do  
> 
> Container/VM-level applications don't care about PTP packet timestamps.
> They only care about the time being synchronized.

What I was saying is that in the PTP daemon you don't know whether
the running app is likely to cause delays or not, nor for how long.
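
(The daemon-side give-up knob in linuxptp, for instance, is ptp4l's
tx_timestamp_timeout, in milliseconds; the value below is only
illustrative:)

[global]
tx_timestamp_timeout 10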

> > is set this to the max value. While in the driver you can actually try  
> 
> A pure admin will tune it according to the host-level app configuration,
> which may differ depending on the environment.

Concrete example?
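
(For the mechanics, a runtime devlink param is set along these lines;
the parameter name ptp_tx_timeout and the device address here are
hypothetical:)

devlink dev param set pci/0000:03:00.0 \
        name ptp_tx_timeout value 1000 cmode runtime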

> > to be a bit more intelligent. Expecting the user to tune this strikes me
> > as trying to take the easy way out...
> 
> There is no actual way for an application to signal down to the driver
> that it gave up waiting for a TX timestamp, so what other kind of
> smartness can we expect here?
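
For reference, the userspace side of that wait is roughly the sketch
below (a minimal sketch, assuming SO_TIMESTAMPING with hardware TX
timestamps; buffer sizes and error handling are illustrative). The
give-up point is a plain poll() timeout, which never propagates back
to the driver:

#include <linux/net_tstamp.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int enable_tx_timestamps(int fd)
{
        int val = SOF_TIMESTAMPING_TX_HARDWARE |
                  SOF_TIMESTAMPING_RAW_HARDWARE;

        return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                          &val, sizeof(val));
}

/* Wait up to timeout_ms for the TX timestamp on the socket error
 * queue.  The timeout is purely a userspace decision: if we give up
 * here, nothing tells the driver to stop chasing the timestamp.
 */
static int wait_tx_timestamp(int fd, int timeout_ms)
{
        struct pollfd pfd = { .fd = fd, .events = 0 }; /* POLLERR is implicit */
        char data[512], ctrl[512];
        struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
        struct msghdr msg = {
                .msg_iov        = &iov,
                .msg_iovlen     = 1,
                .msg_control    = ctrl,
                .msg_controllen = sizeof(ctrl),
        };

        if (poll(&pfd, 1, timeout_ms) <= 0)
                return -1;      /* gave up; the driver never learns */

        /* a cmsg of type SCM_TIMESTAMPING carries the timestamps */
        return (int)recvmsg(fd, &msg, MSG_ERRQUEUE);
}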

Let's figure out why the timeouts happen before we create uAPIs.
If it's because of buffer bloat or a pause storm, the next TS request
that gets queued will get stuck in exactly the same way.
