Message-ID: <35fbf36c-a908-443d-b903-9a5410af7cf4@intel.com>
Date: Wed, 3 Sep 2025 12:59:41 -0700
From: Jacob Keller <jacob.e.keller@...el.com>
To: Saeed Mahameed <saeed@...nel.org>
CC: Jakub Kicinski <kuba@...nel.org>, "David S. Miller" <davem@...emloft.net>,
	Paolo Abeni <pabeni@...hat.com>, Eric Dumazet <edumazet@...gle.com>,
	"Saeed Mahameed" <saeedm@...dia.com>, <netdev@...r.kernel.org>,
	Tariq Toukan <tariqt@...dia.com>, Gal Pressman <gal@...dia.com>,
	Leon Romanovsky <leonro@...dia.com>, Jiri Pirko <jiri@...dia.com>,
	Simon Horman <horms@...nel.org>
Subject: Re: [PATCH net-next V6 09/13] devlink: Add 'keep_link_up' generic
 devlink device param



On 9/2/2025 11:45 PM, Saeed Mahameed wrote:
> On 02 Sep 14:57, Jacob Keller wrote:
>> Intel has also tried something similar sounding with the
>> "link_down_on_close" private flag in ethtool, which appears to have made
>> it into ice and i40e. (I thought I remembered these flags being rejected,
>> but I guess not?) I guess the ethtool flag is a bit different, since it
>> relates to driver behavior when you bring the port down administratively,
>> vs. something like this which affects firmware control of the link
>> regardless of its state as seen by the kernel.
>>
> 
> Interesting, it seems that the i40e/ice LINK_DOWN_ON_CLOSE and
> TOTAL_PORT_SHUTDOWN_ENA flags go hand in hand. I tried to read the long
> comment in i40e, but it is mostly about how these are implemented in both
> the driver and FW/phy, not about what they mean. What I am trying to
> understand is: "LINK_DOWN_ON_CLOSE_ENA" is an 'enable' bit, off by default
> and an opt-in, so does that mean that by default i40e/ice don't actually
> bring the link down on driver unload or ndo->close?
> 

I believe so. I can't recall the exact behavior off the top of my head, and
I know both parameters are currently frowned upon and only still exist
because they were merged before this policy was widely enforced.

I believe the default is to leave the link up, and the flag changes this. I
vaguely remember some discussions we had about which approach was better;
different customers had different opinions.

I could be wrong though, and would need to verify this.
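
To make the "opt-in" part concrete, roughly the shape I have in mind is
below. This is only a sketch from memory, not the actual i40e/ice code, and
example_priv plus the example_*() helpers are made-up placeholders:

#include <linux/netdevice.h>

/* Sketch only: the flag is an opt-in. With it clear (the default),
 * ndo_stop() leaves the PHY link up; only when the user has set the flag
 * do we ask FW/PHY to drop the link on close.
 */
struct example_priv {
	u32 flags;
#define EXAMPLE_FLAG_LINK_DOWN_ON_CLOSE	BIT(0)
	/* ... */
};

static int example_ndo_stop(struct net_device *netdev)
{
	struct example_priv *priv = netdev_priv(netdev);

	example_down(priv);	/* placeholder: stop queues, NAPI, IRQs */

	if (priv->flags & EXAMPLE_FLAG_LINK_DOWN_ON_CLOSE)
		example_force_link_down(priv);	/* placeholder: FW/PHY drop */

	return 0;
}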

>>>>> This is no different, as BMC is sort of a multi-host case, and physical
>>>>> link control here is delegated to the firmware.
>>>>>
>>>>> Also, do we really want netdev to expose an API for permanent NIC
>>>>> tunables? I thought this is why we invented devlink, to offload raw
>>>>> underlying NIC tunables.
>>>>
>>>> Are you going to add devlink params for link config?
>>>> It's one of the things that's written into the NVM, usually...
>>>
>>> No, the purpose of this NVM series is to set up FW boot parameters, not
>>> spec-related tunables.
>>>
>>
>> This seems quite useful to me w.r.t. BMC access. I think it's a stretch
>> to say this implies a desire to add many other knobs.
> 
> Not sure if you are for or against the devlink knob? :-)

I think a knob is a good idea, and I think it makes sense in devlink, given
that this applies to more than just the netdevice.

> But thanks for the i40e/ice pointers; at least I know I am not alone in
> this boat...
> 

The argument that adding this knob implies we need a much more complex
link management scheme seems a bit like overkill to me.

Unfortunately, I think the i40e/ice stuff is perhaps slightly orthogonal,
given that it applies mainly to link behavior while software is running.

This knob appears to be more about firmware behavior irrespective of what,
if any, software is running.
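
For what it's worth, the devlink params API already has a permanent cmode
for this kind of NVM-backed, driver-independent knob. Roughly, I'd expect
the driver side to look something like the sketch below. To be clear, this
is my guess and not the patch itself: the KEEP_LINK_UP generic ID is the
one this series would add, the foo_nvm_*() helpers are made up, and the
callback signatures are from memory, so double-check against net-next.

#include <net/devlink.h>

/* Sketch: expose the proposed 'keep_link_up' generic param with permanent
 * cmode, so the value lives in the device NVM and is honored by firmware
 * regardless of whether a driver is loaded.
 */
static int foo_keep_link_up_get(struct devlink *devlink, u32 id,
				struct devlink_param_gset_ctx *ctx)
{
	/* foo_nvm_read_keep_link_up() is a made-up NVM accessor */
	ctx->val.vbool = foo_nvm_read_keep_link_up(devlink_priv(devlink));
	return 0;
}

static int foo_keep_link_up_set(struct devlink *devlink, u32 id,
				struct devlink_param_gset_ctx *ctx,
				struct netlink_ext_ack *extack)
{
	/* foo_nvm_write_keep_link_up() is a made-up NVM accessor */
	return foo_nvm_write_keep_link_up(devlink_priv(devlink),
					  ctx->val.vbool);
}

static const struct devlink_param foo_devlink_params[] = {
	DEVLINK_PARAM_GENERIC(KEEP_LINK_UP,
			      BIT(DEVLINK_PARAM_CMODE_PERMANENT),
			      foo_keep_link_up_get, foo_keep_link_up_set,
			      NULL),
};

static int foo_register_devlink_params(struct devlink *devlink)
{
	return devlink_params_register(devlink, foo_devlink_params,
				       ARRAY_SIZE(foo_devlink_params));
}

From userspace it would then just be the usual "devlink dev param set ...
cmode permanent" flow, same as the other NVM-backed params.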

