Message-ID: <20220713114804.11c7517e@kernel.org>
Date:   Wed, 13 Jul 2022 11:48:04 -0700
From:   Jakub Kicinski <kuba@...nel.org>
To:     Martin Habets <habetsm.xilinx@...il.com>
Cc:     Bjorn Helgaas <helgaas@...nel.org>, davem@...emloft.net,
        pabeni@...hat.com, edumazet@...gle.com, netdev@...r.kernel.org,
        ecree.xilinx@...il.com, linux-pci@...r.kernel.org,
        virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH net-next v2 0/2] sfc: Add EF100 BAR config support

On Wed, 13 Jul 2022 09:40:01 +0100 Martin Habets wrote:
> > So it's switching between ethernet and vdpa? Isn't there a general
> > problem for configuring vdpa capabilities (net vs storage etc) and
> > shouldn't we seek to solve your BAR format switch in a similar fashion
> > rather than adding PCI device attrs, which I believe is not done for
> > anything vDPA-related?  
> 
> The initial support will be for vdpa net. vdpa block and RDMA will follow
> later, and we also need to consider FPGA management.
> 
> When it comes to vDPA there is a "vdpa" tool that we intend to support.
> This comes into play after we've switched a device into vdpa mode (using
> this new file).
> For a network device there is also "devlink" to consider. That could be used
> to switch a device into vdpa mode, but it cannot be used to switch it
> back (there is no netdev to operate on).
> My current understanding is that we won't have this issue for RDMA.
> For FPGA management there is no general configuration tool, just what
> fpga_mgr exposes (drivers/fpga). We intend to remove the special PF
> devices we have for this (PCI space is valuable), and use the normal
> network device in stead. I can give more details on this if you want.
> Worst case a special BAR config would be needed for this, but if needed I
> expect we can restrict this to the NIC provisioning stage.
> 
> So there is a general problem, I think. The solution here is something at a
> lower level, which is PCI in this case.
> Another solution would be a proprietary tool, something we are of course
> keen to avoid.

Okay. Indeed, we could easily bolt something onto devlink, I'd think,
but I don't know the space well enough to push for one solution over
another.

Please try to document the problem and the solution... somewhere, though.
Otherwise the chances that the next vendor with this problem follows
the same approach fall from low to none.
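
For illustration only, here is a minimal sketch (not the actual sfc patch) of
the general shape such a PCI device attribute could take: a read/write sysfs
file on the PCI device that flips the function between an "ethernet" and a
"vdpa" BAR layout. All driver-side names below (my_nic, bar_config,
my_set_bar_config) are hypothetical; only the core kernel APIs
(DEVICE_ATTR_RW, pci_get_drvdata, sysfs_emit, sysfs_streq) are real.

	/* Sketch only: a hypothetical driver's BAR-config attribute. */
	#include <linux/device.h>
	#include <linux/pci.h>
	#include <linux/sysfs.h>

	struct my_nic {
		bool vdpa_mode;
		/* ... rest of the driver state ... */
	};

	/* Hypothetical helper: tear down one personality, bring up the other. */
	static int my_set_bar_config(struct my_nic *nic, bool vdpa)
	{
		nic->vdpa_mode = vdpa;
		return 0;
	}

	static ssize_t bar_config_show(struct device *dev,
				       struct device_attribute *attr, char *buf)
	{
		struct my_nic *nic = pci_get_drvdata(to_pci_dev(dev));

		return sysfs_emit(buf, "%s\n",
				  nic->vdpa_mode ? "vdpa" : "ethernet");
	}

	static ssize_t bar_config_store(struct device *dev,
					struct device_attribute *attr,
					const char *buf, size_t count)
	{
		struct my_nic *nic = pci_get_drvdata(to_pci_dev(dev));
		int rc;

		if (sysfs_streq(buf, "vdpa"))
			rc = my_set_bar_config(nic, true);
		else if (sysfs_streq(buf, "ethernet"))
			rc = my_set_bar_config(nic, false);
		else
			return -EINVAL;

		return rc ? rc : count;
	}
	static DEVICE_ATTR_RW(bar_config);

In this shape, userspace would write "vdpa" to the attribute before managing
the device with the vdpa tool, and write "ethernet" to return to a netdev;
that round trip is the part the thread notes devlink alone cannot provide,
since there is no netdev to operate on while the device is in vdpa mode.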
