Message-ID: <ZguNE7mnDwNaT+/7@yilunxu-OptiPlex-7050>
Date: Tue, 2 Apr 2024 12:44:03 +0800
From: Xu Yilun <yilun.xu@...ux.intel.com>
To: matthew.gerlach@...ux.intel.com
Cc: hao.wu@...el.com, trix@...hat.com, mdf@...nel.org, yilun.xu@...el.com,
	linux-fpga@...r.kernel.org, linux-kernel@...r.kernel.org,
	Tim Whisonant <tim.whisonant@...el.com>,
	Ananda Ravuri <ananda.ravuri@...el.com>
Subject: Re: [PATCH] fpga: add DFL driver for CXL Cache IP block

On Fri, Mar 29, 2024 at 08:34:49AM -0700, matthew.gerlach@...ux.intel.com wrote:
> 
> 
> On Tue, 19 Mar 2024, Xu Yilun wrote:
> 
> > On Fri, Mar 08, 2024 at 09:23:27AM -0800, Matthew Gerlach wrote:
> > > From: Tim Whisonant <tim.whisonant@...el.com>
> > > 
> > > Add a Device Feature List (DFL) driver for the
> > > Intel CXL Cache IP block. The driver
> > > provides a means of accessing the device MMIO and the
> > 
> > Why should the device MMIO be accessed by userspace?
> 
> The definition of the MMIO space actually depends on the custom
> logic in the FPGA image. Given the variability of the custom logic
> implemented in the FPGA, it seems burdensome to handle that variability
> in kernel code.
> 
> > 
> > > capability to pin buffers and program their physical
> > > addresses into the HE-Cache registers. User interface
> > 
> > Are these registers also exposed to userspace?
> 
> These registers are currently exposed to user space as well. It probably
> makes sense to be consistent and let user space handle these registers,
> just as it handles all of the other custom logic registers.
> 
> > 
> > Since this patch adds support for a new device/IP, please first describe
> > what the device/IP does.  Not everyone here (including me) is familiar
> > with the CXL standard or with how this IP block implements CXL
> > functionality.  Some reference documentation would also be helpful.
> 
> This is very good feedback. There are actually multiple IP blocks involved
> here: a Hard IP block (HIP), an FPGA/soft IP block that interfaces with
> the HIP, and custom logic that implements the desired application.
> 
> > 
> > After a quick skim of this patch, I can see that the user is trying to
> > write a set of physical addresses to some registers, but I have no idea
> > why this should be done.  In other words, do not just reiterate what the
> > code literally does.
> 
> In this case, the physical addresses are being written to the custom
> application logic in the FPGA, so this handling should be moved to user
> space.
> 
> If all MMIO to the custom logic is moved to user space, then this driver
> is not really about CXL or hardware at all. The driver would just provide
> a service for pinning/unpinning memory with NUMA node awareness. In that
> case, the driver name is wrong because nothing remains that is related to
> CXL or to any specific hardware implementation.

If this is not related to any hardware implementation, I'm afraid the
problem is not just the naming. From what I understand so far, you want an
interface into the memory management subsystem, not into FPGA/DFL. You
should send the patch to the MM people (or the VFIO maintainers) for
review; their opinions are what matter here. But in my opinion the patch
is far from ready to send. Some questions I can quickly think of, FYI:

1. What is the justification for exposing a physical address (from the
   CPU's perspective) to userspace?

2. If question #1 comes from a device requirement, why does the device
   need a CPU physical address rather than an IOVA? (See the sketch
   below.)
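
For context on #2: a minimal sketch, assuming a driver with a struct
device and the streaming DMA API (the function name is illustrative), of
how a driver normally hands a device an IOVA instead of a raw CPU
physical address:

  /* Pin-and-map sketch: the DMA API returns an IOVA (dma_addr_t) for the
   * device to use; the IOMMU, if present, translates it.  No raw CPU
   * physical address ever needs to reach userspace or the device. */
  #include <linux/dma-mapping.h>

  static int example_map_page(struct device *dev, struct page *page,
                              dma_addr_t *iova)
  {
          *iova = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
          if (dma_mapping_error(dev, *iova))
                  return -ENOMEM;
          return 0;
  }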

> 
> > 
> > > is exposed via /dev/dfl-cxl-cache.X as described in
> > > include/uapi/linux/fpga-dfl.h.
> > 
> > And please split the patch: e.g., first add a skeleton of the bus
> > driver, then the skeleton of the char interface, then add the
> > functionality piece by piece.  This gives you a chance to clearly
> > describe each function and why it is being added.
> 
> If this driver is only implementing a char interface to memory management,
> does it make sense to split the code into separate patches?

In general, it is always good to split the code into small patches; that
makes it easier to understand. But since this now seems to be a different
story, the most important thing is to clearly describe the use case: what
existing solutions does the kernel have, why do they not fit your needs,
what are the possible options, and why did you choose this one? A general
statement that the application logic demands it is not helpful.
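
As one concrete example, here is a minimal userspace sketch (illustrative
only; the VFIO container/group setup is omitted) of how the existing VFIO
type1 interface already pins user memory and hands the device an IOVA:

  /* Pin a user buffer and map it at a chosen IOVA via VFIO type1.
   * The kernel pins the pages and programs the IOMMU; the device then
   * sees the IOVA, never a raw CPU physical address. */
  #include <stddef.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  static int map_buffer(int container_fd, void *buf, size_t len,
                        uint64_t iova)
  {
          struct vfio_iommu_type1_dma_map map = {
                  .argsz = sizeof(map),
                  .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                  .vaddr = (uintptr_t)buf,
                  .iova  = iova,
                  .size  = len,
          };
          return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
  }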

Thanks,
Yilun
