Date:   Mon, 20 Jul 2020 09:09:35 +0000
From:   "Wu, Hao" <hao.wu@...el.com>
To:     "Xu, Yilun" <yilun.xu@...el.com>,
        "mdf@...nel.org" <mdf@...nel.org>,
        "linux-fpga@...r.kernel.org" <linux-fpga@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:     "trix@...hat.com" <trix@...hat.com>,
        "lgoncalv@...hat.com" <lgoncalv@...hat.com>,
        "Xu, Yilun" <yilun.xu@...el.com>,
        Matthew Gerlach <matthew.gerlach@...ux.intel.com>,
        "Weight, Russell H" <russell.h.weight@...el.com>
Subject: RE: [PATCH 1/2] fpga: dfl: map feature mmio resources in their own
 feature drivers

> Subject: [PATCH 1/2] fpga: dfl: map feature mmio resources in their own
> feature drivers
> 
> This patch prepares for the modularization of DFL sub feature
> drivers.
> 
> Currently, if we need to support a new DFL sub feature, an entry must
> be added to fme/port_feature_drvs[] in dfl-fme/port-main.c, and the
> whole set of DFL modules must be recompiled. That makes the DFL drivers
> hard to extend.
> 
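(For context, not part of this patch: the static registration today looks
roughly like the sketch below, reconstructed from memory of dfl-fme-main.c,
so the exact entries may differ. Every new sub feature means one more entry
here and a rebuild of the module.)

static struct dfl_feature_driver fme_feature_drvs[] = {
	{
		.id_table = fme_hdr_id_table,
		.ops = &fme_hdr_ops,
	},
	{
		.id_table = fme_pr_mgmt_id_table,
		.ops = &fme_pr_mgmt_ops,
	},
	/* ... a new sub feature needs another entry here ... */
	{
		.ops = NULL,	/* sentinel */
	},
};
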
> Another consideration is that a DFL may contain IP blocks which are
> already supported by the kernel, most of them via platform device
> drivers. We could create platform devices for these IP blocks and have
> those existing drivers handle them.
> 
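(Illustration only, not from this series: all names and resource values
below are made up. Handing such an IP block to an existing platform driver
could look something like this.)

#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>

/* hypothetical sub feature that is really a standard IP block, e.g. a UART */
static struct resource example_uart_res[] = {
	DEFINE_RES_MEM(0x0, 0x1000),	/* filled with the feature's MMIO window */
	DEFINE_RES_IRQ(0),		/* filled with the feature's interrupt */
};

static int example_create_uart_pdev(struct device *parent)
{
	struct platform_device *pdev;

	/* an existing platform driver bound to "example-dfl-uart" takes over */
	pdev = platform_device_register_resndata(parent, "example-dfl-uart",
						 PLATFORM_DEVID_AUTO,
						 example_uart_res,
						 ARRAY_SIZE(example_uart_res),
						 NULL, 0);
	return PTR_ERR_OR_ZERO(pdev);
}
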
> An important issue is that platform device drivers usually request mmio
> resources on probe. But today the dfl mmio is mapped in the dfl bus
> driver (e.g. dfl-pci) as a whole region, so platform device drivers for
> sub features can't request their own mmio resources again. This is what
> this patch tries to resolve.
> 
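(Generic sketch of the conflict, not code from this series: a typical
platform driver probe looks like the following. devm_platform_ioremap_resource()
requests the mem region before mapping it, so it typically fails with -EBUSY
when the dfl bus driver has already claimed the whole BAR.)

static int example_feature_probe(struct platform_device *pdev)
{
	void __iomem *base;

	/* request the mem region and ioremap it in one call */
	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return PTR_ERR(base);	/* -EBUSY if the window is already claimed */

	/* ... device specific init using base ... */
	return 0;
}
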
> This patch changes the DFL enumeration. The DFL bus driver unmaps the
> mmio resources after the first-step enumeration and passes the
> enumeration info to the DFL framework. The DFL framework then maps the
> mmio resources again, does the second-step enumeration, and unmaps them
> once more. In this way, sub feature drivers can request their own mmio
> resources as needed.
> 
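(Paraphrasing the new flow, using only the names visible in the diff below:)

/*
 * dfl bus driver (e.g. dfl-pci) probe:
 *	base = cci_pci_ioremap_bar(pcidev, 0);
 *	... first-step walk of the DFL, record start/len only ...
 *	dfl_fpga_enum_info_add_dfl(info, start, len);
 *	cci_pci_iounmap_bars(pcidev, mapped_bars);	<- unmap before handoff
 *
 * DFL framework:
 *	map the DFL again, do the second-step enumeration that creates the
 *	feature platform devices, then unmap, so every sub feature driver is
 *	free to request/map its own MMIO window in its own probe().
 */
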
> An exception is that the mmio resources of the FIU headers are still
> mapped in the dfl bus driver. The FIU headers provide some fundamental
> functions (sriov set, port enable/disable) needed by dfl bus devices and
> other sub features, so they should not be unmapped as long as the dfl
> bus device is alive.
> 
> Signed-off-by: Xu Yilun <yilun.xu@...el.com>
> Signed-off-by: Wu Hao <hao.wu@...el.com>
> Signed-off-by: Matthew Gerlach <matthew.gerlach@...ux.intel.com>
> Signed-off-by: Russ Weight <russell.h.weight@...el.com>
> ---
>  drivers/fpga/dfl-pci.c |  21 ++++--
>  drivers/fpga/dfl.c     | 187 +++++++++++++++++++++++++++++++++++--------------
>  drivers/fpga/dfl.h     |   6 +-
>  3 files changed, 152 insertions(+), 62 deletions(-)
> 
> diff --git a/drivers/fpga/dfl-pci.c b/drivers/fpga/dfl-pci.c
> index e220bec..22dc025 100644
> --- a/drivers/fpga/dfl-pci.c
> +++ b/drivers/fpga/dfl-pci.c
> @@ -39,6 +39,11 @@ static void __iomem *cci_pci_ioremap_bar(struct pci_dev *pcidev, int bar)
>  	return pcim_iomap_table(pcidev)[bar];
>  }
> 
> +static void cci_pci_iounmap_bars(struct pci_dev *pcidev, int mapped_bars)
> +{
> +	pcim_iounmap_regions(pcidev, mapped_bars);
> +}
> +
>  static int cci_pci_alloc_irq(struct pci_dev *pcidev)
>  {
>  	int ret, nvec = pci_msix_vec_count(pcidev);
> @@ -123,7 +128,7 @@ static int *cci_pci_create_irq_table(struct pci_dev *pcidev, unsigned int nvec)
>  static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
>  {
>  	struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);
> -	int port_num, bar, i, nvec, ret = 0;
> +	int port_num, bar, i, nvec, mapped_bars, ret = 0;
>  	struct dfl_fpga_enum_info *info;
>  	struct dfl_fpga_cdev *cdev;
>  	resource_size_t start, len;
> @@ -163,6 +168,8 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
>  		goto irq_free_exit;
>  	}
> 
> +	mapped_bars = BIT(0);
> +
>  	/*
>  	 * PF device has FME and Ports/AFUs, and VF device only has one
>  	 * Port/AFU. Check them and add related "Device Feature List" info
> @@ -172,7 +179,7 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
>  		start = pci_resource_start(pcidev, 0);
>  		len = pci_resource_len(pcidev, 0);
> 
> -		dfl_fpga_enum_info_add_dfl(info, start, len, base);
> +		dfl_fpga_enum_info_add_dfl(info, start, len);
> 
>  		/*
>  		 * find more Device Feature Lists (e.g. Ports) per information
> @@ -200,22 +207,26 @@ static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
>  			if (!base)
>  				continue;
> 
> +			mapped_bars |= BIT(bar);
> +

One more place:

Since you removed the base addr usage below, ideally we don't need to map
more bars for the port here, do we? Is my understanding correct?

>  			start = pci_resource_start(pcidev, bar) + offset;
>  			len = pci_resource_len(pcidev, bar) - offset;
> 
> -			dfl_fpga_enum_info_add_dfl(info, start, len,
> -						   base + offset);
> +			dfl_fpga_enum_info_add_dfl(info, start, len);

Thanks
Hao
