Message-ID: <20160526123459.GA23794@red-moon>
Date:	Thu, 26 May 2016 13:34:59 +0100
From:	Lorenzo Pieralisi <lorenzo.pieralisi@....com>
To:	Duc Dang <dhdang@....com>
Cc:	Marc Zyngier <marc.zyngier@....com>,
	Mark Salter <msalter@...hat.com>,
	Bjorn Helgaas <bhelgaas@...gle.com>, Feng Kan <fkan@....com>,
	linux-pci@...r.kernel.org, patches <patches@....com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Loc Ho <lho@....com>,
	linux-arm <linux-arm-kernel@...ts.infradead.org>,
	Tanmay Inamdar <tinamdar@....com>
Subject: Re: [PATCH] PCI/MSI: pci-xgene-msi: Enable MSI support in ACPI boot
 for X-Gene v1

Hi Duc,

On Wed, May 25, 2016 at 04:13:35PM -0700, Duc Dang wrote:
> On Thu, Feb 25, 2016 at 9:38 AM, Lorenzo Pieralisi
> <lorenzo.pieralisi@....com> wrote:
> > On Wed, Feb 24, 2016 at 02:28:10PM -0800, Duc Dang wrote:
> >> On Wed, Feb 24, 2016 at 8:16 AM, Marc Zyngier <marc.zyngier@....com> wrote:
> >> > On 24/02/16 16:09, Mark Salter wrote:
> >> >> On Tue, 2016-02-09 at 17:56 -0800, Duc Dang wrote:
> >> >>> This patch makes pci-xgene-msi driver ACPI-aware and provides
> >> >>> MSI capability for X-Gene v1 PCIe controllers in ACPI boot mode.
> >> >>>
> >> >>> Signed-off-by: Duc Dang <dhdang@....com>
> >> >>> ---
> >> >>>  drivers/pci/host/pci-xgene-msi.c | 35 ++++++++++++++++++++++++++++++++---
> >> >>>  1 file changed, 32 insertions(+), 3 deletions(-)
> >> >>>
> >> >>> diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c
> >> >>> index a6456b5..466aa93 100644
> >> >>> --- a/drivers/pci/host/pci-xgene-msi.c
> >> >>> +++ b/drivers/pci/host/pci-xgene-msi.c
> >> >>> @@ -24,6 +24,7 @@
> >> >>>  #include <linux/pci.h>
> >> >>>  #include <linux/platform_device.h>
> >> >>>  #include <linux/of_pci.h>
> >> >>> +#include <linux/acpi.h>
> >> >>>
> >> >>>  #define MSI_IR0                     0x000000
> >> >>>  #define MSI_INT0            0x800000
> >> >>> @@ -39,7 +40,7 @@ struct xgene_msi_group {
> >> >>>  };
> >> >>>
> >> >>>  struct xgene_msi {
> >> >>> -    struct device_node      *node;
> >> >>> +    struct fwnode_handle    *fwnode;
> >> >>>      struct irq_domain       *inner_domain;
> >> >>>      struct irq_domain       *msi_domain;
> >> >>>      u64                     msi_addr;
> >> >>> @@ -249,6 +250,13 @@ static const struct irq_domain_ops msi_domain_ops = {
> >> >>>      .free   = xgene_irq_domain_free,
> >> >>>  };
> >> >>>
> >> >>> +#ifdef CONFIG_ACPI
> >> >>> +static struct fwnode_handle *xgene_msi_get_fwnode(struct device *dev)
> >> >>> +{
> >> >>> +    return xgene_msi_ctrl.fwnode;
> >> >>> +}
> >> >>> +#endif
> >> >>> +
> >> >>>  static int xgene_allocate_domains(struct xgene_msi *msi)
> >> >>>  {
> >> >>>      msi->inner_domain = irq_domain_add_linear(NULL, NR_MSI_VEC,
> >> >>> @@ -256,7 +264,7 @@ static int xgene_allocate_domains(struct xgene_msi *msi)
> >> >>>      if (!msi->inner_domain)
> >> >>>              return -ENOMEM;
> >> >>>
> >> >>> -    msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node),
> >> >>> +    msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode,
> >> >>>                                                  &xgene_msi_domain_info,
> >> >>>                                                  msi->inner_domain);
> >> >>
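For reference: the hunks quoted here only show msi->fwnode being consumed;
the part of the patch that produces it is not quoted. A rough sketch of the
usual v4.6-era pattern for a driver that may probe without a DT node, which
is not necessarily what this patch does, and where the variable names are
assumptions:

	/* DT boot: reuse the of_node's fwnode; ACPI boot: synthesize one. */
	if (pdev->dev.of_node)
		xgene_msi->fwnode = of_node_to_fwnode(pdev->dev.of_node);
	else
		xgene_msi->fwnode = irq_domain_alloc_fwnode(NULL);
	if (!xgene_msi->fwnode)
		return -ENOMEM;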
> >> >> This doesn't work for me (ACPI probing on Mustang) unless I change this
> >> >> call to pci_msi_create_default_irq_domain(). The problem seems to be that
> >> >> the MSI probe happens after the PCIe RC is probed, so there is no MSI
> >> >> domain at the time the PCIe root is initialized by ACPI.
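For context on why that hides the ordering problem: unlike
pci_msi_create_irq_domain(), the _default_ variant also records the domain
as the system-wide fallback, so a PCI device that had no MSI domain attached
when the root was enumerated still finds one at MSI allocation time. Roughly
(from memory of the v4.6-era drivers/pci/msi.c, so treat the details as
assumptions):

	/* Creates msi_domain *and* registers it as pci_msi_default_domain. */
	msi->msi_domain = pci_msi_create_default_irq_domain(msi->fwnode,
						&xgene_msi_domain_info,
						msi->inner_domain);

	/*
	 * When a device later allocates MSIs, the PCI/MSI core first tries
	 * dev_get_msi_domain(&dev->dev); if the host bridge was enumerated
	 * before this driver probed, that is NULL, and the core falls back
	 * to the default domain instead of arch_setup_msi_irqs().
	 */

Which is also why it is problematic with several MSI controllers, as pointed
out below.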
> >> >
> >> > pci_msi_create_default_irq_domain is the wrong thing to use, especially
> >> > if you have multiple MSI controllers in the system. I certainly wouldn't
> >> > want to see it being used on arm64.
> >> >
> >> > This is the usual dependency hell. You try moving the probing earlier,
> >> > but that may break something else in the process.
> >>
> >> Hi Mark and Marc,
> >>
> >> I have modified the Tianocore firmware to have the MSI node declared
> >> before the PCIe node in the DSDT table. With this modification, the MSI
> >> driver is loaded before the PCIe driver, and the MSI domain is available
> >> at the time the PCIe root is initialized.
> >
> > I am totally against this. We should not hack ACPI tables to make the
> > kernel work (on top of that with unwritten ordering rules that may well
> > require changes as the kernel evolves); we should have a standard set of
> > bindings that people use to describe HW in the DSDT, and the kernel(s)
> > have to cope with that. If there is a dependency problem in the
> > description, we may solve it at the bindings level, but I absolutely do
> > not want to rely on DSDT node ordering for things to work. That's
> > fragile; no shortcuts please.
> 
> Hi Lorenzo, Bjorn,
> 
> (Refresh this thread on this merge cycle)
> 
> I tried to use the _DEP method to describe the dependency of the PCIe node
> on the MSI node, but it turns out that the PCIe driver does not check for
> the dependency (only the EC (ec.c) and the I2C core (i2c-core.c) use
> acpi_walk_dep_device_list to check for dependencies before loading the
> dependent drivers).
> 
> Do you have any other suggestions or examples of how to ensure strict
> ordering of driver loading in ACPI boot mode without changing the order of
> ACPI nodes in the DSDT?
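For reference, if I remember the mechanism correctly, the _DEP plumbing
mentioned above boils down to the supplier calling
acpi_walk_dep_device_list() once it is functional; that decrements
acpi_device::dep_unmet on the consumers that listed it in _DEP and
re-attaches them once the count reaches zero (consumer drivers such as the
battery driver also check dep_unmet themselves and defer). A rough sketch of
the supplier side, with hypothetical X-Gene names; the call itself is the
same one the EC driver makes once the EC is up:

	#include <linux/acpi.h>

	static int xgene_msi_probe(struct platform_device *pdev)
	{
		/* ... set up the MSI hardware and irq domains ... */

		/*
		 * Tell the ACPI core this device is ready: consumers that
		 * listed it in _DEP get dep_unmet decremented and are
		 * released once all of their dependencies have been met.
		 */
		acpi_walk_dep_device_list(ACPI_HANDLE(&pdev->dev));
		return 0;
	}

Whether the PCI root bridge scan path honours dep_unmet at all is exactly
the gap pointed out above.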

Before doing anything else, I would like to ask you to report the code
paths leading to the failures (what actually fails?), because I want to
understand what "this does not work" means.

In particular:

- Why using pci_msi_create_default_irq_domain() works
- Why things work if you change the DSDT node ordering (please add some
  debugging to the MSI allocation functions to detect the code paths
  leading to correct MSI allocation; see the sketch below)
- What the difference is wrt the DT probe path
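
Purely as an illustration of the kind of debugging I mean (the exact
functions and messages are up to you):

	/* In xgene_allocate_domains(): report whether the MSI domain was
	 * actually created, and against which fwnode. */
	msi->msi_domain = pci_msi_create_irq_domain(msi->fwnode,
						    &xgene_msi_domain_info,
						    msi->inner_domain);
	pr_info("xgene-msi: msi_domain %screated (fwnode %p)\n",
		msi->msi_domain ? "" : "NOT ", msi->fwnode);

	/* In the msi_domain_ops .alloc callback (xgene_irq_domain_alloc):
	 * show who gets here and with how many vectors. */
	pr_info("xgene-msi: %s virq %u nr_irqs %u\n", __func__, virq, nr_irqs);

That should make it obvious whether the failing configuration never reaches
the allocation path (i.e. no MSI domain is found for the device) or reaches
it and fails in the driver.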

When we have a detailed view of the MSI allocation code paths and failures,
we can see what we can do to fix this in ACPI.

Thanks,
Lorenzo
