Date:	Fri, 18 May 2012 19:20:02 -0600
From:	Bjorn Helgaas <bhelgaas@...gle.com>
To:	Xudong Hao <xudong.hao@...ux.intel.com>
Cc:	linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, avi@...hat.com, alex.williamson@...hat.com,
	xiantao.zhang@...el.com, xudong.hao@...el.com
Subject: Re: [PATCH 1/1] Enable LTR/OBFF before device is used by driver

On Sun, May 13, 2012 at 8:48 PM, Xudong Hao <xudong.hao@...ux.intel.com> wrote:
> Enable LTR (Latency tolerance reporting) and OBFF (optimized buffer flush/fill) in
>  pci_enable_device(), so that they are enabled before the device is used by the driver.

Please split this into two patches (one for LTR and another for OBFF)
so they can be reverted individually if they cause trouble.  It would
be nice if you bundled these together with your other "save/restore
max Latency Value" patch so they were all together on the mailing
list.

I read the LTR sections of the PCIe spec, but I can't figure out how
it's supposed to work.  It says "power management policies ... can be
implemented to consider Endpoint service requirements."  Does that
mean there's some OS software that might be involved, or is this just
a matter of software flipping the LTR-enable bit and the hardware
doing everything else?  How confident can we be that enabling this is
safe?
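
For context, my understanding is that "flipping the LTR-enable bit"
means setting the LTR Mechanism Enable bit in the Device Control 2
register of the PCIe capability.  A rough sketch of that, just to make
sure we're talking about the same thing (the helper name is made up,
and I'm assuming the PCI_EXP_DEVCTL2_LTR_EN spelling from pci_regs.h;
the exact macro name may differ by tree):

    /* Illustrative only; this is roughly what pci_enable_ltr() does. */
    static void ltr_enable_sketch(struct pci_dev *dev)
    {
            int pos = pci_pcie_cap(dev);
            u16 ctrl;

            if (!pos)
                    return;

            pci_read_config_word(dev, pos + PCI_EXP_DEVCTL2, &ctrl);
            ctrl |= PCI_EXP_DEVCTL2_LTR_EN; /* LTR Mechanism Enable */
            pci_write_config_word(dev, pos + PCI_EXP_DEVCTL2, ctrl);
    }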

For OBFF, is there some OS piece not included here that tells a Root
Complex "now is a good time for Endpoints to do something"?  The spec
mentions an "operating system timer tick," for example.  Is there some
benefit to this patch without that piece?  I don't understand the big
picture yet.

> Signed-off-by: Xudong Hao <xudong.hao@...el.com>
>
> ---
>  drivers/pci/pci.c |   29 +++++++++++++++++++++++++++++
>  1 files changed, 29 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 111569c..2369883 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -1134,6 +1134,31 @@ int pci_load_and_free_saved_state(struct pci_dev *dev,
>  }
>  EXPORT_SYMBOL_GPL(pci_load_and_free_saved_state);
>
> +static void pci_enable_dev_caps(struct pci_dev *dev)
> +{
> +       /* set default value */
> +       unsigned long type = PCI_EXP_OBFF_SIGNAL_ALWAYS;

There's only one use of this value, so skip the variable and just use
PCI_EXP_OBFF_SIGNAL_ALWAYS in the call.

The comment at pci_enable_obff() says PCI_OBFF_SIGNAL_L0 is the
preferred type, so please explain why you're not using that.
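Untested sketch of what I'd expect here, assuming the
PCI_EXP_OBFF_SIGNAL_L0 enum spelling from pci.h is what that comment
means by PCI_OBFF_SIGNAL_L0:

    pci_enable_ltr(dev);
    pci_enable_obff(dev, PCI_EXP_OBFF_SIGNAL_L0);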

> +
> +       /* LTR(Latency tolerance reporting) allows devices to send
> +        * messages to the root complex indicating their latency
> +        * tolerance for snooped & unsnooped memory transactions.
> +        */

Follow Linux comment style, i.e.,

    /*
     * LTR ...
     */
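
Applied to this hunk, that would read:

    /*
     * LTR (Latency tolerance reporting) allows devices to send
     * messages to the root complex indicating their latency
     * tolerance for snooped & unsnooped memory transactions.
     */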

> +       pci_enable_ltr(dev);
> +
> +       /* OBFF (optimized buffer flush/fill), where supported,
> +        * can help improve energy efficiency by giving devices
> +        * information about when interrupts and other activity
> +        * will have a reduced power impact.
> +        */
> +       pci_enable_obff(dev, type);
> +}
> +
> +static void pci_disable_dev_caps(struct pci_dev *dev)
> +{
> +       pci_disable_obff(dev);
> +       pci_disable_ltr(dev);
> +}
> +
>  static int do_pci_enable_device(struct pci_dev *dev, int bars)
>  {
>        int err;
> @@ -1146,6 +1171,9 @@ static int do_pci_enable_device(struct pci_dev *dev, int bars)
>                return err;
>        pci_fixup_device(pci_fixup_enable, dev);
>
> +       /* Enable some device capabilities before the device is used by the driver. */
> +       pci_enable_dev_caps(dev);

Why is this here?  It seems similar to what's already in
pci_init_capabilities().  Is there a reason to do this in the
pci_enable_device() path rather than in the pci_device_add() path?
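
If there's no dependency on the enable path, I'd expect the call to
live with the other one-time setup in probe.c, roughly like this
(sketch only; existing body abbreviated):

    static void pci_init_capabilities(struct pci_dev *dev)
    {
            /* ... existing capability setup ... */

            /* One-time capability enables could go here instead: */
            pci_enable_dev_caps(dev);
    }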

> +
>        return 0;
>  }
>
> @@ -1361,6 +1389,7 @@ static void do_pci_disable_device(struct pci_dev *dev)
>        }
>
>        pcibios_disable_device(dev);
> +       pci_disable_dev_caps(dev);
>  }
>
>  /**
> --
> 1.6.0.rc1
>