Date:   Thu, 1 Mar 2018 14:31:23 -0800
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Benjamin Herrenschmidt <benh@....ibm.com>
Cc:     Jason Gunthorpe <jgg@...pe.ca>,
        Dan Williams <dan.j.williams@...el.com>,
        Logan Gunthorpe <logang@...tatee.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-pci@...r.kernel.org,
        linux-nvme <linux-nvme@...ts.infradead.org>,
        linux-rdma <linux-rdma@...r.kernel.org>,
        linux-nvdimm <linux-nvdimm@...ts.01.org>,
        linux-block <linux-block@...r.kernel.org>,
        Stephen Bates <sbates@...thlin.com>,
        Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>,
        Keith Busch <keith.busch@...el.com>,
        Sagi Grimberg <sagi@...mberg.me>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Max Gurtovoy <maxg@...lanox.com>,
        Jérôme Glisse <jglisse@...hat.com>,
        Alex Williamson <alex.williamson@...hat.com>,
        Oliver OHalloran <oliveroh@....ibm.com>
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory

On Thu, Mar 1, 2018 at 2:06 PM, Benjamin Herrenschmidt <benh@....ibm.com> wrote:
>
> Could be that x86 has the smarts to do the right thing, still trying to
> untangle the code :-)

Afaik, x86 will not cache PCI unless the system is misconfigured, and
even then it's more likely to just raise a machine check exception
than cache things.

The last-level cache is going to do fills and spills directly to the
memory controller, not to the PCIe side of things.

(I guess you *can* do things differently, and I wouldn't be surprised
if some people inside Intel did try to do things differently when
experimenting with nvram over PCIe, but in general I think the above is true)

You won't find it in the kernel code either. It's in hardware with
firmware configuration of what addresses are mapped to the memory
controllers (and _how_ they are mapped) and which are not.

You _might_ find it in the BIOS, assuming you understood the tables
and had the BIOS writer's guide to unravel the magic registers.

But you might not even find it there. Some of the memory unit timing
programming is done very early, and by code that Intel doesn't even
release to the BIOS writers except as a magic encrypted blob, afaik.
Some of the magic might even be in microcode.

The page table settings for cacheability are more like a hint, and
only _part_ of the whole picture. The memory type range registers are
another part. And low-level uarch, northbridge and memory-unit
specific magic is yet another part.
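[The interplay described above can be sketched as a toy model: the
effective memory type of an access is roughly the more restrictive of
the MTRR range type and the PAT type from the page tables. This is a
deliberate simplification -- the real combination table in the Intel
SDM has special cases -- so the function below is illustrative only,
not a faithful model of any particular CPU:]

```python
# Toy model of how MTRR range types and PAT page-table types combine.
# Simplification (assumption): the more restrictive (less cached) type
# wins. The real combination rules in the Intel SDM have extra special
# cases; this is illustrative only.

# Restrictiveness order: lower rank = less caching allowed.
RANK = {
    "UC": 0,  # uncacheable
    "WC": 1,  # write-combining
    "WT": 2,  # write-through
    "WP": 3,  # write-protected
    "WB": 4,  # write-back (fully cached)
}

def effective_type(mtrr_type: str, pat_type: str) -> str:
    """Return the more restrictive of the two memory types."""
    return min(mtrr_type, pat_type, key=RANK.__getitem__)

# PCI BARs normally sit in a UC MTRR range, so no page-table setting
# can make them cacheable -- the page tables are only part of the picture.
assert effective_type("UC", "WB") == "UC"
# Ordinary RAM is WB in the MTRRs; there the PAT bits do matter.
assert effective_type("WB", "WC") == "WC"
```

[In this toy model you can see why marking a PCIe BAR cacheable in the
page tables achieves nothing: the UC range type dominates whatever the
PTE asks for.]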

So you can disable caching for memory, but I'm pretty sure you can't
enable caching for PCIe at least in the common case. At best you can
affect how the store buffer works for PCIe.
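[You can at least observe the MTRR side of this from userspace on
Linux. A hedged sketch -- the /proc/mtrr interface only exists on x86
kernels with MTRR support compiled in, and may be unreadable without
privileges, so it falls back gracefully elsewhere:]

```python
# Print the CPU's MTRR ranges on Linux/x86, where the kernel exposes
# them via /proc/mtrr. On other systems (or without permission) the
# file is absent or unreadable, so fall back gracefully.

def read_mtrrs(path: str = "/proc/mtrr") -> str:
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return "MTRR interface not available on this system"

print(read_mtrrs())
```

[Typical entries show base/size ranges tagged write-back for RAM and
uncachable or write-combining for device windows, but nothing here
lets you *add* cacheability to a PCIe range.]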

              Linus
