Open Source and information security mailing list archives
 
Message-ID: <a1f6e2d9-8b7d-10ae-5963-50b447cacb44@deltatee.com>
Date:   Fri, 31 Aug 2018 10:26:08 -0600
From:   Logan Gunthorpe <logang@...tatee.com>
To:     Jonathan Cameron <jonathan.cameron@...wei.com>
Cc:     linux-kernel@...r.kernel.org, linux-pci@...r.kernel.org,
        linux-nvme@...ts.infradead.org, linux-rdma@...r.kernel.org,
        linux-nvdimm@...ts.01.org, linux-block@...r.kernel.org,
        Stephen Bates <sbates@...thlin.com>,
        Christoph Hellwig <hch@....de>,
        Keith Busch <keith.busch@...el.com>,
        Sagi Grimberg <sagi@...mberg.me>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Jason Gunthorpe <jgg@...lanox.com>,
        Max Gurtovoy <maxg@...lanox.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Jérôme Glisse <jglisse@...hat.com>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Alex Williamson <alex.williamson@...hat.com>,
        Christian König <christian.koenig@....com>
Subject: Re: [PATCH v5 01/13] PCI/P2PDMA: Support peer-to-peer memory



On 31/08/18 10:19 AM, Jonathan Cameron wrote:
> This feels like a somewhat simplistic starting point rather than a
> generally correct estimate to use.  Should we be taking the bandwidth of
> those links into account for example, or any discoverable latencies?
> Not all PCIe switches are alike - particularly when it comes to P2P.

I don't think this is necessary. There typically won't be much choice in
terms of devices to use, and when there is, the hardware will probably be
fairly homogeneous. For example, it would be unusual to have one NVMe
drive on a x4 link and another on a x8, and mixing, say, Gen3 switches
with Gen4 ones would also be very strange. In weird, unusual cases like
this, where the user specifically wants to use a faster device, they can
specify that device in the configfs interface.

I think the latency would probably be proportional to the distance, which
is what we are already using.
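For illustration, the "distance" metric being discussed can be sketched as
the number of hops each device takes to reach a common upstream PCIe
bridge. This is a toy Python model with a made-up parent map, not the
actual pci_p2pdma kernel code or real sysfs topology:

```python
# Toy sketch of a P2PDMA-style distance metric: the distance between two
# devices is the sum of the hops each takes to reach their nearest common
# upstream bridge. The topology below is hypothetical.

def upstream_path(dev, parent):
    """Return the chain of devices/bridges from dev up to the root."""
    path = []
    while dev is not None:
        path.append(dev)
        dev = parent.get(dev)
    return path

def p2p_distance(a, b, parent):
    """Hops from a plus hops from b to their nearest common upstream
    bridge, or None if they share no upstream path (no P2P route)."""
    path_a = upstream_path(a, parent)
    path_b = upstream_path(b, parent)
    bridges_b = set(path_b)
    for hops_a, bridge in enumerate(path_a):
        if bridge in bridges_b:
            return hops_a + path_b.index(bridge)
    return None

# Hypothetical topology: two NVMe drives and an RDMA NIC behind one switch.
parent = {
    "nvme0": "switch0", "nvme1": "switch0", "nic0": "switch0",
    "switch0": "root0",
}
```

With this model, `p2p_distance("nvme0", "nic0", parent)` is 2 (one hop
from each device to the shared switch), so all providers behind the same
switch score equally, matching the point above that a simple hop count is
usually enough to pick a provider.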

> I guess that can be a topic for future development if it turns out people
> have horrible mixed systems.

Yup!

Logan
