Message-ID: <43ba615f-a6e1-9444-65e1-494169cb415d@deltatee.com>
Date: Thu, 1 Mar 2018 14:57:06 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: Dan Williams <dan.j.williams@...el.com>, benh@....ibm.com
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <keith.busch@...el.com>,
Oliver OHalloran <oliveroh@....ibm.com>,
Alex Williamson <alex.williamson@...hat.com>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
linux-rdma <linux-rdma@...r.kernel.org>,
linux-pci@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-nvme@...ts.infradead.org, linux-block@...r.kernel.org,
Jérôme Glisse <jglisse@...hat.com>,
Jason Gunthorpe <jgg@...lanox.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Max Gurtovoy <maxg@...lanox.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
On 01/03/18 02:45 PM, Logan Gunthorpe wrote:
> It handles it fine for many situations. But when you try to map
> something at the very end of the physical address space, the sparse
> vmemmap needs virtual address space proportional to the size of the
> physical address space divided by PAGE_SIZE, which may be a little
> bit too large...
Though, considering this more, maybe this shouldn't be a problem...
Let's say you have 56 bits of physical address space. That's 64PB. A
sparse vmemmap covering the entire space is one struct page per 4KB
page, i.e. 64PB / 4KB = 16T entries, or about 1PB of virtual address
space at 64 bytes per struct page. That still leaves you roughly 63PB
of address space. (Similar calculations hold for other numbers of
address bits.)
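
To make the arithmetic explicit, here's a quick userspace sketch (not
kernel code; it assumes 4KB pages and a 64-byte struct page, which is
typical on 64-bit but arch-dependent):

#include <stdio.h>

int main(void)
{
        /* Assumed parameters -- adjust for other configurations. */
        unsigned long long phys_bits = 56;       /* physical address bits */
        unsigned long long page_shift = 12;      /* 4KB pages */
        unsigned long long page_struct_sz = 64;  /* bytes per struct page */

        unsigned long long phys_size = 1ULL << phys_bits;        /* 64PB */
        unsigned long long nr_pages = phys_size >> page_shift;   /* 16T pages */
        unsigned long long vmemmap = nr_pages * page_struct_sz;  /* ~1PB */

        printf("phys space: %llu PB\n", phys_size >> 50);
        printf("pages:      %llu T\n", nr_pages >> 40);
        printf("vmemmap:    %llu PB\n", vmemmap >> 50);
        return 0;
}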
So I'm not sure what the problem with this is.
We still have to ensure all the arches map the memory with the right
cache bits, but that should be relatively easy to solve.
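
Just to illustrate what I mean by the cache bits, here's a minimal
driver-level sketch (a hypothetical device exposing its p2p memory in
BAR 4, error handling omitted). The arch-side work for giving these
pages a vmemmap is more involved, but the gist is the same: the BAR
memory has to be mapped with an appropriate memory type, e.g.
write-combined rather than cached:

#include <linux/pci.h>
#include <linux/io.h>

static void __iomem *map_p2p_bar(struct pci_dev *pdev)
{
        resource_size_t start = pci_resource_start(pdev, 4);
        resource_size_t len = pci_resource_len(pdev, 4);

        /* Map the BAR write-combined instead of cached; picking the
         * right cache attribute here is the per-arch detail that has
         * to be correct for p2p memory. */
        return ioremap_wc(start, len);
}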
Logan