Message-ID: <20130116101822.GA17706@avionic-0098.adnet.avionic-design.de>
Date: Wed, 16 Jan 2013 11:18:22 +0100
From: Thierry Reding <thierry.reding@...onic-design.de>
To: Jason Gunthorpe <jgunthorpe@...idianresearch.com>
Cc: Arnd Bergmann <arnd@...db.de>,
Stephen Warren <swarren@...dotorg.org>,
linux-tegra@...r.kernel.org,
Grant Likely <grant.likely@...retlab.ca>,
Rob Herring <rob.herring@...xeda.com>,
Russell King <linux@....linux.org.uk>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Andrew Murray <andrew.murray@....com>,
Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>,
devicetree-discuss@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-pci@...r.kernel.org
Subject: Re: [PATCH 05/14] lib: Add I/O map cache implementation
On Thu, Jan 10, 2013 at 12:24:17PM -0700, Jason Gunthorpe wrote:
> On Thu, Jan 10, 2013 at 08:03:27PM +0100, Thierry Reding wrote:
>
> > > > You'd piece a mapping together, each bus requires 16 64k mappings, a
> > > > simple 2d array of busnr*16 of pointers would do the trick. A more
> > > > clever solution would be to allocate contiguous virtual memory and
> > > > split that up..
>
> > > Oh, I see. I'm not very familiar with the internals of remapping, so
> > > I'll need to do some more reading. Thanks for the hints.
> >
> > I forgot to ask. What's the advantage of having a contiguous virtual
> > memory area and splitting it up versus remapping each chunk separately?
>
> Not a lot, really, but it saves you from the pointer array and
> associated overhead. IIRC it is fairly easy to do in the kernel.
>
> Arnd's version is good too, but you would be restricted to aligned
> powers of two for the bus number range in the DT, which is probably
> not that big a deal either?
I've been trying to make this work, but this implementation always
triggers a BUG_ON() in lib/ioremap.c, line 27:
27 BUG_ON(!pte_none(*pte));
which seems to indicate that the page is already mapped, right?
Below is the relevant code:
struct tegra_pcie_bus {
	struct vm_struct *area;
	struct list_head list;
	unsigned int nr;
};

static struct tegra_pcie_bus *tegra_pcie_bus_alloc(struct tegra_pcie *pcie,
						   unsigned int busnr)
{
	unsigned long flags = VM_READ | VM_WRITE | VM_IO | VM_PFNMAP |
			      VM_DONTEXPAND | VM_DONTDUMP;
	phys_addr_t cs = pcie->cs->start;
	struct tegra_pcie_bus *bus;
	struct vm_struct *vm;
	unsigned int i;
	int err;

	bus = devm_kzalloc(pcie->dev, sizeof(*bus), GFP_KERNEL);
	if (!bus)
		return ERR_PTR(-ENOMEM);

	INIT_LIST_HEAD(&bus->list);
	bus->nr = busnr;

	bus->area = get_vm_area(SZ_1M, VM_IOREMAP);
	if (!bus->area) {
		err = -ENOMEM;
		goto free;
	}

	for (i = 0; i < 16; i++) {
		unsigned long virt = (unsigned long)bus->area->addr +
				     i * SZ_64K;
		phys_addr_t phys = cs + busnr * SZ_64K + i * SZ_1M;

		err = ioremap_page_range(virt, virt + SZ_64K - 1, phys,
					 vm_get_page_prot(flags));
		if (err < 0) {
			dev_err(pcie->dev, "ioremap_page_range() failed: %d\n",
				err);
			goto unmap;
		}
	}

	return bus;

unmap:
	vunmap(bus->area->addr);
	free_vm_area(bus->area);
free:
	devm_kfree(pcie->dev, bus);
	return ERR_PTR(err);
}
Anybody see what's wrong with that?