Message-Id: <20080711152054C.fujita.tomonori@lab.ntt.co.jp>
Date: Fri, 11 Jul 2008 15:20:56 +0900
From: FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
To: mpatocka@...hat.com
Cc: sparclinux@...r.kernel.org, linux-kernel@...r.kernel.org,
jens.axboe@...cle.com
Subject: Re: [SUGGESTION]: drop virtual merge accounting in I/O requests
On Thu, 10 Jul 2008 17:56:08 -0400 (EDT)
Mikulas Patocka <mpatocka@...hat.com> wrote:
> When I thought about it more, I realized that this accounting of virtual
> segments in the I/O layer can't work correctly at all. If an architecture
> defines the symbols BIOVEC_VIRT_MERGEABLE and BIOVEC_VIRT_OVERSIZE, it
> declares that its IOMMU must merge any two regions satisfying those
> conditions. But an IOMMU can't guarantee that, because:
Yeah, IOMMUs can't guarantee that. The majority of architectures set
BIO_VMERGE_BOUNDARY to 0 so they don't hit this, I think.
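
For reference, the virtual merge test is basically an alignment check: two
biovecs only count as one hardware segment if the end of the first and the
start of the second both sit on the IOMMU merge boundary. A minimal sketch
of that condition (illustrative names, not the actual bio.h macros):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: assumes a power-of-two merge boundary such as the
 * IOMMU page size. Two physically discontiguous segments can only be
 * remapped into one bus-address segment if the gap between them falls
 * exactly on that boundary. */
static bool virt_mergeable(uint64_t prev_end,   /* phys addr just past segment 1 */
			   uint64_t next_start, /* phys addr of segment 2 */
			   uint64_t boundary)   /* e.g. BIO_VMERGE_BOUNDARY */
{
	uint64_t mask = boundary - 1;

	return ((prev_end | next_start) & mask) == 0;
}

(With a boundary of 0 the mask becomes all ones, so in a check like this
merging effectively never happens, which matches the observation above.)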
> * the bus address is allocated basically at random, so we can hit
> dev->dma_parms->segment_boundary_mask at any time. This will prevent
> virtual merging from happening. The I/O layer doesn't know the bus
> address at the time it merges requests, so it can't predict when this
> happens.
>
> * the IOMMU isn't guaranteed to find contiguous space in its bus
> address space. If it skips over already-mapped regions, it can't
> perform virtual merging.
>
> * when creating the mapping, we can hit the per-device limit
> "dev->dma_parms->max_segment_size" --- but the I/O layer checks only
> against the global limit BIOVEC_VIRT_OVERSIZE. (This last issue is
> fixable; the previous two are not.)
I think that the block layer can handle this properly via
q->max_segment_size. We do keep the same value in two different
places, though. Yeah, it's not good...
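
If we consolidate them, the wiring is small; a rough sketch, assuming the
driver owns both the struct device and the request queue, and that
dev->dma_parms is already allocated (the numeric limits below are made-up
example values):

#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

/* Sketch: keep the block layer's segment limits in sync with the limits
 * the IOMMU mapping code reads from dev->dma_parms, so a request built
 * by the block layer never exceeds what the mapping can honour. */
static void example_sync_dma_limits(struct request_queue *q, struct device *dev)
{
	/* per-device caps checked when the mapping is created */
	dma_set_max_seg_size(dev, 65536);	/* example value */
	dma_set_seg_boundary(dev, 0xffffffff);	/* example mask */

	/* tell the block layer the same limits so merging stops early */
	blk_queue_max_segment_size(q, dma_get_max_seg_size(dev));
	blk_queue_segment_boundary(q, dma_get_seg_boundary(dev));
}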
BTW, inia100_template sets sg_tablesize to SG_ALL. If the controller
has at most 32 SG entries per request, we need to fix that.
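
If the hardware really is limited to 32 entries, the fix is just the
template field; roughly (hypothetical template name, everything else in
the real driver left out):

#include <scsi/scsi_host.h>

/* Sketch: advertise the controller's real scatter-gather limit instead
 * of SG_ALL, so the block layer never hands the driver a request with
 * more segments than the hardware SG table can hold. */
static struct scsi_host_template example_template = {
	.name		= "example",
	.sg_tablesize	= 32,	/* hardware limit instead of SG_ALL */
};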