lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Fri, 6 Jun 2008 14:28:36 -0700
From:	"Grant Grundler" <grundler@...gle.com>
To:	"Muli Ben-Yehuda" <muli@...ibm.com>
Cc:	"FUJITA Tomonori" <fujita.tomonori@....ntt.co.jp>,
	linux-kernel@...r.kernel.org, mgross@...ux.intel.com,
	linux-scsi@...r.kernel.org
Subject: Re: Intel IOMMU (and IOMMU for Virtualization) performances

On Fri, Jun 6, 2008 at 1:21 PM, Muli Ben-Yehuda <muli@...ibm.com> wrote:
....
>> It's possible to split up one flat address space and share the IOMMU
>> among several users. Each user gets her own segment of bitmap and
>> corresponding IO Pdir. So I don't see allocation policy as a strong
>> reason to use Red/Black Tree.
>
> Do you mean multiple users sharing the same I/O address space (but
> each user using a different segment), or multiple users, each with its
> own I/O address space, but only using a specific segment of that
> address space and using a single bitmap to represent free space in all
> segments?

Yes, I meant the former.

> If the former, then you are losing some of the benefit of
> the IOMMU since all users can DMA to other users areas (same I/O
> address space). If the latter, having a bitmap per IO address space
> seems simpler and would have the same memory consumption.


Agreed. It's a trade off.

...
>> I've never been able to come up with a good heuristic for
>> determining the size of the IOVA space. It generally does NOT need
>> to map all of Host Physical RAM.  The actual requirement depends
>> entirely on the workload, type and number of IO devices
>> installed. The problem is we don't know any of those things until
>> well after the IOMMU is already needed.
>
> Why not do what hash-tables implementation do, start small and resize
> when we approach half-full?

Historically the IOMMUs needed physically contiguous memory and
resizing essentially meant quiescing all DMA, moving the IO Pdir data
to the new bigger location,  allocating a new bitmap and cloning the
state into that as well, and then resuming DMA operations. The DMA
quiesce requirement effectively meant a reboot. My understanding of
Vt-d is the ranges can be added range at a time and thus can be easily
resized. But it will mean more complex logic in the IOMMU bitmap
handling for a domain which owns multiple bitmaps and thus a slightly
higher CPU utilization cost. At least that's my guess. I'm not working
on any IOMMU code lately...

thanks,
grant
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ