Message-ID: <20240426034323.417219-1-pasha.tatashin@soleen.com>
Date: Fri, 26 Apr 2024 03:43:20 +0000
From: Pasha Tatashin <pasha.tatashin@...een.com>
To: akpm@...ux-foundation.org,
	linux-mm@...ck.org,
	pasha.tatashin@...een.com,
	linux-kernel@...r.kernel.org,
	rientjes@...gle.com,
	dwmw2@...radead.org,
	baolu.lu@...ux.intel.com,
	joro@...tes.org,
	will@...nel.org,
	robin.murphy@....com,
	iommu@...ts.linux.dev
Subject: [RFC v2 0/3] iommu/intel: Free empty page tables on unmaps

Changelog
================================================================
v2: Use mapcount instead of refcount
    Synchronized with IOMMU Observability changes.
================================================================

This series frees empty page tables on unmaps. It is intended to be a
low-overhead feature.

A reader-writer lock is used to synchronize page table accesses, and
most of the time the lock is held in reader mode. It is held as a
writer only for the short period when unmapping a page that is bigger
than the current iova request, since that is when intermediate page
tables may be freed. For all other cases this lock is taken read-only.

page->_mapcount is used to track the number of entries in each
page-table page, so a table can be freed when its count drops to zero.
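The per-table entry counting can be modeled like this. The struct and function names (pt_page, pt_set_entry, pt_clear_entry) are hypothetical; atomic_int stands in for the page->_mapcount field, which in the kernel is manipulated with its own helpers.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative model: count live entries per page-table page, in the
 * spirit of reusing page->_mapcount as described above. */
struct pt_page {
	atomic_int nr_entries;  /* stands in for page->_mapcount */
	unsigned long *entries; /* 512 PTEs in a real 4K table */
};

static void pt_set_entry(struct pt_page *pt, int idx, unsigned long val)
{
	pt->entries[idx] = val;
	atomic_fetch_add(&pt->nr_entries, 1);
}

/* Returns 1 when the table just became empty, so the caller (holding
 * the lock as a writer) may free it. */
static int pt_clear_entry(struct pt_page *pt, int idx)
{
	pt->entries[idx] = 0;
	return atomic_fetch_sub(&pt->nr_entries, 1) == 1;
}
```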

Microbenchmark data using iova_stress[1]:

Base:
$ ./iova_stress -s 16
dma_size:       4K iova space: 16T iommu: ~  32847M time:   36.074s

Fix:
$ ./iova_stress -s 16
dma_size:       4K iova space: 16T iommu: ~     27M time:   38.870s

The test maps/unmaps 4K pages and cycles through the IOVA space in a tight loop.
Base uses ~32G of memory, and the test completes in 36.074s.
Fix uses ~27M (effectively 0G) of memory, and the test completes in 38.870s.

I believe the proposed fix is a good compromise in terms of complexity/
scalability. A more scalable solution would be to use a reader/writer
lock per page-table page, using the page->private field to store the
lock itself.

However, since the iommu layer already provides some protection (i.e.
no one else touches the iova range of an in-flight map/unmap request),
we can avoid the extra complexity and rely on a single RW lock for the
whole page table, held in reader mode most of the time.

[1] https://github.com/soleen/iova_stress

Pasha Tatashin (3):
  iommu/intel: Use page->_mapcount to count number of entries in IOMMU
  iommu/intel: synchronize page table map and unmap operations
  iommu/intel: free empty page tables on unmaps

 drivers/iommu/intel/iommu.c | 154 ++++++++++++++++++++++++++++--------
 drivers/iommu/intel/iommu.h |  42 ++++++++--
 drivers/iommu/iommu-pages.h |  30 +++++--
 3 files changed, 180 insertions(+), 46 deletions(-)

-- 
2.44.0.769.g3c40516874-goog

