Date:   Wed, 14 Jul 2021 18:36:37 +0800
From:   John Garry <john.garry@...wei.com>
To:     <joro@...tes.org>, <will@...nel.org>, <robin.murphy@....com>,
        <baolu.lu@...ux.intel.com>
CC:     <iommu@...ts.linux-foundation.org>, <linuxarm@...wei.com>,
        <thierry.reding@...il.com>, <airlied@...ux.ie>, <daniel@...ll.ch>,
        <jonathanh@...dia.com>, <sakari.ailus@...ux.intel.com>,
        <bingbu.cao@...el.com>, <tian.shu.qiu@...el.com>,
        <mchehab@...nel.org>, <gregkh@...uxfoundation.org>,
        <digetx@...il.com>, <mst@...hat.com>, <jasowang@...hat.com>,
        <linux-kernel@...r.kernel.org>, <chenxiang66@...ilicon.com>,
        John Garry <john.garry@...wei.com>
Subject: [PATCH v4 0/6] iommu: Allow IOVA rcache range be configured

For streaming DMA mappings which involve an IOMMU and whose IOVA length
regularly exceeds the IOVA rcache upper limit (meaning that they are not
cached), performance can be reduced.

This may be much more pronounced from commit 4e89dce72521 ("iommu/iova:
Retry from last rb tree node if iova search fails"), as discussed at [0].

IOVAs which cannot be cached are highly involved in the IOVA ageing issue,
as discussed at [1].

This series allows the IOVA rcache range to be configured, so that we may
cache all IOVAs per domain, thus improving performance.

A new IOMMU group sysfs file is added - max_opt_dma_size - which is used
indirectly to configure the IOVA rcache range:
/sys/kernel/iommu_groups/X/max_opt_dma_size

This file is updated in the same way as the IOMMU group default domain
type, i.e. the only device in the group must be unbound first.
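The procedure above might look as follows from the shell (group number, PCI device address, driver name, and size value are all illustrative, not taken from this series):

```shell
# Unbind the only device in the IOMMU group from its driver first
echo 0000:74:02.0 > /sys/bus/pci/drivers/hisi_sas_v3_hw/unbind

# Set the max optimal DMA size for the group, e.g. 1 MiB
echo 1048576 > /sys/kernel/iommu_groups/0/max_opt_dma_size

# Rebind; the group's IOVA domain is set up with the wider rcache range
echo 0000:74:02.0 > /sys/bus/pci/drivers/hisi_sas_v3_hw/bind
```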

The inspiration here comes from the block layer request queue sysfs
"optimal_io_size" file, in /sys/block/sdX/queue/optimal_io_size

Some figures for a storage scenario (when increasing the IOVA rcache range
to cover all DMA mapping sizes from the LLD):
v5.13-rc1 baseline:			1200K IOPS
With series:				1800K IOPS

All above are for IOMMU strict mode. Non-strict mode gives ~1800K IOPS in
all scenarios.

[0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/
[1] https://lore.kernel.org/linux-iommu/1607538189-237944-1-git-send-email-john.garry@huawei.com/

Note that I cc'ed maintainers/reviewers only for the changes associated
with patch #5 since it just touches their code in only a minor way.

John Garry (6):
  iommu: Refactor iommu_group_store_type()
  iova: Allow rcache range upper limit to be flexible
  iommu: Allow iommu_change_dev_def_domain() realloc default domain for
    same type
  iommu: Allow max opt DMA len be set for a group via sysfs
  iova: Add iova_len argument to init_iova_domain()
  dma-iommu: Pass iova len for IOVA domain init

 .../ABI/testing/sysfs-kernel-iommu_groups     |  16 ++
 drivers/gpu/drm/tegra/drm.c                   |   2 +-
 drivers/gpu/host1x/dev.c                      |   2 +-
 drivers/iommu/dma-iommu.c                     |  15 +-
 drivers/iommu/iommu.c                         | 172 ++++++++++++------
 drivers/iommu/iova.c                          |  39 +++-
 drivers/staging/media/ipu3/ipu3-dmamap.c      |   2 +-
 drivers/staging/media/tegra-vde/iommu.c       |   2 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c              |   2 +-
 include/linux/iommu.h                         |   6 +
 include/linux/iova.h                          |   9 +-
 11 files changed, 194 insertions(+), 73 deletions(-)

-- 
2.26.2
