Message-Id: <5-v3-76b587fe28df+6e3-iommu_map_gfp_jgg@nvidia.com>
Date: Mon, 23 Jan 2023 16:35:58 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>,
Joerg Roedel <joro@...tes.org>,
Kevin Tian <kevin.tian@...el.com>,
Matthew Rosato <mjrosato@...ux.ibm.com>,
Robin Murphy <robin.murphy@....com>
Cc: Alex Williamson <alex.williamson@...hat.com>,
ath10k@...ts.infradead.org, ath11k@...ts.infradead.org,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
dri-devel@...ts.freedesktop.org, iommu@...ts.linux.dev,
kvm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-arm-msm@...r.kernel.org, linux-media@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-remoteproc@...r.kernel.org,
linux-s390@...r.kernel.org,
linux-stm32@...md-mailman.stormreply.com,
linux-tegra@...r.kernel.org, linux-wireless@...r.kernel.org,
netdev@...r.kernel.org, nouveau@...ts.freedesktop.org,
Niklas Schnelle <schnelle@...ux.ibm.com>,
virtualization@...ts.linux-foundation.org
Subject: [PATCH v3 05/10] iommufd: Use GFP_KERNEL_ACCOUNT for iommu_map()
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain. Many drivers will allocate these at
iommu_map() time and will trivially do the right thing if we pass in
GFP_KERNEL_ACCOUNT.
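
For illustration only (not part of this patch): a minimal sketch of how a
driver's page-table allocation can honour the gfp that the core passes down
from iommu_map(). The helper name example_alloc_pte_page() is hypothetical;
get_zeroed_page() and GFP_KERNEL_ACCOUNT are existing kernel interfaces. The
point is simply that propagating the caller's gfp is enough for memory cgroup
accounting to cover the IOPTE pages.

	#include <linux/gfp.h>
	#include <linux/types.h>

	/*
	 * Hypothetical helper: allocate one page-table page for IOPTEs using
	 * the gfp handed down from iommu_map(). When that gfp is
	 * GFP_KERNEL_ACCOUNT, the page is charged to the current memory
	 * cgroup, mirroring what KVM does for its page table allocations.
	 */
	static u64 *example_alloc_pte_page(gfp_t gfp)
	{
		return (u64 *)get_zeroed_page(gfp);
	}

A driver whose map path is written this way needs no further changes: once
iommufd passes GFP_KERNEL_ACCOUNT, the accounting applies automatically.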
Reviewed-by: Kevin Tian <kevin.tian@...el.com>
Signed-off-by: Jason Gunthorpe <jgg@...dia.com>
---
drivers/iommu/iommufd/pages.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/iommufd/pages.c b/drivers/iommu/iommufd/pages.c
index 22cc3bb0c6c55a..f8d92c9bb65b60 100644
--- a/drivers/iommu/iommufd/pages.c
+++ b/drivers/iommu/iommufd/pages.c
@@ -457,7 +457,7 @@ static int batch_iommu_map_small(struct iommu_domain *domain,
while (size) {
rc = iommu_map(domain, iova, paddr, PAGE_SIZE, prot,
- GFP_KERNEL);
+ GFP_KERNEL_ACCOUNT);
if (rc)
goto err_unmap;
iova += PAGE_SIZE;
@@ -502,7 +502,7 @@ static int batch_to_domain(struct pfn_batch *batch, struct iommu_domain *domain,
rc = iommu_map(domain, iova,
PFN_PHYS(batch->pfns[cur]) + page_offset,
next_iova - iova, area->iommu_prot,
- GFP_KERNEL);
+ GFP_KERNEL_ACCOUNT);
if (rc)
goto err_unmap;
iova = next_iova;
--
2.39.0