Message-ID: <20250110174132.GH396083@nvidia.com>
Date: Fri, 10 Jan 2025 13:41:32 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: kevin.tian@...el.com, corbet@....net, will@...nel.org, joro@...tes.org,
	suravee.suthikulpanit@....com, robin.murphy@....com,
	dwmw2@...radead.org, baolu.lu@...ux.intel.com, shuah@...nel.org,
	linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
	linux-arm-kernel@...ts.infradead.org,
	linux-kselftest@...r.kernel.org, linux-doc@...r.kernel.org,
	eric.auger@...hat.com, jean-philippe@...aro.org, mdf@...nel.org,
	mshavit@...gle.com, shameerali.kolothum.thodi@...wei.com,
	smostafa@...gle.com, ddutile@...hat.com, yi.l.liu@...el.com,
	patches@...ts.linux.dev
Subject: Re: [PATCH v5 08/14] iommufd/viommu: Add iommufd_viommu_report_event
 helper

On Tue, Jan 07, 2025 at 09:10:11AM -0800, Nicolin Chen wrote:
> +/*
> + * Typically called in driver's threaded IRQ handler.
> + * The @type and @event_data must be defined in include/uapi/linux/iommufd.h
> + */
> +int iommufd_viommu_report_event(struct iommufd_viommu *viommu,
> +				enum iommu_veventq_type type, void *event_data,
> +				size_t data_len)
> +{
> +	struct iommufd_veventq *veventq;
> +	struct iommufd_vevent *vevent;
> +	int rc = 0;
> +
> +	if (!viommu)
> +		return -ENODEV;
> +	if (WARN_ON_ONCE(!viommu->ops || !viommu->ops->supports_veventq ||
> +			 !viommu->ops->supports_veventq(type)))
> +		return -EOPNOTSUPP;
> +	if (WARN_ON_ONCE(!data_len || !event_data))
> +		return -EINVAL;
> +
> +	down_read(&viommu->veventqs_rwsem);
> +
> +	veventq = iommufd_viommu_find_veventq(viommu, type);
> +	if (!veventq) {
> +		rc = -EOPNOTSUPP;
> +		goto out_unlock_veventqs;
> +	}
> +
> +	vevent = kmalloc(struct_size(vevent, event_data, data_len), GFP_KERNEL);
> +	if (!vevent) {
> +		rc = -ENOMEM;
> +		goto out_unlock_veventqs;
> +	}
> +	memcpy(vevent->event_data, event_data, data_len);

The page fault path is self-limited because endpoint devices are only
able to issue a certain number of PRIs before they have to stop.

But the async events generated by something like the SMMU are not
self-limiting and we can get a huge barrage of them. I think you need
to add some kind of limiting here, otherwise we will OOM the kernel
and crash, e.g. if the VM spams protection errors.
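
Something like this, as a completely untested sketch - the lock and
the num_events/depth/num_lost fields are all made up here, none of
them exist in your patch:

	/* Before the kmalloc, bound the queue. 'depth' would come from
	 * userspace at veventq allocation time; 'num_events' would be
	 * decremented when userspace consumes an event.
	 */
	spin_lock(&veventq->lock);
	if (veventq->num_events >= veventq->depth) {
		veventq->num_lost++;	/* report this back somehow */
		spin_unlock(&veventq->lock);
		rc = -EOVERFLOW;
		goto out_unlock_veventqs;
	}
	veventq->num_events++;
	spin_unlock(&veventq->lock);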

The virtual event queue should behave the same as when the physical
event queue overflows, and that logic should be in the smmu driver -
this should return some Exxx to indicate the queue is full.
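
ie on the driver side, hand-waving (the vsmmu/evt names and the
overflow counter are placeholders, and I'm assuming the type enum from
your series):

	rc = iommufd_viommu_report_event(viommu,
					 IOMMU_VEVENTQ_TYPE_ARM_SMMUV3,
					 evt, sizeof(*evt));
	if (rc == -EOVERFLOW)
		/*
		 * Account it the same way as a physical evtq overflow,
		 * the guest just learns that events were lost.
		 */
		vsmmu->evtq_overflows++;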

I suppose we will need a way to indicate lost events to userspace on
top of this?

Presumably userspace should specify the max queue size.
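
ie something along these lines in the allocation ioctl - I'm guessing
at the surrounding field layout, veventq_depth is the new bit:

	struct iommu_veventq_alloc {
		__u32 size;
		__u32 flags;
		__u32 viommu_id;
		__u32 type;
		__u32 veventq_depth;	/* max events queued before dropping */
		__u32 out_veventq_id;
		__u32 out_veventq_fd;
		__u32 __reserved;
	};

Plus some way for userspace to read back the number of lost events,
maybe a counter in the event header or a separate query.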

Jason
