Message-ID: <20250110174842.GI396083@nvidia.com>
Date: Fri, 10 Jan 2025 13:48:42 -0400
From: Jason Gunthorpe <jgg@...dia.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: kevin.tian@...el.com, corbet@....net, will@...nel.org, joro@...tes.org,
suravee.suthikulpanit@....com, robin.murphy@....com,
dwmw2@...radead.org, baolu.lu@...ux.intel.com, shuah@...nel.org,
linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
linux-arm-kernel@...ts.infradead.org,
linux-kselftest@...r.kernel.org, linux-doc@...r.kernel.org,
eric.auger@...hat.com, jean-philippe@...aro.org, mdf@...nel.org,
mshavit@...gle.com, shameerali.kolothum.thodi@...wei.com,
smostafa@...gle.com, ddutile@...hat.com, yi.l.liu@...el.com,
patches@...ts.linux.dev
Subject: Re: [PATCH v5 06/14] iommufd: Add IOMMUFD_OBJ_VEVENTQ and
IOMMUFD_CMD_VEVENTQ_ALLOC
On Tue, Jan 07, 2025 at 09:10:09AM -0800, Nicolin Chen wrote:
> +static ssize_t iommufd_veventq_fops_read(struct iommufd_eventq *eventq,
> +					 char __user *buf, size_t count,
> +					 loff_t *ppos)
> +{
> +	size_t done = 0;
> +	int rc = 0;
> +
> +	if (*ppos)
> +		return -ESPIPE;
> +
> +	mutex_lock(&eventq->mutex);
> +	while (!list_empty(&eventq->deliver) && count > done) {
> +		struct iommufd_vevent *cur = list_first_entry(
> +			&eventq->deliver, struct iommufd_vevent, node);
> +
> +		if (cur->data_len > count - done)
> +			break;
> +
> +		if (copy_to_user(buf + done, cur->event_data, cur->data_len)) {
> +			rc = -EFAULT;
> +			break;
> +		}
Now that I look at this more closely, the fault path this is copied
from is not great.
This copy_to_user() can block while waiting on a page fault, possibly
for a long time. While blocked, the mutex is held and we can't add more
entries to the list.

That will cause the shared IRQ handler in the iommu driver to back up,
which would cause a global DoS.
This probably wants to be organized to look more like
while (itm = eventq_get_next_item(eventq)) {
	if (..) {
		eventq_restore_failed_item(eventq);
		return -1;
	}
}
Where the next_item would just be a simple spinlock across the linked
list manipulation.
Jason