Date: Wed, 20 Mar 2024 16:18:05 +0000
From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@...wei.com>
To: Lu Baolu <baolu.lu@...ux.intel.com>, Jason Gunthorpe <jgg@...pe.ca>,
	"Kevin Tian" <kevin.tian@...el.com>, Joerg Roedel <joro@...tes.org>, Will
 Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>, Jean-Philippe
 Brucker <jean-philippe@...aro.org>, Nicolin Chen <nicolinc@...dia.com>, Yi
 Liu <yi.l.liu@...el.com>, Jacob Pan <jacob.jun.pan@...ux.intel.com>, "Joel
 Granados" <j.granados@...sung.com>
CC: "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
	"virtualization@...ts.linux-foundation.org"
	<virtualization@...ts.linux-foundation.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v3 4/8] iommufd: Add iommufd fault object



> -----Original Message-----
> From: Lu Baolu <baolu.lu@...ux.intel.com>
> Sent: Monday, January 22, 2024 7:39 AM
> To: Jason Gunthorpe <jgg@...pe.ca>; Kevin Tian <kevin.tian@...el.com>; Joerg
> Roedel <joro@...tes.org>; Will Deacon <will@...nel.org>; Robin Murphy
> <robin.murphy@....com>; Jean-Philippe Brucker <jean-philippe@...aro.org>;
> Nicolin Chen <nicolinc@...dia.com>; Yi Liu <yi.l.liu@...el.com>; Jacob Pan
> <jacob.jun.pan@...ux.intel.com>; Joel Granados <j.granados@...sung.com>
> Cc: iommu@...ts.linux.dev; virtualization@...ts.linux-foundation.org; linux-
> kernel@...r.kernel.org; Lu Baolu <baolu.lu@...ux.intel.com>
> Subject: [PATCH v3 4/8] iommufd: Add iommufd fault object
> 
> An iommufd fault object provides an interface for delivering I/O page
> faults to user space. These objects are created and destroyed by user
> space, and they can be associated with or dissociated from hardware page
> table objects during page table allocation or destruction.
> 
> User space interacts with the fault object through a file interface. This
> interface offers a straightforward and efficient way for user space to
> handle page faults. It allows user space to read fault messages
> sequentially and respond to them by writing to the same file. The file
> interface supports reading messages in poll mode, so it's recommended that
> user space applications use io_uring to enhance read and write efficiency.
> 
> A fault object can be associated with any iopf-capable iommufd_hw_pgtable
> during the pgtable's allocation. All I/O page faults triggered by devices
> when accessing the I/O addresses of an iommufd_hw_pgtable are routed
> through the fault object to user space. Similarly, user space's responses
> to these page faults are routed back to the iommu device driver through
> the same fault object.
> 
> Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>

[...]

> +static __poll_t iommufd_fault_fops_poll(struct file *filep,
> +					struct poll_table_struct *wait)
> +{
> +	struct iommufd_fault *fault = filep->private_data;
> +	__poll_t pollflags = 0;
> +
> +	poll_wait(filep, &fault->wait_queue, wait);
> +	mutex_lock(&fault->mutex);
> +	if (!list_empty(&fault->deliver))
> +		pollflags = EPOLLIN | EPOLLRDNORM;
> +	mutex_unlock(&fault->mutex);
> +
> +	return pollflags;
> +}
> +
> +static const struct file_operations iommufd_fault_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= nonseekable_open,
> +	.read		= iommufd_fault_fops_read,
> +	.write		= iommufd_fault_fops_write,
> +	.poll		= iommufd_fault_fops_poll,
> +	.llseek		= no_llseek,
> +};

Hi

I am trying to enable Qemu vSVA support on ARM with this series.
I am using the io_uring APIs with the fault fd to handle the page
faults in Qemu.

Please find the implementation here[1]. This is still a work in progress 
and is based on Nicolin's latest nested Qemu branch.

And I am running into a problem once the poll interface is added for
the fault fd in the kernel.

What I have noticed is that:
- The read interface works fine and I can receive the struct
  iommu_hwpt_pgfault data.
- But once the Guest handles the page faults and returns the page response,
  the write to the fault fd never reaches the kernel. The sequence is like
  below,
 
  /* Queue a write of the page response to the fault fd. */
  sqe = io_uring_get_sqe(ring);
  io_uring_prep_write(sqe, hwpt->fault_fd, resp, sizeof(*resp), 0);
  io_uring_sqe_set_data(sqe, resp);
  io_uring_submit(ring);
  /* Wait for the write completion. */
  ret = io_uring_wait_cqe(ring, &cqe);
  ....
Please find the function here[2]
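
For comparison, the read side, which works fine for me, is queued the same
way. A rough sketch (not the exact code from [1]):

  /* Read side (works): queue a read for the next fault message. */
  sqe = io_uring_get_sqe(ring);
  io_uring_prep_read(sqe, hwpt->fault_fd, buf,
                     sizeof(struct iommu_hwpt_pgfault), 0);
  io_uring_sqe_set_data(sqe, buf);
  io_uring_submit(ring);
  ret = io_uring_wait_cqe(ring, &cqe);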

The cqe wait for the write above never returns, and the hardware times out
without receiving a page response. My understanding of io_uring's default
behaviour is that it first tries to issue an sqe as non-blocking. But it
looks like the write sequence above ends up in the kernel's poll_wait() as
well. Not sure how we can avoid that for the write.
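
Could it be that, when the initial non-blocking attempt cannot complete
immediately, io_uring arms an internal poll on the fd and waits for
EPOLLOUT before retrying the write? iommufd_fault_fops_poll() above only
ever reports EPOLLIN, so that wakeup would never arrive. If the write path
never blocks, perhaps the poll handler could report EPOLLOUT
unconditionally. Just a sketch of what I mean:

static __poll_t iommufd_fault_fops_poll(struct file *filep,
					struct poll_table_struct *wait)
{
	struct iommufd_fault *fault = filep->private_data;
	__poll_t pollflags = EPOLLOUT;	/* writes never block */

	poll_wait(filep, &fault->wait_queue, wait);
	mutex_lock(&fault->mutex);
	if (!list_empty(&fault->deliver))
		pollflags |= EPOLLIN | EPOLLRDNORM;
	mutex_unlock(&fault->mutex);

	return pollflags;
}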

Everything works fine if I comment out the poll support for the fault_fd
in the kernel. But then, of course, Qemu ends up repeatedly reading the
ring queue for any pending page fault.
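
With working poll support, that repeated reading would collapse into the
usual blocking pattern. A minimal sketch, with error handling omitted and
struct names taken from the uAPI proposed in this series:

  #include <poll.h>
  #include <unistd.h>
  #include <linux/iommufd.h>

  /* Sketch: block for one fault, read it, and (once handled) respond. */
  static void handle_one_fault(int fault_fd)
  {
      struct iommu_hwpt_pgfault fault;
      struct iommu_hwpt_page_response resp = { 0 };
      struct pollfd pfd = { .fd = fault_fd, .events = POLLIN };

      /* Sleep until the kernel reports a pending fault (EPOLLIN). */
      poll(&pfd, 1, -1);

      /* Read one fault message and forward it to the Guest. */
      read(fault_fd, &fault, sizeof(fault));

      /* Fill in the response fields defined by this series, then
       * write the page response back through the same fd. */
      write(fault_fd, &resp, sizeof(resp));
  }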

It might be something I am missing in my understanding of the io_uring
APIs. I just thought of checking whether you have any Qemu implementation
using the io_uring APIs to test this.

I would also appreciate any pointers on resolving this.

Thanks,
Shameer
[1] https://github.com/hisilicon/qemu/tree/iommufd_vsmmu-02292024-vsva-wip
[2] https://github.com/hisilicon/qemu/blob/2b984fb5c692a03e6f5463d005670d2e2a2c7304/hw/arm/smmuv3.c#L1310

