Message-ID: <20231005063231.vqwhfnlc64phwmto@vireshk-i7>
Date:   Thu, 5 Oct 2023 12:02:31 +0530
From:   Viresh Kumar <viresh.kumar@...aro.org>
To:     Juergen Gross <jgross@...e.com>
Cc:     Stefano Stabellini <sstabellini@...nel.org>,
        Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Alex Bennée <alex.bennee@...aro.org>,
        stratos-dev@...lists.linaro.org,
        Erik Schilling <erik.schilling@...aro.org>,
        Manos Pitsidianakis <manos.pitsidianakis@...aro.org>,
        Mathieu Poirier <mathieu.poirier@...aro.org>,
        linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org
Subject: Re: [PATCH 2/2] xen: privcmd: Add support for ioeventfd

On 29-09-23, 07:46, Juergen Gross wrote:
> On 29.08.23 14:29, Viresh Kumar wrote:
> > +static irqreturn_t ioeventfd_interrupt(int irq, void *dev_id)
> > +{
> > +	struct ioreq_port *port = dev_id;
> > +	struct privcmd_kernel_ioreq *kioreq = port->kioreq;
> > +	struct ioreq *ioreq = &kioreq->ioreq[port->vcpu];
> > +	struct privcmd_kernel_ioeventfd *kioeventfd;
> > +	unsigned int state = STATE_IOREQ_READY;
> > +
> > +	if (ioreq->state != STATE_IOREQ_READY ||
> > +	    ioreq->type != IOREQ_TYPE_COPY || ioreq->dir != IOREQ_WRITE)
> > +		return IRQ_NONE;
> > +
> > +	smp_mb();
> > +	ioreq->state = STATE_IOREQ_INPROCESS;
> > +
> > +	mutex_lock(&kioreq->lock);
> > +	list_for_each_entry(kioeventfd, &kioreq->ioeventfds, list) {
> > +		if (ioreq->addr == kioeventfd->addr + VIRTIO_MMIO_QUEUE_NOTIFY &&
> > +		    ioreq->size == kioeventfd->addr_len &&
> > +		    (ioreq->data & QUEUE_NOTIFY_VQ_MASK) == kioeventfd->vq) {
> > +			eventfd_signal(kioeventfd->eventfd, 1);
> > +			state = STATE_IORESP_READY;
> > +			break;
> > +		}
> > +	}
> > +	mutex_unlock(&kioreq->lock);
> > +
> > +	smp_mb();
> 
> Is this really needed after calling mutex_unlock()? I think you are trying to
> avoid any accesses to go past ioreq->state modification. If so, add a comment
> (either why you need the barrier, or that you don't need it due to the unlock).

Right, I want all the writes to finish before updating the state.
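
I will add a comment there in the next version. Something along these lines
(just a sketch, assuming the state update follows right after this barrier and
using the names from this patch):

	/*
	 * Make sure all the writes done while handling this ioreq (including
	 * the eventfd signal) are visible before the state update below; the
	 * mutex_unlock() above only orders the accesses preceding it.
	 */
	smp_mb();
	ioreq->state = state;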

> In general, shouldn't the state be checked and modified in the locked area?

The handler runs separately for each vcpu and shouldn't run in parallel for the
same vcpu, so only one thread should ever be accessing a given ioreq port
structure.

The lock is there to protect the ioeventfds list against parallel access (as
mentioned in the struct declaration), since handlers for different vcpus may
end up accessing it simultaneously.
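
I can expand the comment on the lock in the struct declaration to make that
explicit, roughly like this (a sketch; only the two relevant fields shown,
other fields omitted):

	struct privcmd_kernel_ioreq {
		...
		/*
		 * Protects the ioeventfds list against concurrent access:
		 * handlers for different vcpus, as well as the paths that add
		 * and remove entries, may walk or modify it. The per-vcpu
		 * ioreq data itself is only touched by one handler at a time
		 * and needs no locking.
		 */
		struct mutex lock;
		struct list_head ioeventfds;
		...
	};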

-- 
viresh
