Message-ID: <877cnwqg27.fsf@linaro.org>
Date: Mon, 09 Oct 2023 10:40:48 +0100
From: Alex Bennée <alex.bennee@...aro.org>
To: Viresh Kumar <viresh.kumar@...aro.org>
Cc: Juergen Gross <jgross@...e.com>,
Stefano Stabellini <sstabellini@...nel.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
stratos-dev@...lists.linaro.org,
Erik Schilling <erik.schilling@...aro.org>,
Manos Pitsidianakis <manos.pitsidianakis@...aro.org>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org
Subject: Re: [PATCH 2/2] xen: privcmd: Add support for ioeventfd

Viresh Kumar <viresh.kumar@...aro.org> writes:
> On 29-09-23, 07:46, Juergen Gross wrote:
>> On 29.08.23 14:29, Viresh Kumar wrote:
>> > +static irqreturn_t ioeventfd_interrupt(int irq, void *dev_id)
>> > +{
>> > + struct ioreq_port *port = dev_id;
>> > + struct privcmd_kernel_ioreq *kioreq = port->kioreq;
>> > + struct ioreq *ioreq = &kioreq->ioreq[port->vcpu];
>> > + struct privcmd_kernel_ioeventfd *kioeventfd;
>> > + unsigned int state = STATE_IOREQ_READY;
>> > +
>> > + if (ioreq->state != STATE_IOREQ_READY ||
>> > + ioreq->type != IOREQ_TYPE_COPY || ioreq->dir != IOREQ_WRITE)
>> > + return IRQ_NONE;
>> > +
>> > + smp_mb();
>> > + ioreq->state = STATE_IOREQ_INPROCESS;
>> > +
>> > + mutex_lock(&kioreq->lock);
>> > + list_for_each_entry(kioeventfd, &kioreq->ioeventfds, list) {
>> > + if (ioreq->addr == kioeventfd->addr + VIRTIO_MMIO_QUEUE_NOTIFY &&
>> > + ioreq->size == kioeventfd->addr_len &&
>> > + (ioreq->data & QUEUE_NOTIFY_VQ_MASK) == kioeventfd->vq) {
>> > + eventfd_signal(kioeventfd->eventfd, 1);
>> > + state = STATE_IORESP_READY;
>> > + break;
>> > + }
>> > + }
>> > + mutex_unlock(&kioreq->lock);
>> > +
>> > + smp_mb();
>>
>> Is this really needed after calling mutex_unlock()? I think you are trying to
>> stop any accesses from being reordered past the ioreq->state modification. If
>> so, add a comment (either explaining why you need the barrier, or noting that
>> you don't need it because of the unlock).
>
> Right, I want all the writes to finish before updating the state.
I thought synchronisation points like this generally acted as full
barriers. A bunch of grepping through the mutex code eventually bottoms
out at:
static __always_inline bool __mutex_unlock_fast(struct mutex *lock)
{
	unsigned long curr = (unsigned long)current;

	return atomic_long_try_cmpxchg_release(&lock->owner, &curr, 0UL);
}
which is a release operation, so all the writes you did inside the
critical section should already be visible by that point.
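
If the only thing the final barrier is ordering is the state update
itself (the actual write is trimmed from the quote above), a
store-release on the state might express the intent more directly.
A sketch, untested:

	mutex_unlock(&kioreq->lock);

	/*
	 * Publish the result: everything written while handling the
	 * request must be visible before the state change is.
	 */
	smp_store_release(&ioreq->state, state);

That would also make the "why" self-documenting, which should address
Juergen's request for a comment.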
>
>> In general, shouldn't the state be checked and modified in the locked area?
>
> The handler runs separately for each vcpu and shouldn't run in parallel for the
> same vcpu, so only one thread should ever be accessing a given ioreq port
> structure.
>
> The lock is there to protect the ioeventfds list (as mentioned in the struct
> declaration) against parallel access, as threads for different vcpus may end up
> accessing it simultaneously.
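
Right, so the contention is between the per-vcpu handlers walking the
list and whatever context registers new eventfds. Something like this
on the registration side is what I'd expect (a sketch - the function
name is guessed, only the lock and the list come from the patch):

	static int privcmd_ioeventfd_assign(struct privcmd_kernel_ioreq *kioreq,
					    struct privcmd_kernel_ioeventfd *kioeventfd)
	{
		/* Writers and the irq-handler readers serialise on the same mutex. */
		mutex_lock(&kioreq->lock);
		list_add_tail(&kioeventfd->list, &kioreq->ioeventfds);
		mutex_unlock(&kioreq->lock);

		return 0;
	}

If that matches the actual patch then the locking story makes sense to
me.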
--
Alex Bennée
Virtualisation Tech Lead @ Linaro