Message-ID: <86sf1u3vvd.wl-maz@kernel.org>
Date: Thu, 15 Feb 2024 09:44:06 +0000
From: Marc Zyngier <maz@...nel.org>
To: Oliver Upton <oliver.upton@...ux.dev>
Cc: kvmarm@...ts.linux.dev,
kvm@...r.kernel.org,
James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Zenghui Yu <yuzenghui@...wei.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 07/23] KVM: arm64: vgic: Use atomics to count LPIs
On Wed, 14 Feb 2024 23:01:04 +0000,
Oliver Upton <oliver.upton@...ux.dev> wrote:
>
> On Wed, Feb 14, 2024 at 08:01:19PM +0000, Marc Zyngier wrote:
> > > > Of course, we only have 3 marks, so that's a bit restrictive from a
> > > > concurrency perspective, but since most callers hold a lock, it should
> > > > be OK.
> > >
> > > They all hold *a* lock, but maybe not the same one! :)
> >
> > Indeed. But as long as there aren't more than 3 locks (and the
> > xarray is OK with being concurrently updated with marks), we're good!
>
> Oh, you mean to give each existing caller their own mark?
Well, each caller "class", where "class" means "holding lock 'foo'".
Same lock, same mark. With a maximum of 3 (and I think we can get away
with 2).
> > > Maybe we should serialize the use of markers on the LPI list on the
> > > config_lock. A slight misuse, but we need a mutex since we're poking at
> > > guest memory. Then we can go through the whole N-dimensional locking
> > > puzzle and convince ourselves it is still correct.
> >
> > Maybe. This thing is already seeing so many abuses that one more may
> > not matter much. Need to see how it fits in the whole hierarchy of
> > GIC-related locks...
>
> It doesn't work. We have it that the config_lock needs to be taken
> outside the its_lock.
>
> Too many damn locks!
Well, the joys of emulating highly complex HW with a braindead
programming interface. I'd explore the above suggestion to avoid
introducing a new lock, if at all possible.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.