Message-ID: <20170628092205.GB30388@8bytes.org>
Date: Wed, 28 Jun 2017 11:22:05 +0200
From: Joerg Roedel <joro@...tes.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: tglx@...utronix.de, linux-kernel@...r.kernel.org,
iommu@...ts.linux-foundation.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/3] iommu/iova: don't disable preempt around
this_cpu_ptr()
On Tue, Jun 27, 2017 at 06:16:47PM +0200, Sebastian Andrzej Siewior wrote:
> Commit 583248e6620a ("iommu/iova: Disable preemption around use of
> this_cpu_ptr()") disables preemption while accessing a per-CPU variable.
> This does keep lockdep quiet. However, I don't see why it would be a
> problem if we get migrated to another CPU after that access.
> __iova_rcache_insert() and __iova_rcache_get() immediately lock the
> variable after obtaining it - before accessing its members.
> _If_ we get migrated away after retrieving the address of cpu_rcache,
> but before taking the lock, then the *other* task on the same CPU will
> retrieve the same address of cpu_rcache and will spin on the lock.
>
> alloc_iova_fast() disables preemption while invoking
> free_cpu_cached_iovas() on each CPU. The function itself uses
> per_cpu_ptr(), which does not trigger a warning (unlike this_cpu_ptr()).
> It _could_ make sense to use get_online_cpus() instead, but we already
> have a hotplug notifier for CPU down (and none for up), so we are good.
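
For reference, a minimal sketch of the access pattern described above -
the struct layout, names and sizes below are simplified stand-ins, not
the actual drivers/iommu/iova.c code:

/*
 * Simplified per-CPU cache: the pointer is fetched without disabling
 * preemption, and the spinlock alone protects the cached entries.
 */
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct cpu_rcache_sketch {
        spinlock_t lock;
        unsigned long pfns[8];
        unsigned int nr;
};

static DEFINE_PER_CPU(struct cpu_rcache_sketch, rcache_sketch) = {
        .lock = __SPIN_LOCK_UNLOCKED(rcache_sketch.lock),
};

static bool sketch_rcache_insert(unsigned long pfn)
{
        struct cpu_rcache_sketch *cpu_rcache;
        unsigned long flags;
        bool ok = false;

        /*
         * No preempt_disable() here: even if we migrate to another CPU
         * right after this line, the pointer stays valid and the lock
         * below serializes every user of that CPU's cache.
         */
        cpu_rcache = raw_cpu_ptr(&rcache_sketch);

        spin_lock_irqsave(&cpu_rcache->lock, flags);
        if (cpu_rcache->nr < ARRAY_SIZE(cpu_rcache->pfns)) {
                cpu_rcache->pfns[cpu_rcache->nr++] = pfn;
                ok = true;
        }
        spin_unlock_irqrestore(&cpu_rcache->lock, flags);

        return ok;
}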
Does that really matter? The spin_lock_irqsave() disables irqs and thus
avoids preemption too. We also can't get rid of the irqsave variant here
because these locks are taken in the DMA API path, which is used from
interrupt context.
Joerg