Message-ID: <ZGNnWmw4eDsh9hBN@arm.com>
Date: Tue, 16 May 2023 12:22:02 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Petr Tesařík <petr@...arici.cz>
Cc: Petr Tesarik <petrtesarik@...weicloud.com>,
Jonathan Corbet <corbet@....net>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>,
Robin Murphy <robin.murphy@....com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Borislav Petkov <bp@...e.de>,
Randy Dunlap <rdunlap@...radead.org>,
Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
Kim Phillips <kim.phillips@....com>,
"Steven Rostedt (Google)" <rostedt@...dmis.org>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Hans de Goede <hdegoede@...hat.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Kees Cook <keescook@...omium.org>,
Thomas Gleixner <tglx@...utronix.de>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list:DRM DRIVERS" <dri-devel@...ts.freedesktop.org>,
"open list:DMA MAPPING HELPERS" <iommu@...ts.linux.dev>,
Roberto Sassu <roberto.sassu@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH v2 RESEND 7/7] swiotlb: per-device flag if there are
dynamically allocated buffers
On Tue, May 16, 2023 at 09:55:12AM +0200, Petr Tesařík wrote:
> On Mon, 15 May 2023 17:28:38 +0100
> Catalin Marinas <catalin.marinas@....com> wrote:
> > There is another scenario to take into account on the list_del() side.
> > Let's assume that there are other elements on the list, so
> > list_empty() == false:
> >
> > P0:
> > 	list_del(paddr);
> > 	/* the memory gets freed, added to some slab or page free list */
> > 	WRITE_ONCE(slab_free_list, __va(paddr));
> >
> > P1:
> > 	paddr = __pa(READ_ONCE(slab_free_list)); /* re-allocating paddr freed on P0 */
> > 	if (!list_empty()) {	/* assuming other elements on the list */
> > 		/* searching the list */
> > 		list_for_each() {
> > 			if (pos->paddr == __pa(vaddr))
> > 				/* match */
> > 		}
> > 	}
> >
> > On P0, you want the list update to be visible before the memory is
> > freed (and potentially reallocated on P1). An smp_wmb() on P0 would
> > do. For P1, we don't care about list_empty() as there can be other
> > elements already. But we do want the reads of the list elements
> > during the search to be ordered after the read of slab_free_list.
> > The smp_rmb() you'd add for the case above would suffice.
>
> Yes, but to protect against concurrent insertions/deletions, a spinlock
> is held while searching the list. The spin lock provides the necessary
> memory barriers implicitly.
Well, mostly. The spinlock acquire/release semantics ensure that
accesses within the locked region are not observed outside the
lock/unlock pair, but they guarantee nothing about how accesses outside
that region are ordered relative to the accesses within it. For example:
P0:
	spin_lock_irqsave(&swiotlb_dyn_lock, flags);
	list_del(paddr);
	spin_unlock_irqrestore(&swiotlb_dyn_lock, flags);

	/* the blah write below can be observed before the list_del() above */
	WRITE_ONCE(blah, paddr);

	/*
	 * Trickier: on some architectures the slab_free_list update below
	 * can also be observed before the list_del() above.
	 */
	spin_lock_irqsave(&slab_lock, flags);
	WRITE_ONCE(slab_free_list, __va(paddr));
	spin_unlock_irqrestore(&slab_lock, flags);
On most architectures, the write of the pointer into the slab structure
(under slab_lock above) would be ordered after the list_del() in the
swiotlb code. The exception is powerpc, where a spin_unlock() is not
necessarily ordered against a subsequent spin_lock(). That architecture
selects ARCH_WEAK_RELEASE_ACQUIRE, which in turn makes
smp_mb__after_unlock_lock() an smp_mb() (rather than a no-op as on all
the other architectures).
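If that ordering did matter on powerpc, the unlock+lock pair could be
upgraded to a full barrier. A rough sketch only (same hypothetical locks
as in my example above, not something I'm saying your patch needs):

P0:
	spin_lock_irqsave(&swiotlb_dyn_lock, flags);
	list_del(paddr);
	spin_unlock_irqrestore(&swiotlb_dyn_lock, flags);

	spin_lock_irqsave(&slab_lock, flags);
	/*
	 * Upgrade the prior unlock + this lock to a full barrier:
	 * smp_mb() on powerpc, a no-op everywhere else.
	 */
	smp_mb__after_unlock_lock();
	WRITE_ONCE(slab_free_list, __va(paddr));
	spin_unlock_irqrestore(&slab_lock, flags);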
On arm64 we have smp_mb__after_spinlock() which ensures that memory
accesses prior to spin_lock() are not observed after accesses within the
locked region. I don't think this matters for your case but I thought
I'd mention it.
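For illustration (again only a sketch, and swiotlb_dyn_list is a
made-up name), the barrier would sit right after taking the lock on the
reader side:

P1:
	paddr = __pa(READ_ONCE(slab_free_list));

	spin_lock_irqsave(&swiotlb_dyn_lock, flags);
	/*
	 * Order the slab_free_list read above against the list walk
	 * below; spin_lock() alone only has acquire semantics.
	 */
	smp_mb__after_spinlock();
	list_for_each(pos, &swiotlb_dyn_list) {
		/* search for a matching paddr */
	}
	spin_unlock_irqrestore(&swiotlb_dyn_lock, flags);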
--
Catalin