Message-ID: <65d655b8a098d_5c76294ac@dwillia2-mobl3.amr.corp.intel.com.notmuch>
Date: Wed, 21 Feb 2024 11:57:44 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: Ira Weiny <ira.weiny@...el.com>, Dan Williams <dan.j.williams@...el.com>,
"Rafael J. Wysocki" <rafael@...nel.org>, Jonathan Cameron
<jonathan.cameron@...wei.com>, Smita Koralahalli
<Smita.KoralahalliChannabasappa@....com>
CC: <linux-acpi@...r.kernel.org>, <linux-cxl@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, Dan Carpenter <dan.carpenter@...aro.org>,
"Ira Weiny" <ira.weiny@...el.com>
Subject: RE: [PATCH v2] acpi/ghes: Prevent sleeping with spinlock held

Ira Weiny wrote:
> Dan Williams wrote:
> > Ira Weiny wrote:
>
> [snip]
>
> > >
> > > - guard(rwsem_read)(&cxl_cper_rw_sem);
> > > - if (cper_callback)
> > > - cper_callback(event_type, rec);
> >
> > Given a work function can be set atomically, there is no need to
> > create/manage a registration lock. Set a 'struct work' instance to a
> > CXL-provided routine on cxl_pci module load and restore it to a nop
> > function + cancel_work_sync() on cxl_pci module exit.
>
> Ok I'll look into this.
>
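Something along these lines (untested sketch; the function names are
invented, and the set/clear helpers would be exported for cxl_pci to
call):

/* drivers/acpi/apei/ghes.c -- sketch only */
#include <linux/export.h>
#include <linux/workqueue.h>

static void cxl_cper_nop_fn(struct work_struct *work)
{
        /* nothing to do until cxl_pci installs its handler */
}

static DECLARE_WORK(cxl_cper_work, cxl_cper_nop_fn);

/* cxl_pci module load: a single pointer-sized store, no registration lock */
int cxl_cper_set_work_fn(work_func_t fn)
{
        WRITE_ONCE(cxl_cper_work.func, fn);
        return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_cper_set_work_fn, CXL);

/* cxl_pci module exit: restore the nop and flush anything in flight */
void cxl_cper_clear_work_fn(void)
{
        WRITE_ONCE(cxl_cper_work.func, cxl_cper_nop_fn);
        cancel_work_sync(&cxl_cper_work);
}
EXPORT_SYMBOL_NS_GPL(cxl_cper_clear_work_fn, CXL);
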
> >
> > > + wi = kmalloc(sizeof(*wi), GFP_ATOMIC);
> >
> > The system is already under distress trying to report an error; it
> > should not dip into emergency memory reserves to report errors. Use a
> > kfifo() similar to how memory_failure_queue() avoids memory allocation
> > in the error reporting path.
>
> I have a question on ghes_proc() [ghes_do_proc()]. Can they be called by
> 2 threads at the same time? It seems like there could be multiple
> platform devices which end up queueing into the single kfifo.

Yes, that is already the case for memory_failure_queue() and
aer_recover_queue().

> there needs to be a kfifo per device or synchronization with multiple
> writers.

Yes, follow the other _queue() examples. kfifo_in_spinlocked() looks
useful for this purpose.

I expect no lock is needed on the read side since the only reader is the
single workqueue context.
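
Roughly, following the aer_recover_queue() pattern (untested sketch; the
kfifo and helper names are made up, cxl_cper_handle_event() is a
stand-in for whatever cxl_pci does with the record, and it assumes a
cxl_cper_work instance like the one sketched above):

/* ghes.c side -- producers may be concurrent, so take the fifo lock */
#include <linux/kfifo.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#define CXL_CPER_FIFO_DEPTH 32          /* must be a power of two */

struct cxl_cper_work_data {
        enum cxl_event_type event_type;
        struct cxl_cper_event_rec rec;
};

static DEFINE_KFIFO(cxl_cper_fifo, struct cxl_cper_work_data,
                    CXL_CPER_FIFO_DEPTH);
static DEFINE_SPINLOCK(cxl_cper_prod_lock);

/* called from ghes_do_proc(): multiple producers serialize on the lock */
static void cxl_cper_post_event(enum cxl_event_type event_type,
                                struct cxl_cper_event_rec *rec)
{
        struct cxl_cper_work_data wd = {
                .event_type = event_type,
                .rec = *rec,
        };

        if (kfifo_in_spinlocked(&cxl_cper_fifo, &wd, 1, &cxl_cper_prod_lock))
                schedule_work(&cxl_cper_work);
        else
                pr_err_ratelimited("CXL CPER kfifo overflow\n");
}

/* cxl_pci-provided work function: the only reader, so no lock here */
static void cxl_cper_work_fn(struct work_struct *work)
{
        struct cxl_cper_work_data wd;

        while (kfifo_get(&cxl_cper_fifo, &wd))
                cxl_cper_handle_event(wd.event_type, &wd.rec);
}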