Message-ID: <20251024184344.000036f6@huawei.com>
Date: Fri, 24 Oct 2025 18:43:44 +0100
From: Jonathan Cameron <jonathan.cameron@...wei.com>
To: James Morse <james.morse@....com>
CC: <linux-kernel@...r.kernel.org>, <linux-arm-kernel@...ts.infradead.org>,
<linux-acpi@...r.kernel.org>, D Scott Phillips OS
<scott@...amperecomputing.com>, <carl@...amperecomputing.com>,
<lcherian@...vell.com>, <bobo.shaobowang@...wei.com>,
<tan.shaopeng@...itsu.com>, <baolin.wang@...ux.alibaba.com>, Jamie Iles
<quic_jiles@...cinc.com>, Xin Hao <xhao@...ux.alibaba.com>,
<peternewman@...gle.com>, <dfustini@...libre.com>, <amitsinght@...vell.com>,
David Hildenbrand <david@...hat.com>, Dave Martin <dave.martin@....com>, Koba
Ko <kobak@...dia.com>, Shanker Donthineni <sdonthineni@...dia.com>,
<fenghuay@...dia.com>, <baisheng.gao@...soc.com>, Rob Herring
<robh@...nel.org>, Rohit Mathew <rohit.mathew@....com>, "Rafael Wysocki"
<rafael@...nel.org>, Len Brown <lenb@...nel.org>, Lorenzo Pieralisi
<lpieralisi@...nel.org>, Hanjun Guo <guohanjun@...wei.com>, Sudeep Holla
<sudeep.holla@....com>, Catalin Marinas <catalin.marinas@....com>, "Will
Deacon" <will@...nel.org>, Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Danilo Krummrich <dakr@...nel.org>, Jeremy Linton <jeremy.linton@....com>,
Gavin Shan <gshan@...hat.com>
Subject: Re: [PATCH v3 12/29] arm_mpam: Add helpers for managing the locking
around the mon_sel registers
On Fri, 17 Oct 2025 18:56:28 +0000
James Morse <james.morse@....com> wrote:
> The MSC MON_SEL register needs to be accessed from hardirq for the overflow
> interrupt, and when taking an IPI to access these registers on platforms
> where MSC are not accessible from every CPU. This makes an irqsave
> spinlock the obvious lock to protect these registers. On systems with SCMI
> or PCC mailboxes accesses must be able to sleep, meaning a mutex must be used.
> The SCMI or PCC platforms can't support an overflow interrupt, and
> can't access the registers from hardirq context.
>
> Clearly these two locks can't be needed for one MSC at the same time.
>
> Add helpers for the MON_SEL locking. For now, use an irqsave spinlock and
> only support 'real' MMIO platforms.
>
> In the future this lock will be split in two allowing SCMI/PCC platforms
> to take a mutex. Because there are contexts where the SCMI/PCC platforms
> can't make an access, mpam_mon_sel_lock() needs to be able to fail. Do
> this now, so that all the error handling on these paths is present. This
> allows the relevant paths to fail if they are needed on a platform where
> this isn't possible, instead of having to make explicit checks of the
> interface type.
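(As an aside for anyone reading along without the mpam_internal.h hunk in front
of them: based on the description above I'd expect the helpers to look roughly
like the sketch below. Only the mpam_mon_sel_lock_init()/mpam_mon_sel_lock()/
mpam_mon_sel_unlock() names and the "can fail" behaviour come from the patch;
the struct field names are my guesses.)

/*
 * Sketch only: assumes struct mpam_msc carries a spinlock_t mon_sel_lock
 * and an unsigned long mon_sel_flags -- those names are not the patch's.
 */
static inline void mpam_mon_sel_lock_init(struct mpam_msc *msc)
{
	spin_lock_init(&msc->mon_sel_lock);
}

static inline bool mpam_mon_sel_lock(struct mpam_msc *msc)
{
	/*
	 * Always succeeds for 'real' MMIO MSC. Once SCMI/PCC mailbox MSC
	 * are supported, an access that would need to sleep from a context
	 * that can't is where this would return false.
	 */
	spin_lock_irqsave(&msc->mon_sel_lock, msc->mon_sel_flags);
	return true;
}

static inline void mpam_mon_sel_unlock(struct mpam_msc *msc)
{
	spin_unlock_irqrestore(&msc->mon_sel_lock, msc->mon_sel_flags);
}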
>
> Tested-by: Fenghua Yu <fenghuay@...dia.com>
> Signed-off-by: James Morse <james.morse@....com>
> ---
> Change since v1:
> * Made accesses to outer_lock_held READ_ONCE() to avoid torn values in the
>   failure case.
I guess that went away. I'd prune the old version-log entry, or add a note in a
later version log to say it was dropped.
One stray change noted inline; otherwise this seems fine.
Reviewed-by: Jonathan Cameron <jonathan.cameron@...wei.com>
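To make the "lock can fail" point concrete, a monitor-read path ends up shaped
something like this (entirely hypothetical caller and accessor names, just to
show where the error handling described above lands):

static int mpam_msmon_read(struct mpam_msc *msc, u32 mon_sel, u64 *val)
{
	if (!mpam_mon_sel_lock(msc))
		return -EIO;	/* e.g. mailbox MSC reached from hardirq */

	/* Program MSMON_CFG_MON_SEL and read the counter back. */
	*val = hypothetical_read_counter(msc, mon_sel);

	mpam_mon_sel_unlock(msc);

	return 0;
}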
> ---
> drivers/resctrl/mpam_devices.c | 3 ++-
> drivers/resctrl/mpam_internal.h | 38 +++++++++++++++++++++++++++++++++
> 2 files changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/resctrl/mpam_devices.c b/drivers/resctrl/mpam_devices.c
> index 910bb6cd5e4f..35011d3e8f1e 100644
> --- a/drivers/resctrl/mpam_devices.c
> +++ b/drivers/resctrl/mpam_devices.c
> @@ -738,6 +738,7 @@ static struct mpam_msc *do_mpam_msc_drv_probe(struct platform_device *pdev)
>
> mutex_init(&msc->probe_lock);
> mutex_init(&msc->part_sel_lock);
> + mpam_mon_sel_lock_init(msc);
> msc->id = pdev->id;
> msc->pdev = pdev;
> INIT_LIST_HEAD_RCU(&msc->all_msc_list);
> @@ -822,7 +823,7 @@ static void mpam_enable_once(void)
> "mpam:online");
>
> /* Use printk() to avoid the pr_fmt adding the function name. */
> - printk(KERN_INFO, "MPAM enabled with %u PARTIDs and %u PMGs\n",
> + printk(KERN_INFO "MPAM enabled with %u PARTIDs and %u PMGs\n",
Move this fix to the original patch.
> mpam_partid_max + 1, mpam_pmg_max + 1);
> }