Message-Id: <20231215174343.13872-10-james.morse@arm.com>
Date: Fri, 15 Dec 2023 17:43:28 +0000
From: James Morse <james.morse@....com>
To: x86@...nel.org,
linux-kernel@...r.kernel.org
Cc: Fenghua Yu <fenghua.yu@...el.com>,
Reinette Chatre <reinette.chatre@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>,
H Peter Anvin <hpa@...or.com>,
Babu Moger <Babu.Moger@....com>,
James Morse <james.morse@....com>,
shameerali.kolothum.thodi@...wei.com,
D Scott Phillips OS <scott@...amperecomputing.com>,
carl@...amperecomputing.com,
lcherian@...vell.com,
bobo.shaobowang@...wei.com,
tan.shaopeng@...itsu.com,
baolin.wang@...ux.alibaba.com,
Jamie Iles <quic_jiles@...cinc.com>,
Xin Hao <xhao@...ux.alibaba.com>,
peternewman@...gle.com,
dfustini@...libre.com,
amitsinght@...vell.com,
Babu Moger <babu.moger@....com>
Subject: [PATCH v8 09/24] x86/resctrl: Use __set_bit()/__clear_bit() instead of open coding
The resctrl CLOSID allocator uses a single 32-bit word to track which
CLOSIDs are free. The setting and clearing of bits is open coded.
Convert the existing open-coded bit manipulations of closid_free_map
to use __set_bit() and friends. These don't need to be atomic as the
map is protected by rdtgroup_mutex.
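As a standalone illustration (editor's sketch, not part of the patch): the
pattern being converted to looks like this. The names example_free_map,
example_mutex and the two functions are made up for the sketch; only the
__set_bit()/__clear_bit()/lockdep_assert_held() calls mirror the patch.

#include <linux/bitops.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_mutex);
static unsigned long example_free_map;	/* bitops helpers take unsigned long * */

static void example_mark_free(int id)
{
	lockdep_assert_held(&example_mutex);

	/* Open coded equivalent: example_free_map |= 1 << id; */
	__set_bit(id, &example_free_map);	/* non-atomic: the mutex serialises writers */
}

static void example_mark_busy(int id)
{
	lockdep_assert_held(&example_mutex);

	/* Open coded equivalent: example_free_map &= ~(1 << id); */
	__clear_bit(id, &example_free_map);
}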
Signed-off-by: James Morse <james.morse@....com>
Tested-by: Shaopeng Tan <tan.shaopeng@...itsu.com>
Tested-by: Peter Newman <peternewman@...gle.com>
Tested-by: Babu Moger <babu.moger@....com>
Reviewed-by: Shaopeng Tan <tan.shaopeng@...itsu.com>
Reviewed-by: Reinette Chatre <reinette.chatre@...el.com>
Reviewed-by: Babu Moger <babu.moger@....com>
---
Changes since v6:
* Use the non-atomic __ helpers and add lockdep_assert_held() annotations to
  document how this is safe.
* Fixed a resctrl_closid_is_free()/closid_allocated() rename in the commit
message.
* Use RESCTRL_RESERVED_CLOSID to improve readability.

Changes since v7:
* Removed paragraph explaining why this should be done now due to badword
'subsequent'.
* Changed a comment to refer to RESCTRL_RESERVED_CLOSID.
---
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 12a557c96100..f6b52415ca3d 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -111,7 +111,7 @@ void rdt_staged_configs_clear(void)
* - Our choices on how to configure each resource become progressively more
* limited as the number of resources grows.
*/
-static int closid_free_map;
+static unsigned long closid_free_map;
static int closid_free_map_len;
int closids_supported(void)
@@ -130,8 +130,8 @@ static void closid_init(void)
closid_free_map = BIT_MASK(rdt_min_closid) - 1;
- /* CLOSID 0 is always reserved for the default group */
- closid_free_map &= ~1;
+ /* RESCTRL_RESERVED_CLOSID is always reserved for the default group */
+ __clear_bit(RESCTRL_RESERVED_CLOSID, &closid_free_map);
closid_free_map_len = rdt_min_closid;
}
@@ -139,17 +139,21 @@ static int closid_alloc(void)
{
u32 closid = ffs(closid_free_map);
+ lockdep_assert_held(&rdtgroup_mutex);
+
if (closid == 0)
return -ENOSPC;
closid--;
- closid_free_map &= ~(1 << closid);
+ __clear_bit(closid, &closid_free_map);
return closid;
}
void closid_free(int closid)
{
- closid_free_map |= 1 << closid;
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ __set_bit(closid, &closid_free_map);
}
/**
@@ -161,7 +165,9 @@ void closid_free(int closid)
*/
static bool closid_allocated(unsigned int closid)
{
- return (closid_free_map & (1 << closid)) == 0;
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ return !test_bit(closid, &closid_free_map);
}
/**
--
2.20.1