Message-ID: <296ce13a-cc00-4853-b5ee-a1e09215bed5@amd.com>
Date: Thu, 26 Sep 2024 13:48:39 -0500
From: "Moger, Babu" <babu.moger@....com>
To: Reinette Chatre <reinette.chatre@...el.com>, corbet@....net,
fenghua.yu@...el.com, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com
Cc: x86@...nel.org, hpa@...or.com, paulmck@...nel.org, rdunlap@...radead.org,
tj@...nel.org, peterz@...radead.org, yanjiewtw@...il.com,
kim.phillips@....com, lukas.bulwahn@...il.com, seanjc@...gle.com,
jmattson@...gle.com, leitao@...ian.org, jpoimboe@...nel.org,
rick.p.edgecombe@...el.com, kirill.shutemov@...ux.intel.com,
jithu.joseph@...el.com, kai.huang@...el.com, kan.liang@...ux.intel.com,
daniel.sneddon@...ux.intel.com, pbonzini@...hat.com, sandipan.das@....com,
ilpo.jarvinen@...ux.intel.com, peternewman@...gle.com,
maciej.wieczor-retman@...el.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, eranian@...gle.com, james.morse@....com
Subject: Re: [PATCH v7 18/24] x86/resctrl: Auto Assign/unassign counters when
mbm_cntr_assign is enabled

Hi Reinette,
On 9/19/24 12:29, Reinette Chatre wrote:
> Hi Babu,
>
> Subject: "Assign" -> "assign"
Sure.
>
> On 9/4/24 3:21 PM, Babu Moger wrote:
>
>> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 61 ++++++++++++++++++++++++++
>> 1 file changed, 61 insertions(+)
>>
>> diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> index 21b9ca4ce493..bf94e4e05540 100644
>> --- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> +++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
>> @@ -2866,6 +2866,52 @@ static void schemata_list_destroy(void)
>> }
>> }
>>
>> +/*
>> + * Called when a new group is created. If `mbm_cntr_assign` mode is enabled,
>> + * counters are automatically assigned. Each group requires two counters:
>> + * one for the total event and one for the local event. Due to the limited
>> + * number of counters, assignments may fail in some cases. However, it is
>> + * not necessary to fail the group creation. Users have the option to
>> + * modify the assignments after the group has been created.
>> + */
>> +static int rdtgroup_assign_grp(struct rdtgroup *rdtgrp)
>> +{
>> + struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
>> + int ret = 0;
>> +
>> + if (!resctrl_arch_mbm_cntr_assign_enabled(r))
>> + return 0;
>> +
>> + if (is_mbm_total_enabled())
>> + ret = rdtgroup_assign_cntr(r, rdtgrp, NULL, QOS_L3_MBM_TOTAL_EVENT_ID);
>> +
>> + if (!ret && is_mbm_local_enabled())
>> + ret = rdtgroup_assign_cntr(r, rdtgrp, NULL, QOS_L3_MBM_LOCAL_EVENT_ID);
>> +
>> + return ret;
>> +}
>> +
>> +/*
>> + * Called when a group is deleted. Counters are unassigned if they are
>> + * in the assigned state.
>> + */
>> +static int rdtgroup_unassign_grp(struct rdtgroup *rdtgrp)
>> +{
>> + struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
>> + int ret = 0;
>> +
>> + if (!resctrl_arch_mbm_cntr_assign_enabled(r))
>> + return 0;
>> +
>> + if (is_mbm_total_enabled())
>> + ret = rdtgroup_unassign_cntr(r, rdtgrp, NULL, QOS_L3_MBM_TOTAL_EVENT_ID);
>> +
>> + if (!ret && is_mbm_local_enabled())
>> + ret = rdtgroup_unassign_cntr(r, rdtgrp, NULL, QOS_L3_MBM_LOCAL_EVENT_ID);
>> +
>> + return ret;
>> +}
>> +
>> static int rdt_get_tree(struct fs_context *fc)
>> {
>> struct rdt_fs_context *ctx = rdt_fc2context(fc);
>> @@ -2925,6 +2971,8 @@ static int rdt_get_tree(struct fs_context *fc)
>> if (ret < 0)
>> goto out_mongrp;
>> rdtgroup_default.mon.mon_data_kn = kn_mondata;
>> +
>> + rdtgroup_assign_grp(&rdtgroup_default);
>> }
>>
>> ret = rdt_pseudo_lock_init();
>> @@ -2955,6 +3003,7 @@ static int rdt_get_tree(struct fs_context *fc)
>> out_psl:
>> rdt_pseudo_lock_release();
>> out_mondata:
>> + rdtgroup_unassign_grp(&rdtgroup_default);
>> if (resctrl_arch_mon_capable())
>> kernfs_remove(kn_mondata);
>> out_mongrp:
>> @@ -3214,6 +3263,8 @@ static void rdt_kill_sb(struct super_block *sb)
>> resctrl_arch_disable_alloc();
>> if (resctrl_arch_mon_capable())
>> resctrl_arch_disable_mon();
>> +
>> + rdtgroup_unassign_grp(&rdtgroup_default);
>> resctrl_mounted = false;
>> kernfs_kill_sb(sb);
>> mutex_unlock(&rdtgroup_mutex);
>> @@ -3805,6 +3856,8 @@ static int rdtgroup_mkdir_mon(struct kernfs_node *parent_kn,
>> goto out_unlock;
>> }
>>
>> + rdtgroup_assign_grp(rdtgrp);
>> +
>> kernfs_activate(rdtgrp->kn);
>>
>> /*
>> @@ -3849,6 +3902,8 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
>> if (ret)
>> goto out_closid_free;
>>
>> + rdtgroup_assign_grp(rdtgrp);
>> +
>> kernfs_activate(rdtgrp->kn);
>>
>> ret = rdtgroup_init_alloc(rdtgrp);
>> @@ -3874,6 +3929,7 @@ static int rdtgroup_mkdir_ctrl_mon(struct kernfs_node *parent_kn,
>> out_del_list:
>> list_del(&rdtgrp->rdtgroup_list);
>> out_rmid_free:
>> + rdtgroup_unassign_grp(rdtgrp);
>> mkdir_rdt_prepare_rmid_free(rdtgrp);
>> out_closid_free:
>> closid_free(closid);
>> @@ -3944,6 +4000,9 @@ static int rdtgroup_rmdir_mon(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
>> update_closid_rmid(tmpmask, NULL);
>>
>> rdtgrp->flags = RDT_DELETED;
>> +
>> + rdtgroup_unassign_grp(rdtgrp);
>> +
>> free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>>
>> /*
>> @@ -3990,6 +4049,8 @@ static int rdtgroup_rmdir_ctrl(struct rdtgroup *rdtgrp, cpumask_var_t tmpmask)
>> cpumask_or(tmpmask, tmpmask, &rdtgrp->cpu_mask);
>> update_closid_rmid(tmpmask, NULL);
>>
>> + rdtgroup_unassign_grp(rdtgrp);
>> +
>> free_rmid(rdtgrp->closid, rdtgrp->mon.rmid);
>> closid_free(rdtgrp->closid);
>>
>
> Apart from the earlier comment about rdtgroup_assign_grp()/rdtgroup_unassign_grp() naming, please also
> take care with how these functions are integrated, since it seems to be inconsistent with respect to
> whether they are called on a mon capable resource. Also, I can see how the counters are removed when a
> CTRL_MON group and a MON group are explicitly removed, but it is not clear to me how, when a user removes
> a CTRL_MON group, the counters assigned to its child MON groups are unassigned.
I think we have a problem here while removing. The child MON group counters
may remain assigned when the CTRL_MON group is removed. Will fix it. Thanks
for pointing out.
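
Something along these lines should handle it (an untested sketch; the
helper name is just for illustration, and it assumes the child MON groups
are reachable via rdtgrp->mon.crdtgrp_list, the same list that
free_all_child_rdtgrp() walks):

static void rdtgroup_unassign_grp_hier(struct rdtgroup *rdtgrp)
{
	struct rdtgroup *sentry;

	/* Unassign the counters of every child MON group first. */
	list_for_each_entry(sentry, &rdtgrp->mon.crdtgrp_list,
			    mon.crdtgrp_list)
		rdtgroup_unassign_grp(sentry);

	/* Then unassign this CTRL_MON group's own counters. */
	rdtgroup_unassign_grp(rdtgrp);
}

rdtgroup_rmdir_ctrl() could then call this instead of
rdtgroup_unassign_grp(), before the child groups are freed. Return values
are ignored here since, as with group creation, an unassign failure need
not be fatal.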
--
Thanks
Babu Moger