Message-ID: <3d8585fe-ab2f-6a06-bfc0-47569d755c69@amd.com>
Date: Mon, 20 Mar 2023 13:29:56 -0500
From: "Moger, Babu" <babu.moger@....com>
To: Reinette Chatre <reinette.chatre@...el.com>,
"corbet@....net" <corbet@....net>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>
Cc: "fenghua.yu@...el.com" <fenghua.yu@...el.com>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"x86@...nel.org" <x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>,
"paulmck@...nel.org" <paulmck@...nel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"quic_neeraju@...cinc.com" <quic_neeraju@...cinc.com>,
"rdunlap@...radead.org" <rdunlap@...radead.org>,
"damien.lemoal@...nsource.wdc.com" <damien.lemoal@...nsource.wdc.com>,
"songmuchun@...edance.com" <songmuchun@...edance.com>,
"peterz@...radead.org" <peterz@...radead.org>,
"jpoimboe@...nel.org" <jpoimboe@...nel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"chang.seok.bae@...el.com" <chang.seok.bae@...el.com>,
"pawan.kumar.gupta@...ux.intel.com"
<pawan.kumar.gupta@...ux.intel.com>,
"jmattson@...gle.com" <jmattson@...gle.com>,
"daniel.sneddon@...ux.intel.com" <daniel.sneddon@...ux.intel.com>,
"Das1, Sandipan" <Sandipan.Das@....com>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"james.morse@....com" <james.morse@....com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"bagasdotme@...il.com" <bagasdotme@...il.com>,
"eranian@...gle.com" <eranian@...gle.com>,
"christophe.leroy@...roup.eu" <christophe.leroy@...roup.eu>,
"jarkko@...nel.org" <jarkko@...nel.org>,
"adrian.hunter@...el.com" <adrian.hunter@...el.com>,
"quic_jiles@...cinc.com" <quic_jiles@...cinc.com>,
"peternewman@...gle.com" <peternewman@...gle.com>
Subject: Re: [PATCH v3 1/7] x86/resctrl: Add multiple tasks to the resctrl
group at once
Hi Reinette,
On 3/20/23 11:52, Reinette Chatre wrote:
> Hi Babu,
>
> On 3/20/2023 8:07 AM, Moger, Babu wrote:
>> On 3/16/23 15:33, Reinette Chatre wrote:
>>> On 3/16/2023 12:51 PM, Moger, Babu wrote:
>>>> On 3/16/23 12:12, Reinette Chatre wrote:
>>>>> On 3/16/2023 9:27 AM, Moger, Babu wrote:
>>>>>>> -----Original Message-----
>>>>>>> From: Reinette Chatre <reinette.chatre@...el.com>
>>>>>>> Sent: Wednesday, March 15, 2023 1:33 PM
>>>>>>> To: Moger, Babu <Babu.Moger@....com>; corbet@....net;
>>>>>>> tglx@...utronix.de; mingo@...hat.com; bp@...en8.de
>>>>>>> Cc: fenghua.yu@...el.com; dave.hansen@...ux.intel.com; x86@...nel.org;
>>>>>>> hpa@...or.com; paulmck@...nel.org; akpm@...ux-foundation.org;
>>>>>>> quic_neeraju@...cinc.com; rdunlap@...radead.org;
>>>>>>> damien.lemoal@...nsource.wdc.com; songmuchun@...edance.com;
>>>>>>> peterz@...radead.org; jpoimboe@...nel.org; pbonzini@...hat.com;
>>>>>>> chang.seok.bae@...el.com; pawan.kumar.gupta@...ux.intel.com;
>>>>>>> jmattson@...gle.com; daniel.sneddon@...ux.intel.com; Das1, Sandipan
>>>>>>> <Sandipan.Das@....com>; tony.luck@...el.com; james.morse@....com;
>>>>>>> linux-doc@...r.kernel.org; linux-kernel@...r.kernel.org;
>>>>>>> bagasdotme@...il.com; eranian@...gle.com; christophe.leroy@...roup.eu;
>>>>>>> jarkko@...nel.org; adrian.hunter@...el.com; quic_jiles@...cinc.com;
>>>>>>> peternewman@...gle.com
>>>>>>> Subject: Re: [PATCH v3 1/7] x86/resctrl: Add multiple tasks to the resctrl group
>>>>>>> at once
>>>>>>>
>>>>>>> Hi Babu,
>>>>>>>
>>>>>>> On 3/2/2023 12:24 PM, Babu Moger wrote:
>>>>>>>> The resctrl task assignment for MONITOR or CONTROL group needs to be
>>>>>>>> done one at a time. For example:
>>>>>>>>
>>>>>>>> $mount -t resctrl resctrl /sys/fs/resctrl/
>>>>>>>> $mkdir /sys/fs/resctrl/clos1
>>>>>>>> $echo 123 > /sys/fs/resctrl/clos1/tasks
>>>>>>>> $echo 456 > /sys/fs/resctrl/clos1/tasks
>>>>>>>> $echo 789 > /sys/fs/resctrl/clos1/tasks
>>>>>>>>
>>>>>>>> This is not user-friendly when dealing with hundreds of tasks. Also,
>>>>>>>> there is a syscall overhead for each command executed from user space.
>>>>>>>
>>>>>>> To support this change it may also be helpful to add that moving tasks take the
>>>>>>> mutex so attempting to move tasks in parallel will not achieve a significant
>>>>>>> performance gain.
>>>>>>
>>>>>> Agree. It may not be a significant performance gain. Will remove this line.
>>>>>
>>>>> It does not sound as though you are actually responding to my comment.
>>>>
>>>> I am confused. I am already saying there is syscall overhead for each
>>>> command if we move the tasks one by one. Now do you want me to add "moving
>>>> tasks take the mutex so attempting to move tasks in parallel will not
>>>> achieve a significant performance gain".
>>>>
>>>> It is contradictory, so I wanted to remove the line about performance.
>>>> Did I still miss something?
>>>
>>> Where is the contradiction?
>>>
>>> Consider your example:
>>> $echo 123 > /sys/fs/resctrl/clos1/tasks
>>> $echo 456 > /sys/fs/resctrl/clos1/tasks
>>> $echo 789 > /sys/fs/resctrl/clos1/tasks
>>>
>>> Yes, there is syscall overhead for each of the above lines. My statement was in
>>> support of this work by stating that a user aiming to improve performance by
>>> attempting the above in parallel would not be able to achieve a significant
>>> performance gain since the calls would end up being serialized.
>>
>> Ok. Sure. Will add the text. I may modify it a little bit.
>>>
>>> You are providing two motivations (a) "user-friendly when dealing with
>>> hundreds of tasks", and (b) syscall overhead. Have you measured the
>>> improvement this solution provides?
>>
>> No. I have not measured the performance improvement.
>
> The changelog makes a claim that the current implementation has overhead
> that is removed with this change. There is no data to support this claim.
My main motivation for this change is to make it user-friendly, so that
users can search for the pids and assign multiple tasks at a time. Originally
I did not have the line about performance, and I don't want to claim any
performance benefit. I will remove the performance claims.
>
> ...
>
>>>>>>>> +
>>>>>>>> + buf[nbytes - 1] = '\0';
>>>>>>>> +
>>>>>>>> rdtgrp = rdtgroup_kn_lock_live(of->kn);
>>>>>>>> if (!rdtgrp) {
>>>>>>>> rdtgroup_kn_unlock(of->kn);
>>>>>>>> return -ENOENT;
>>>>>>>> }
>>>>>>>> +
>>>>>>>> +next:
>>>>>>>> + if (!buf || buf[0] == '\0')
>>>>>>>> + goto unlock;
>>>>>>>> +
>>>>>>>> + pid_str = strim(strsep(&buf, ","));
>>>>>>>> +
>>>>>>>
>>>>>>> Could lib/cmdline.c:get_option() be useful?
>>>>>>
>>>>>> Yes. We could do that also. It may not be required for a simple case like this.
>>>>>
>>>>> Please keep an eye out for how much of it you end up duplicating ....
>>>>
>>>> Using get_options() will require at least two calls (one to get the
>>>> length and then one to read the integers). We would also need to allocate
>>>> the integer array dynamically. That is a lot of code if we are going that route.
>>>>
>>>
>>> I did not ask about get_options(), I asked about get_option().
>>
>> If you insist, I will use get_option(). But we still have to loop through the
>> whole string until get_option() returns 0. I can try that.
>
>
> I just asked whether get_option() could be useful. Could you please point out what
> I said that made you think that I insist on this change being made? If it matches
> your usage, then know it is available; if it does not, then don't use it.
Ok. I don't see a major benefit to using get_option() here, so I am not
planning to use it.
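For reference, the parsing I have in mind stays with strsep()/kstrtoint(),
roughly like the simplified sketch below (not the exact diff, just the shape
of it; locking, buffer setup and the final return value are omitted, and
rdtgroup_move_task() is the existing helper that moves a single task):

	/* Walk the comma-separated pid list until the buffer is consumed */
	while (buf && buf[0] != '\0') {
		pid_str = strim(strsep(&buf, ","));

		if (kstrtoint(pid_str, 0, &pid) || pid < 0) {
			ret = -EINVAL;
			break;
		}

		ret = rdtgroup_move_task(pid, rdtgrp, of);
		if (ret)
			break;
	}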
>
> ...
>
>>>> I can say "The failure pid will be logged in
>>>> /sys/fs/resctrl/info/last_cmd_status file."
>>>
>>> That will not be accurate. Not all errors include the pid.
>>
>> Can you please suggest?
>
> last_cmd_status provides a 512 char buffer to communicate details
> to the user. The buffer is cleared before the loop that moves all the
> tasks starts. If an error is encountered, a detailed message is written
> to the buffer. One option may be to append a string to the buffer that
> includes the pid? Perhaps something like:
> rdt_last_cmd_printf("Error encountered while moving task %d\n", pid);
Ok. I will try to add and test it.
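Something along these lines in the move loop (sketch only, reusing your
suggested message):

	ret = rdtgroup_move_task(pid, rdtgrp, of);
	if (ret) {
		/* Record which pid failed in last_cmd_status */
		rdt_last_cmd_printf("Error encountered while moving task %d\n", pid);
		break;
	}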
>
> Please feel free to improve.
>
> Reinette
>
>
--
Thanks
Babu Moger