Date:   Wed, 13 May 2020 16:44:53 +0530
From:   Parth Shah <parth@...ux.ibm.com>
To:     Dietmar Eggemann <dietmar.eggemann@....com>,
        linux-kernel@...r.kernel.org
Cc:     peterz@...radead.org, mingo@...hat.com, vincent.guittot@...aro.org,
        qais.yousef@....com, chris.hyser@...cle.com,
        pkondeti@...eaurora.org, patrick.bellasi@...bug.net,
        valentin.schneider@....com, David.Laight@...LAB.COM,
        pjt@...gle.com, pavel@....cz, tj@...nel.org,
        dhaval.giani@...cle.com, qperret@...gle.com,
        tim.c.chen@...ux.intel.com
Subject: Re: [PATCH v5 3/4] sched: Allow sched_{get,set}attr to change
 latency_nice of the task



On 5/13/20 3:11 PM, Parth Shah wrote:
> 
> 
> On 5/11/20 4:43 PM, Dietmar Eggemann wrote:
>> On 28/02/2020 10:07, Parth Shah wrote:
>>> Introduce the latency_nice attribute to sched_attr and provide a
>>> mechanism to change the value with the use of sched_setattr/sched_getattr
>>> syscall.
>>>
>>> Also add new flag "SCHED_FLAG_LATENCY_NICE" to hint the change in
>>> latency_nice of the task on every sched_setattr syscall.
>>>
>>> Signed-off-by: Parth Shah <parth@...ux.ibm.com>
>>> Reviewed-by: Qais Yousef <qais.yousef@....com>
>>
>> [...]
>>
>>> #endif /* _UAPI_LINUX_SCHED_TYPES_H */
>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>> index 866ea3d2d284..cd1fb9c8be26 100644
>>> --- a/kernel/sched/core.c
>>> +++ b/kernel/sched/core.c
>>> @@ -4710,6 +4710,9 @@ static void __setscheduler_params(struct task_struct *p,
>>>  	p->rt_priority = attr->sched_priority;
>>>  	p->normal_prio = normal_prio(p);
>>>  	set_load_weight(p, true);
>>> +
>>> +	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
>>> +		p->latency_nice = attr->sched_latency_nice;
>>>  }
>>
>> How do you make sure that p->latency_nice can be set independently from
>> p->static_prio?
>>
>> AFAICS, util_clamp achieves this by relying on SCHED_FLAG_KEEP_PARAMS,
>> so completely bypassing __setscheduler_params() and using its own
>> __setscheduler_uclamp().
>>
> 
> Right, good catch.
> Using SCHED_FLAG_LATENCY_NICE/SCHED_FLAG_ALL is required to change the
> latency_nice value, but currently setting latency_nice also changes
> static_prio.
> 
> One possible solution here is to move the above code to _setscheduler():
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 6031ec58c7ae..44bcbf060718 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4731,9 +4731,6 @@ static void __setscheduler_params(struct task_struct *p,
>         p->rt_priority = attr->sched_priority;
>         p->normal_prio = normal_prio(p);
>         set_load_weight(p, true);
> -
> -       if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
> -               p->latency_nice = attr->sched_latency_nice;
>  }
> 
>  /* Actually do priority change: must hold pi & rq lock. */
> @@ -4749,6 +4746,13 @@ static void __setscheduler(struct rq *rq, struct task_struct *p,
> 
>         __setscheduler_params(p, attr);
> 
> +       /*
> +        * Change latency_nice value only when SCHED_FLAG_LATENCY_NICE or
> +        * SCHED_FLAG_ALL sched_flag is set.
> +        */
> +       if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
> +               p->latency_nice = attr->sched_latency_nice;
> +
> 
> This should allow setting the value only when the above flags are set, and
> also prevents changing it when SCHED_FLAG_KEEP_PARAMS/SCHED_FLAG_KEEP_ALL is
> passed.

We should also skip calling __setscheduler_params(p, attr) entirely when
attr->sched_flags == SCHED_FLAG_LATENCY_NICE, so that a latency_nice-only
request does not touch the other scheduling parameters.
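
Putting the two pieces together, a minimal sketch of how __setscheduler()
could end up looking (based on the diff above; the rest of the function is
elided, so treat this as an illustration rather than the final patch):

/* Actually do priority change: must hold pi & rq lock. */
static void __setscheduler(struct rq *rq, struct task_struct *p,
			   const struct sched_attr *attr, bool keep_boost)
{
	/*
	 * Skip the regular parameter update for a latency_nice-only
	 * request so that static_prio/rt_priority stay untouched.
	 */
	if (attr->sched_flags != SCHED_FLAG_LATENCY_NICE)
		__setscheduler_params(p, attr);

	/*
	 * Change latency_nice value only when SCHED_FLAG_LATENCY_NICE or
	 * SCHED_FLAG_ALL is set.  SCHED_FLAG_KEEP_PARAMS requests bypass
	 * __setscheduler() entirely, so they cannot change it either.
	 */
	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
		p->latency_nice = attr->sched_latency_nice;

	/* ... prio / sched_class selection as before ... */
}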

The other way, of course, is to rely on SCHED_FLAG_KEEP_PARAMS and bypass
__setscheduler_params() altogether, just like uclamp does with its own
__setscheduler_uclamp(); a sketch of that follows below.
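
For completeness, here is a rough sketch of that alternative, mirroring how
__setscheduler_uclamp() is wired up in __sched_setscheduler(). The helper
name __setscheduler_latency_nice() is made up for illustration and the call
site is abridged:

/* Hypothetical helper, modelled on __setscheduler_uclamp(): */
static void __setscheduler_latency_nice(struct task_struct *p,
					const struct sched_attr *attr)
{
	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
		p->latency_nice = attr->sched_latency_nice;
}

/* In __sched_setscheduler(), roughly: */
	if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS))
		__setscheduler(rq, p, attr, pi);

	__setscheduler_uclamp(p, attr);
	__setscheduler_latency_nice(p, attr);

That keeps latency_nice settable together with SCHED_FLAG_KEEP_PARAMS, at the
cost of one more special-cased helper next to uclamp.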

> 
> 
> Thanks,
> Parth
> 
