Message-Id: <dc0712e4-f66f-a92b-fbf9-a3a84cf982a6@linux.ibm.com>
Date:   Thu, 19 Sep 2019 22:11:40 +0530
From:   Parth Shah <parth@...ux.ibm.com>
To:     Valentin Schneider <valentin.schneider@....com>,
        Patrick Bellasi <patrick.bellasi@....com>
Cc:     linux-kernel@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        subhra mazumdar <subhra.mazumdar@...cle.com>,
        tim.c.chen@...ux.intel.com, mingo@...hat.com,
        morten.rasmussen@....com, dietmar.eggemann@....com, pjt@...gle.com,
        vincent.guittot@...aro.org, quentin.perret@....com,
        dhaval.giani@...cle.com, daniel.lezcano@...aro.org, tj@...nel.org,
        rafael.j.wysocki@...el.com, qais.yousef@....com,
        Patrick Bellasi <patrick.bellasi@...bug.net>
Subject: Re: Usecases for the per-task latency-nice attribute



On 9/18/19 9:12 PM, Valentin Schneider wrote:
> On 18/09/2019 15:18, Patrick Bellasi wrote:
>>> 1. Name: What should be the name for such attr for all the possible usecases?
>>> =============
>>> Latency-nice is the proposed name as of now, where a lower value indicates
>>> that the task does not care much about latency.
>>
>> If by "lower value" you mean -19 (in the proposed [-20,19] range), then
>> I think the meaning should be the opposite.
>>
>> A -19 latency-nice task is a task which is not willing to give up
>> latency. For those tasks, for example, we want to reduce the wake-up
>> latency as much as possible.
>>
>> This will keep its semantics aligned with those of process niceness
>> values, which range from -20 (most favourable to the process) to 19
>> (least favourable to the process).
>>
> 
> I don't want to start a bikeshedding session here, but I agree with Parth
> on the interpretation of the values.
> 
> I've always read niceness values as
> -20 (least nice to the system / other processes)
> +19 (most nice to the system / other processes)
> 
> So following this trend I'd see for latency-nice:


So, jotting this down separately: in case we keep the "latency-nice"
terminology, we will need to pick one of these two interpretations:

1).
> -20 (least nice to latency, i.e. sacrifice latency for throughput)
> +19 (most nice to latency, i.e. sacrifice throughput for latency)
> 

2).
-20 (least nice to other tasks in terms of sacrificing latency, i.e.
latency-sensitive)
+19 (most nice to other tasks in terms of sacrificing latency, i.e.
latency-forgoing)
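
To make the sign convention of interpretation (2) concrete, here is a minimal
sketch in plain C (all names, the default of 0 and the threshold are made up
for illustration; this is not the proposed kernel interface):

#include <stdbool.h>

#define MIN_LATENCY_NICE  (-20)
#define MAX_LATENCY_NICE    19

/* Clamp a requested value into the proposed [-20, 19] range. */
static inline int latency_nice_clamp(int val)
{
        if (val < MIN_LATENCY_NICE)
                return MIN_LATENCY_NICE;
        if (val > MAX_LATENCY_NICE)
                return MAX_LATENCY_NICE;
        return val;
}

/*
 * Interpretation (2): a lower value means the task is less willing to
 * sacrifice latency, i.e. it is more latency-sensitive.  Treating
 * anything below an assumed default of 0 as latency-sensitive is
 * purely illustrative.
 */
static inline bool task_is_latency_sensitive(int latency_nice)
{
        return latency_nice_clamp(latency_nice) < 0;
}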


> However...
> 
>>> But there seems to be a bit of confusion on whether we want biasing as well
>>> (latency-biased) or something similar, in which case "latency-nice" may
>>> confuse the end-user.
>>
>> AFAIU PeterZ's point was "just" that if we call it "-nice" it has to
>> behave like "nice values" to avoid confusing users. But, if we come up
>> with a different name, maybe we will have more freedom.
>>
> 
> ...just getting rid of the "-nice" would leave us free not to have to
> interpret the values as "nice to / not nice to" :)
> 
>> Personally, I like both "latency-nice" and "latency-tolerant", where:
>>
>>  - latency-nice:
>>    should have a better understanding based on pre-existing concepts
>>
>>  - latency-tolerant:
>>    decouples its meaning a bit from niceness, thus giving maybe a bit
>>    more freedom in its complete definition and perhaps avoiding any
>>    possible interpretation confusion like the one I commented on above.
>>
>> Fun fact: there was also the latency-nasty proposal from PaulMK :)
>>
> 
> [...]
> 
>>
>> $> Wakeup path tunings
>> ==========================
>>
>> Some additional possible use-cases were already discussed in [3]:
>>
>>  - dynamically tune the policy of a task among SCHED_{OTHER,BATCH,IDLE}
>>    depending on crossing a certain pre-configured threshold of latency
>>    niceness.
>>   
>>  - dynamically bias the vruntime updates we do in place_entity()
>>    depending on the actual latency niceness of a task.
>>   
>>    PeterZ thinks this is dangerous but that we can "(carefully) fumble a
>>    bit there."
>>   
>>  - bias the decisions we take in check_preempt_tick() still depending
>>    on a relative comparison of the current and wakeup task latency
>>    niceness values.
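
As a rough illustration of the place_entity() bullet above, the sleeper
credit could be scaled by the latency niceness along these lines (a sketch
only; the helper name and the scaling factor are made up, and the real
place_entity() logic is untouched here):

/*
 * Sketch: scale the sleeper credit used in place_entity() by the
 * task's latency niceness (interpretation (2): a negative value means
 * latency-sensitive).  A latency-sensitive task gets a larger credit,
 * so it wakes up with a smaller vruntime and is picked earlier; a
 * latency-forgoing task gets a smaller credit.
 */
static unsigned long scale_sleeper_credit(unsigned long thresh, int latency_nice)
{
        /* maps [-20, 19] to roughly [2x, 0.05x] of the default credit */
        return thresh * (unsigned long)(20 - latency_nice) / 20;
}
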
> 
> Aren't we missing the point about tweaking the sched domain scans (which
> AFAIR was the original point for latency-nice)?
> 
> Something like: the default value keeps the current behaviour, and
> - Being less latency-sensitive means increasing the scans (e.g. trending
>   towards only going through the slow wakeup-path at the extreme setting)
> - Being more latency-sensitive means reducing the scans (e.g. trending
>   towards a fraction of the domain scanned in the fast-path at the extreme
>   setting).
> 

Correct. But I was pondering over the range of values required for this case.
Is a range of just [-20,19] sufficient even for larger systems?
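
For reference, a made-up mapping from [-20,19] to a fast-path scan width
could look like the following (names and numbers are illustrative only).
With 40 distinct values each step is ~2.5% of the LLC size, which is exactly
the granularity question above:

/*
 * Sketch: map latency_nice in [-20, 19] to the number of CPUs scanned
 * in a select_idle_cpu()-style fast path, following the description
 * above: more latency-sensitive (-20) => scan only a small fraction,
 * less latency-sensitive (+19) => scan the whole LLC, trending towards
 * the slow path.  Note that the default (0) lands at roughly half the
 * LLC here, whereas a real implementation would presumably keep the
 * current behaviour at the default.
 */
static int scan_width(int nr_llc_cpus, int latency_nice)
{
        int width = nr_llc_cpus * (latency_nice + 21) / 40;

        return width > 0 ? width : 1;   /* always scan at least one CPU */
}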

>>
> 
> $> Load balance tuning
> ======================
> 
> Already mentioned these in [4]:
> 
> - Increase (reduce) nr_balance_failed threshold when trying to active
>   balance a latency-sensitive (non-latency-sensitive) task.
> 
> - Increase (decrease) sched_migration_cost factor in task_hot() for
>   latency-sensitive (non-latency-sensitive) tasks.
> 

Thanks for listing down your ideas.

These are pretty useful optimizations in general. But one may wonder: if we
reduce the idle-core search scans in the wake-up path and by chance select a
busy core, then one would expect the load balancer to move the task to an
idle core.

If I got it correctly, in such cases sched_migration_cost should be
increased carefully, right?
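
For reference, the sched_migration_cost idea quoted above might look
something like this (a sketch with a made-up scaling factor, not a
proposal):

/*
 * Sketch: scale sysctl_sched_migration_cost in a task_hot()-style
 * check by latency niceness.  A latency-sensitive task (negative
 * value) is treated as cache-hot for longer, so the load balancer is
 * more reluctant to migrate it; a latency-forgoing task is migrated
 * more freely.
 */
static unsigned long long scaled_migration_cost(unsigned long long cost,
                                                int latency_nice)
{
        /* -20 -> 2x the cost, 0 -> unchanged, +19 -> ~0.05x */
        return cost * (unsigned long long)(20 - latency_nice) / 20;
}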


>>> References:
>>> ===========
>>> [1]. https://lkml.org/lkml/2019/8/30/829
>>> [2]. https://lkml.org/lkml/2019/7/25/296
>>
>>   [3]. Message-ID: <20190905114709.GM2349@...ez.programming.kicks-ass.net>
>>        https://lore.kernel.org/lkml/20190905114709.GM2349@hirez.programming.kicks-ass.net/
>>
> 
> [4]: https://lkml.kernel.org/r/3d3306e4-3a78-5322-df69-7665cf01cc43@arm.com
> 
>>
>> Best,
>> Patrick
>>

Thanks,
Parth
