Message-ID: <71bf9ee3-859c-4c3e-9db4-38c1ab35440a@linux.ibm.com>
Date: Sun, 13 Jul 2025 23:47:23 +0530
From: Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>,
        Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        vschneid@...hat.com, dhaval@...nis.ca, linux-kernel@...r.kernel.org,
        Madadi Vineeth Reddy <vineethr@...ux.ibm.com>
Subject: Re: [PATCH v3 4/6] sched/fair: Limit run to parity to the min slice
 of enqueued entities

Hi Vincent, Peter,

On 10/07/25 18:04, Peter Zijlstra wrote:
> 
>>> If I set my task’s custom slice to a larger value but another task has a smaller slice,
>>> this change will cap my protected window to the smaller slice. Does that mean my custom
>>> slice is no longer honored?
>>
>> What do you mean by honored ? EEVDF never mandates that a request of
>> size slice will be done in one go. Slice mainly defines the deadline
>> and orders the entities but not that it will always run your slice in
>> one go. Run to parity tries to minimize the number of context switches
>> between runnable tasks but must not break fairness and lag theorem. So
>> If your task A has a slice of 10ms and task B wakes up with a slice of
>> 1ms. B will preempt A because its deadline is earlier. If task B still
>> wants to run after its slice is exhausted, it will not be eligible and
>> task A will run until task B becomes eligible, which is as long as
>> task B's slice.
> 
> Right. Added if you don't want wakeup preemption, we've got SCHED_BATCH
> for you.

Thanks for the explanation. I now understand that the slice is only used for
deadline calculation and for ordering eligible tasks.

Before your patch, I observed that each task ran for its full custom slice
before being preempted, which led me to assume that the slice directly
controlled uninterrupted runtime.
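
(For reference, a custom slice can be requested from user space roughly as in
the sketch below. This is a minimal example only: it assumes the
sched_attr::sched_runtime knob for SCHED_OTHER tasks and defines struct
sched_attr locally, since glibc does not wrap sched_setattr().)

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sched.h>
#include <unistd.h>
#include <sys/syscall.h>

/* minimal local copy of the uapi struct; attr.size tells the kernel how
 * much of it we filled in */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;		/* custom slice, in nanoseconds */
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_OTHER;
	attr.sched_runtime = 10ULL * 1000 * 1000;	/* ask for a 10 ms slice */

	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");

	for (;;)
		;	/* busy loop so the slice behaviour can be observed */
}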

With the patch series applied and RUN_TO_PARITY enabled, I now see the expected behavior:
- Default slice (~2.8 ms): tasks run for ~3 ms each.
- Increasing one task's slice does not extend its single-run duration; it stays at ~3 ms.
- Decreasing one task's slice shortens everyone's run to that new minimum.

With the patch series and NO_RUN_TO_PARITY, I see runtimes near 1 ms (CONFIG_HZ=1000).
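
(The RUN_TO_PARITY / NO_RUN_TO_PARITY switch here is the usual sched_feat
toggle; a minimal sketch of flipping it, assuming debugfs is mounted at
/sys/kernel/debug and the features file is writable:)

#include <stdio.h>

int main(int argc, char **argv)
{
	/* pass "RUN_TO_PARITY" to enable, "NO_RUN_TO_PARITY" to disable */
	const char *feat = argc > 1 ? argv[1] : "NO_RUN_TO_PARITY";
	FILE *f = fopen("/sys/kernel/debug/sched/features", "w");

	if (!f) {
		perror("open sched features");
		return 1;
	}
	fputs(feat, f);
	if (fclose(f)) {	/* the write is flushed to debugfs on close */
		perror("write sched features");
		return 1;
	}
	return 0;
}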

However, without your patches I was still seeing ~3 ms runs even with NO_RUN_TO_PARITY,
which confused me: I expected the runtime to drop to ~1 ms (preemption at every tick)
rather than run up to the default slice.

Without your patches, RUN_TO_PARITY behaves as expected: a task runs up to its
slice while it is eligible.

I ran these tests with 16 stress-ng threads pinned to one CPU.
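
(For completeness, the pinning is just a CPU affinity mask; a minimal C
sketch, with the CPU number picked purely for illustration:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* run everything on CPU 0 */

	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}

	/* fork/exec the actual workload here; children inherit the mask */
	return 0;
}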

Please let me know if my understanding is incorrect, and why I was still seeing ~3 ms
runtimes with NO_RUN_TO_PARITY before this patch series.

Thanks,
Madadi Vineeth Reddy
