Message-ID: <20250620185248.634101cc@nowhere>
Date: Fri, 20 Jun 2025 18:52:48 +0200
From: luca abeni <luca.abeni@...tannapisa.it>
To: Juri Lelli <juri.lelli@...hat.com>
Cc: Marcel Ziswiler <marcel.ziswiler@...ethink.co.uk>,
 linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>, Peter
 Zijlstra <peterz@...radead.org>, Vineeth Pillai <vineeth@...byteword.org>
Subject: Re: SCHED_DEADLINE tasks missing their deadline with
 SCHED_FLAG_RECLAIM jobs in the mix (using GRUB)

On Fri, 20 Jun 2025 17:28:28 +0200
Juri Lelli <juri.lelli@...hat.com> wrote:

> On 20/06/25 16:16, luca abeni wrote:
[...]
> > So, I had a look, trying to remember the situation... This is my
> > current understanding:
> > - the max_bw field should be just the maximum amount of CPU
> > bandwidth we want to use with reclaiming... It is rt_runtime_us /
> > rt_period_us; I guess it is cached in this field just to avoid
> > computing it every time.
> >   So, max_bw should be updated only when
> >   /proc/sys/kernel/sched_rt_{runtime,period}_us are written
> > - the extra_bw field represents an additional amount of CPU
> > bandwidth we can reclaim on each core (the original m-GRUB
> > algorithm just reclaimed Uinact, the utilization of inactive tasks).
> >   It is initialized to Umax when no SCHED_DEADLINE tasks exist and  
> 
> Is Umax == max_bw from above?

Yes; sorry about the confusion


> >   should be decreased by Ui when a task with utilization Ui becomes
> >   SCHED_DEADLINE (and increased by Ui when the SCHED_DEADLINE task
> >   terminates or changes scheduling policy). Since this value is
> >   per-core, Ui is divided by the number of cores in the root
> > domain... From what you write, I guess extra_bw is not correctly
> >   initialized/updated when a new root domain is created?  
> 
> It looks like so, yeah. After boot and when domains are dynamically
> created. But I am still not 100% sure; I only see weird numbers that I
> struggle to relate to what you say above. :)
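
To make the accounting I described above a bit more concrete, here is a
quick userspace sketch (definitely not the kernel code: all names and
fixed-point details below are made up for illustration) of how I expect
max_bw and extra_bw to evolve:

	/*
	 * Toy model of the GRUB bandwidth accounting described above.
	 * NOT the kernel implementation: names and fixed-point details
	 * are invented for illustration only.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define BW_SHIFT 20			/* utilizations as 20-bit fixed point */
	#define BW_UNIT  (1ULL << BW_SHIFT)

	struct toy_dl_bw {
		uint64_t max_bw;	/* rt_runtime_us / rt_period_us, cached */
		uint64_t extra_bw;	/* per-core reclaimable bandwidth */
		int nr_cpus;		/* CPUs in the root domain */
	};

	/* Should only change when sched_rt_{runtime,period}_us are written. */
	static void toy_update_max_bw(struct toy_dl_bw *bw,
				      uint64_t rt_runtime_us, uint64_t rt_period_us)
	{
		bw->max_bw = rt_runtime_us * BW_UNIT / rt_period_us;
	}

	/* With no SCHED_DEADLINE tasks, extra_bw starts at Umax (== max_bw). */
	static void toy_init_root_domain(struct toy_dl_bw *bw, int nr_cpus)
	{
		bw->nr_cpus = nr_cpus;
		bw->extra_bw = bw->max_bw;
	}

	/* Task with utilization Ui becomes SCHED_DEADLINE: extra_bw -= Ui / #cores. */
	static void toy_task_admit(struct toy_dl_bw *bw, uint64_t u_i)
	{
		bw->extra_bw -= u_i / bw->nr_cpus;
	}

	/* Task leaves SCHED_DEADLINE (or changes policy): give the share back. */
	static void toy_task_release(struct toy_dl_bw *bw, uint64_t u_i)
	{
		bw->extra_bw += u_i / bw->nr_cpus;
	}

	int main(void)
	{
		struct toy_dl_bw bw = { 0 };

		toy_update_max_bw(&bw, 950000, 1000000);	/* default 95% */
		toy_init_root_domain(&bw, 4);

		toy_task_admit(&bw, BW_UNIT / 10);		/* Ui = 0.1 */
		printf("max_bw=%llu extra_bw=%llu\n",
		       (unsigned long long)bw.max_bw,
		       (unsigned long long)bw.extra_bw);
		return 0;
	}

If the per-root-domain setup misses the "reset extra_bw to Umax and
re-account the admitted tasks" step modelled above, extra_bw would indeed
end up with strange values, which might explain the numbers you are seeing.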

BTW, while running some tests on different machines I think I found out
that 6.11 does not exhibit this issue (this still needs to be confirmed;
I am working on reproducing the test with different kernels on the same
machine).

If I manage to reproduce this result, I think I can run a bisect down to
the commit introducing the issue (git is telling me that I'll need about
15 tests :)
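
For reference, the bisect I have in mind would look roughly like this
(assuming 6.11 really turns out to be good and the current tree bad):

	git bisect start
	git bisect bad HEAD        # tree where deadlines are missed
	git bisect good v6.11      # tree that seems unaffected
	# then rebuild, rerun the SCHED_FLAG_RECLAIM test and mark each
	# step "git bisect good" or "git bisect bad" until git points at
	# the offending commit
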
So, stay tuned...


> > All this information is probably not properly documented... Should I
> > improve the description in
> > Documentation/scheduler/sched-deadline.rst or do you prefer some
> > comments in kernel/sched/deadline.c? (or .h?)  
> 
> I think ideally both. sched-deadline.rst should probably contain the
> whole picture with more information and .c/.h the condensed
> version.

OK, I'll try to do this next week.


			Thanks,
				Luca
