Message-ID: <998d0e4a0802240512t859c5a1y3c30bd401530a43@mail.gmail.com>
Date:	Sun, 24 Feb 2008 14:12:47 +0100
From:	"J.C. Pizarro" <jcpiza@...il.com>
To:	"Rik van Riel" <riel@...hat.com>, "Mike Galbraith" <efault@....de>,
	LKML <linux-kernel@...r.kernel.org>,
	"Linus Torvalds" <torvalds@...ux-foundation.org>
Subject: Re: Please, put 64-bit counter per task and incr.by.one each ctxt switch.

Good morning :)

On 2008/2/24, Rik van Riel <riel@...hat.com> wrote:
> OK, one last reply on the (overly optimistic?) assumption that you are not a troll.
>  > +++ linux-2.6_git-20080224/include/linux/sched.h        2008-02-24
>  > 04:50:18.000000000 +0100
>  > @@ -1007,6 +1007,12 @@
>  >         struct hlist_head preempt_notifiers;
>  >  #endif
>  >
>  > +       unsigned long long ctxt_switch_counts; /* 64-bit switches' count */
>  > +       /* ToDo:
>  > +        *  To implement a poller/clock for CPU-scheduler that only reads
>  > +        *   these counts of context switches of the runqueue's tasks.
>  > +        *  No problem if this poller/clock is not implemented. */
>
> So you're introducing a statistic, but have not yet written any code
>  that uses it?

It's a statistic, yes, but it's a very important parameter for the CPU scheduler.
The CPU scheduler would know the number of context switches of each task
 before taking a decision, instead of deciding blindly forever!

Statistically, over the last sampling interval there are tasks X with more
context switches and tasks Y with fewer, tracked with the historical
 EWMA formula "(1-alpha)*prev + alpha*current", 0 < alpha < 1.
(measure this value V as a velocity too, in number of ctxt-switches/second)

      Put more weight on X than on Y, for the extra interactivity that X wants
      (X will have a higher V and Y a lower V).
      Add an exception to avoid eternal humility: after a long humble
       period, oscillate the weights (a sin(x)-like behaviour).

The missing code has to be implemented by everybody, because
1. Users don't want to lose interactivity on an overloaded CPU.
2. There is a lot of badly organized CPU-scheduler code that I don't
     want to touch.

>  > +       p->ctxt_switch_counts = 0ULL; /* task's 64-bit counter inited 0 */
>
> Because we can all read C, there is no need to tell people in comments
>  what the code does.  Comments are there to explain why the code does
>  things, if an explanation is needed.

OK.

>  > >  > I will explain your later why of it.
>  > >
>  > > ... and explain exactly why the kernel needs this extra code.
>  >
>  > One reason: for the objective of gain interactivity, it's an issue that
>  >  CFS fair scheduler lacks it.

> Your patch does not actually help interactivity, because all it does
>  is add an irq spinlock in a hot path (bad idea) and a counter which
>  nothing reads.

Then remove the lock/unlock around the task counter that I had added;
I'm not sure whether it's safe, because I didn't read all the locking
along that code path.

On 2008/2/24, Mike Galbraith <efault@....de> wrote:
>  > One reason: for the objective of gain interactivity, it's an issue that
>  >  CFS fair scheduler lacks it.
>
> A bug report would be a much better first step toward resolution of any
>  interactivity issues you're seeing than posts which do nothing but
>  suggest that there may be a problem.
>
>  First define the problem, _then_ fix it.

It's the eternal blindness problem in the overloaded-CPU scenario on desktops.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
