Date:   Tue, 9 Jul 2019 17:46:18 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Chris Redpath <chris.redpath@....com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Morten Rasmussen <Morten.Rasmussen@....com>,
        Dietmar Eggemann <Dietmar.Eggemann@....com>
Subject: Re: [PATCH] sched/fair: Update rq_clock, cfs_rq before migrating for
 asym cpu capacity

On Tue, 9 Jul 2019 at 17:42, Chris Redpath <chris.redpath@...s.arm.com> wrote:
>
> On 09/07/2019 16:36, Vincent Guittot wrote:
> > Hi Chris,
> >
> >>
> >> We enter this code quite often in our testing: most individual runs of a
> >> test involving small tasks have at least one hit where this patch changes
> >> the clock.
> >
> > Do you have a rt-app file that you can share ?
> >
>
> The ThreeSmallTasks test which is the worst hit produces this:
>
> {
>      "tasks": {
>          "small_0": {
>              "policy": "SCHED_OTHER",
>              "delay": 0,
>              "loop": 1,
>              "phases": {
>                  "p000001": {
>                      "loop": 62,
>                      "run": 2880,
>                      "timer": {
>                          "ref": "small_0",
>                          "period": 16000
>                      }
>                  }
>              }
>          },
>          "small_1": {
>              "policy": "SCHED_OTHER",
>              "delay": 0,
>              "loop": 1,
>              "phases": {
>                  "p000001": {
>                      "loop": 62,
>                      "run": 2880,
>                      "timer": {
>                          "ref": "small_1",
>                          "period": 16000
>                      }
>                  }
>              }
>          },
>          "small_2": {
>              "policy": "SCHED_OTHER",
>              "delay": 0,
>              "loop": 1,
>              "phases": {
>                  "p000001": {
>                      "loop": 62,
>                      "run": 2880,
>                      "timer": {
>                          "ref": "small_2",
>                          "period": 16000
>                      }
>                  }
>              }
>          }
>      },
>      "global": {
>          "default_policy": "SCHED_OTHER",
>          "duration": -1,
>          "calibration": 264,
>          "logdir": "/root/devlib-target"
>      }
> }
>
> when I run it

Thanks, I will give it a try on my b.L platform.
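
For reference, a rough back-of-the-envelope view of what this config amounts
to (a minimal sketch, assuming rt-app's "run" and "period" values are in
microseconds): each task runs for ~2.88 ms out of a 16 ms period, i.e. about
18% utilization per task at ~62.5 Hz, repeated 62 times for roughly one
second of test time.

#include <stdio.h>

int main(void)
{
	/* Values taken from the rt-app config above; units assumed to be us. */
	const double run_us = 2880.0;      /* per-activation runtime          */
	const double period_us = 16000.0;  /* activation period (~62.5 Hz)    */
	const int loops = 62;              /* activations per task            */
	const int ntasks = 3;

	double util = run_us / period_us;            /* per-task duty cycle   */
	double duration_s = loops * period_us / 1e6; /* per-task test length  */

	printf("per-task util : ~%.0f%%\n", util * 100.0);          /* ~18%   */
	printf("total util    : ~%.0f%%\n", ntasks * util * 100.0); /* ~54%   */
	printf("test duration : ~%.2f s per task\n", duration_s);   /* ~0.99  */
	return 0;
}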

>
> >>
> >> That said, despite the relatively high number of hits, only about 5% of
> >> runs see enough additional energy consumed to trigger a test failure. We
> >> do try to keep the system as quiet as possible and only run for a few
> >> seconds, so the impact we see in testing is probably higher than in the
> >> real world.
> >
> > Yeah, I'm curious to see the impact on a real system which has a
> > 60fps screen update, like an Android phone.
> >
>
> I wouldn't expect much change there, but I would on the idle-ish
> homescreen/day-of-use type benchmarks.
>
> If I had a platform with any kind of representative energy use, I'd test
> it :)
