Date:   Wed, 16 Jun 2021 17:24:36 +0530
From:   Viresh Kumar <viresh.kumar@...aro.org>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Rafael Wysocki <rjw@...ysocki.net>,
        Ionela Voinescu <ionela.voinescu@....com>,
        Ben Segall <bsegall@...gle.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Ingo Molnar <mingo@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Juri Lelli <juri.lelli@...hat.com>,
        Mel Gorman <mgorman@...e.de>,
        Peter Zijlstra <peterz@...radead.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Sudeep Holla <sudeep.holla@....com>,
        "open list:THERMAL" <linux-pm@...r.kernel.org>,
        Qian Cai <quic_qiancai@...cinc.com>,
        ACPI Devel Maling List <linux-acpi@...r.kernel.org>,
        Linux Doc Mailing List <linux-doc@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        "Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH V2 0/3] cpufreq: cppc: Add support for frequency
 invariance

On 16-06-21, 12:02, Vincent Guittot wrote:
> I tested your branch and got the following while booting:
> 
> [   24.454543] zswap: loaded using pool lzo/zbud
> [   24.454753] pstore: Using crash dump compression: deflate
> [   24.454776] AppArmor: AppArmor sha1 policy hashing enabled
> [   24.454784] ima: No TPM chip found, activating TPM-bypass!
> [   24.454789] ima: Allocated hash algorithm: sha256
> [   24.454801] ima: No architecture policies found
> [   24.455750] pcieport 0000:0f:00.0: Adding to iommu group 0
> [   24.893888] ------------[ cut here ]------------
> [   24.893891] WARNING: CPU: 95 PID: 1442 at
> drivers/cpufreq/cppc_cpufreq.c:123 cppc_scale_freq_workfn+0xc8/0xf8
> [   24.893901] Modules linked in:
> [   24.893906] CPU: 95 PID: 1442 Comm: cppc_fie Not tainted 5.13.0-rc6+ #359
> [   24.893910] Hardware name: To be filled by O.E.M. Saber/Saber, BIOS
> 0ACKL026 03/19/2019
> [   24.893912] pstate: 20400009 (nzCv daif +PAN -UAO -TCO BTYPE=--)
> [   24.893915] pc : cppc_scale_freq_workfn+0xc8/0xf8
> [   24.893918] lr : cppc_scale_freq_workfn+0x5c/0xf8
> [   24.893921] sp : ffff80003727bd90
> [   24.893922] x29: ffff80003727bd90 x28: 0000000000000000 x27: ffff800010ec2000
> [   24.893928] x26: ffff800010ec2000 x25: ffff8000107c3d90 x24: 0000000000000001
> [   24.893932] x23: ffff000816244880 x22: ffff8000113f9000 x21: ffff009f825a0a80
> [   24.893935] x20: ffff009efc394220 x19: ffff800011199000 x18: 000000000000001b
> [   24.893939] x17: 0000000000000007 x16: 0000000000000001 x15: 00000000000000bf
> [   24.893943] x14: 0000000000000016 x13: 000000000000029b x12: 0000000000000016
> [   24.893946] x11: 0000000000000000 x10: 0000000000000000 x9 : ffff009efc6958c0
> [   24.893950] x8 : ffff009efc394248 x7 : 0000000002bde780 x6 : 00000000ffffffff
> [   24.893954] x5 : 00000000916e502a x4 : 00000000d9730e80 x3 : ffffffffffffffff
> [   24.893958] x2 : 00000000001e8480 x1 : 00000000002625a0 x0 : 0000000000000401
> [   24.893962] Call trace:
> [   24.893964]  cppc_scale_freq_workfn+0xc8/0xf8
> [   24.893967]  kthread_worker_fn+0x110/0x318
> [   24.893971]  kthread+0xf4/0x120
> [   24.893973]  ret_from_fork+0x10/0x18
> [   24.893977] ---[ end trace ea6dbaf832bce3e4 ]---

Thanks Vincent.

This is triggered by this check in cppc_scale_freq_workfn():

        if (WARN_ON(local_freq_scale > 1024))

Looks like something is fishy in the perf calculations here: after
reading the counters, we try to scale the result into the 0-1024 range,
but it came out larger than that.
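
For reference, here is a minimal standalone sketch of the arithmetic that
check guards (this is not the actual cppc_cpufreq.c code; the helper name,
platform perf values and counter samples below are made up for
illustration): delivered perf is derived from the deltas of the CPPC
feedback counters, then mapped onto the 0..1024 capacity range against
highest_perf. If the computed perf ends up above highest_perf, e.g. due to
inconsistent counter reads, the scale exceeds 1024.

#include <stdint.h>
#include <stdio.h>

#define CAPACITY_SCALE 1024ULL	/* stand-in for SCHED_CAPACITY_SCALE */

struct fb_ctrs {
	uint64_t reference;	/* constant-rate reference counter */
	uint64_t delivered;	/* actual delivered-cycles counter */
};

/* delivered perf over the window, relative to reference_perf */
static uint64_t perf_from_fbctrs(uint64_t reference_perf,
				 const struct fb_ctrs *prev,
				 const struct fb_ctrs *now)
{
	uint64_t delta_ref = now->reference - prev->reference;
	uint64_t delta_del = now->delivered - prev->delivered;

	if (!delta_ref)
		return reference_perf;

	return reference_perf * delta_del / delta_ref;
}

int main(void)
{
	/* hypothetical platform performance levels */
	uint64_t reference_perf = 100, highest_perf = 300;

	/* counter window where delivered perf comes out above highest_perf */
	struct fb_ctrs prev = { .reference = 1000000, .delivered = 2800000 };
	struct fb_ctrs now  = { .reference = 2000000, .delivered = 5900000 };

	uint64_t perf  = perf_from_fbctrs(reference_perf, &prev, &now);
	uint64_t scale = perf * CAPACITY_SCALE / highest_perf;

	printf("delivered perf = %llu, freq scale = %llu\n",
	       (unsigned long long)perf, (unsigned long long)scale);

	/* this is the condition the kernel's WARN_ON catches */
	if (scale > CAPACITY_SCALE)
		printf("scale exceeds 1024 -> would trip the WARN_ON\n");

	return 0;
}

With these made-up numbers the delivered perf works out to 310 against a
highest_perf of 300, giving a scale of ~1058, which is exactly the kind of
out-of-range value the warning reports.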

Will keep you posted.

-- 
viresh
