Date:   Mon, 24 Feb 2020 01:12:41 +0100
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Qian Cai <cai@....pw>
Cc:     "Rafael J. Wysocki" <rjw@...ysocki.net>, elver@...gle.com,
        Linux PM <linux-pm@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH -next] power/qos: fix a data race in pm_qos_*_value

On Fri, Feb 21, 2020 at 10:09 PM Qian Cai <cai@....pw> wrote:
>
> cpu_latency_constraints.target_value could be accessed concurrently, as
> noticed by KCSAN:

Yes, they could, pretty much by design.

So why *exactly* is this a problem?

>  LTP: starting ppoll01
>  BUG: KCSAN: data-race in cpu_latency_qos_limit / pm_qos_update_target

It may be a bug under certain conditions, but you don't mention what
those conditions are.  Reporting it as a general bug is inaccurate at
the very least.
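
The conditions in question would be the compiler tearing, fusing or
refetching the plain access.  Roughly speaking (illustration only, with
made-up names, not code from the patch):

	s32 limit = c->target_value;	/* plain read */

	if (limit < MY_THRESHOLD)	/* MY_THRESHOLD is made up */
		do_something(limit);	/* and so is do_something() */

Without READ_ONCE(), the compiler is in principle allowed to reload
c->target_value for the second use, so the check and the use may
observe different values if a concurrent writer updates the field in
between.  Whether any of that can actually happen here is the
interesting part.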

>  write to 0xffffffff99081470 of 4 bytes by task 27532 on cpu 2:
>   pm_qos_update_target+0xa4/0x370
>   pm_qos_set_value at kernel/power/qos.c:78
>   cpu_latency_qos_apply+0x3b/0x50
>   cpu_latency_qos_remove_request+0xea/0x270
>   cpu_latency_qos_release+0x4b/0x70
>   __fput+0x187/0x3d0
>   ____fput+0x1e/0x30
>   task_work_run+0xbf/0x130
>   do_exit+0xa78/0xfd0
>   do_group_exit+0x8b/0x180
>   __x64_sys_exit_group+0x2e/0x30
>   do_syscall_64+0x91/0xb05
>   entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
>  read to 0xffffffff99081470 of 4 bytes by task 0 on cpu 41:
>   cpu_latency_qos_limit+0x1f/0x30
>   pm_qos_read_value at kernel/power/qos.c:55
>   cpuidle_governor_latency_req+0x4f/0x80
>   cpuidle_governor_latency_req at drivers/cpuidle/governor.c:114
>   menu_select+0x6b/0xc29
>   cpuidle_select+0x50/0x70
>   do_idle+0x214/0x280
>   cpu_startup_entry+0x1d/0x1f
>   start_secondary+0x1b2/0x230
>   secondary_startup_64+0xb6/0xc0
>
>  Reported by Kernel Concurrency Sanitizer on:
>  CPU: 41 PID: 0 Comm: swapper/41 Tainted: G L 5.6.0-rc2-next-20200221+ #7
>  Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
>
> The read is outside the pm_qos_lock critical section, which results in
> a data race.

This is purely theoretical AFAICS, so it should be presented as such.

Also, the call traces above don't add much value to the changelog, so
maybe try to explain what the problem is in English.
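
For reference, the code in question boils down to this shape
(simplified sketch of kernel/power/qos.c, not a literal quote):

	/* Writer side: called from pm_qos_update_target() with
	 * pm_qos_lock held. */
	static void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
	{
		c->target_value = value;	/* plain write, under the lock */
	}

	/* Reader side: called by the cpuidle governors from the idle
	 * loop without taking pm_qos_lock, so the read is lockless by
	 * design. */
	s32 pm_qos_read_value(struct pm_qos_constraints *c)
	{
		return c->target_value;		/* plain lockless read */
	}

The patch below only annotates these two accesses with
WRITE_ONCE()/READ_ONCE(); it does not change the locking.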

> Fix it by adding a pair of READ|WRITE_ONCE().
>
> Signed-off-by: Qian Cai <cai@....pw>
> ---
>  kernel/power/qos.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/power/qos.c b/kernel/power/qos.c
> index 32927682bcc4..db0bed2cae26 100644
> --- a/kernel/power/qos.c
> +++ b/kernel/power/qos.c
> @@ -52,7 +52,7 @@
>   */
>  s32 pm_qos_read_value(struct pm_qos_constraints *c)
>  {
> -       return c->target_value;
> +       return READ_ONCE(c->target_value);
>  }
>
>  static int pm_qos_get_value(struct pm_qos_constraints *c)
> @@ -75,7 +75,7 @@ static int pm_qos_get_value(struct pm_qos_constraints *c)
>
>  static void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
>  {
> -       c->target_value = value;
> +       WRITE_ONCE(c->target_value, value);
>  }
>
>  /**
> --
> 1.8.3.1
>
