Message-ID: <CANpmjNOGbSsXLqM59HQJ04T4ueMWjQjzpt4QqyKpne=KbHWREg@mail.gmail.com>
Date:   Fri, 10 Mar 2023 00:19:35 +0100
From:   Marco Elver <elver@...gle.com>
To:     paulmck@...nel.org
Cc:     Mark Rutland <mark.rutland@....com>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Alexander Potapenko <glider@...gle.com>,
        Boqun Feng <boqun.feng@...il.com>, kasan-dev@...glegroups.com,
        linux-kernel@...r.kernel.org, Haibo Li <haibo.li@...iatek.com>,
        stable@...r.kernel.org
Subject: Re: [PATCH] kcsan: Avoid READ_ONCE() in read_instrumented_memory()

On Thu, 9 Mar 2023 at 23:08, Paul E. McKenney <paulmck@...nel.org> wrote:
>
> On Thu, Mar 09, 2023 at 11:17:52AM +0100, Marco Elver wrote:
> > Haibo Li reported:
> >
> >  | Unable to handle kernel paging request at virtual address
> >  |   ffffff802a0d8d71
> >  | Mem abort info:
> >  |   ESR = 0x96000021
> >  |   EC = 0x25: DABT (current EL), IL = 32 bits
> >  |   SET = 0, FnV = 0
> >  |   EA = 0, S1PTW = 0
> >  |   FSC = 0x21: alignment fault
> >  | Data abort info:
> >  |   ISV = 0, ISS = 0x00000021
> >  |   CM = 0, WnR = 0
> >  | swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000028352000
> >  | [ffffff802a0d8d71] pgd=180000005fbf9003, p4d=180000005fbf9003,
> >  | pud=180000005fbf9003, pmd=180000005fbe8003, pte=006800002a0d8707
> >  | Internal error: Oops: 96000021 [#1] PREEMPT SMP
> >  | Modules linked in:
> >  | CPU: 2 PID: 45 Comm: kworker/u8:2 Not tainted
> >  |   5.15.78-android13-8-g63561175bbda-dirty #1
> >  | ...
> >  | pc : kcsan_setup_watchpoint+0x26c/0x6bc
> >  | lr : kcsan_setup_watchpoint+0x88/0x6bc
> >  | sp : ffffffc00ab4b7f0
> >  | x29: ffffffc00ab4b800 x28: ffffff80294fe588 x27: 0000000000000001
> >  | x26: 0000000000000019 x25: 0000000000000001 x24: ffffff80294fdb80
> >  | x23: 0000000000000000 x22: ffffffc00a70fb68 x21: ffffff802a0d8d71
> >  | x20: 0000000000000002 x19: 0000000000000000 x18: ffffffc00a9bd060
> >  | x17: 0000000000000001 x16: 0000000000000000 x15: ffffffc00a59f000
> >  | x14: 0000000000000001 x13: 0000000000000000 x12: ffffffc00a70faa0
> >  | x11: 00000000aaaaaaab x10: 0000000000000054 x9 : ffffffc00839adf8
> >  | x8 : ffffffc009b4cf00 x7 : 0000000000000000 x6 : 0000000000000007
> >  | x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffffffc00a70fb70
> >  | x2 : 0005ff802a0d8d71 x1 : 0000000000000000 x0 : 0000000000000000
> >  | Call trace:
> >  |  kcsan_setup_watchpoint+0x26c/0x6bc
> >  |  __tsan_read2+0x1f0/0x234
> >  |  inflate_fast+0x498/0x750
> >  |  zlib_inflate+0x1304/0x2384
> >  |  __gunzip+0x3a0/0x45c
> >  |  gunzip+0x20/0x30
> >  |  unpack_to_rootfs+0x2a8/0x3fc
> >  |  do_populate_rootfs+0xe8/0x11c
> >  |  async_run_entry_fn+0x58/0x1bc
> >  |  process_one_work+0x3ec/0x738
> >  |  worker_thread+0x4c4/0x838
> >  |  kthread+0x20c/0x258
> >  |  ret_from_fork+0x10/0x20
> >  | Code: b8bfc2a8 2a0803f7 14000007 d503249f (78bfc2a8)
> >  | ---[ end trace 613a943cb0a572b6 ]---
> >
> > The reason for this is that in certain arm64 configurations, since
> > e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when
> > CONFIG_LTO=y"), READ_ONCE() may be promoted to a full atomic acquire
> > instruction, which cannot be used on unaligned addresses.
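
For illustration, here is a compile-only user-space sketch of that difference
(hypothetical demo code, not from the kernel tree). On an aarch64 toolchain,
"gcc -O2 -S" shows the plain volatile load becoming an ordinary LDRH, while
the acquire load becomes LDARH, which takes an alignment fault on unaligned
addresses:

#include <stdint.h>

uint16_t plain_load(const volatile uint16_t *p)
{
	/* Plain LDRH: unaligned accesses to normal memory are tolerated. */
	return *p;
}

uint16_t acquire_load(const uint16_t *p)
{
	/*
	 * LDARH (load-acquire): requires 2-byte alignment, otherwise the
	 * CPU raises an alignment fault -- the failure mode seen above.
	 */
	return __atomic_load_n(p, __ATOMIC_ACQUIRE);
}
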
> >
> > Fix it by avoiding READ_ONCE() in read_instrumented_memory(), and simply
> > forcing the compiler to do the required access by casting to the
> > appropriate volatile type. In terms of generated code, this currently
> > only affects architectures that do not use the default READ_ONCE()
> > implementation.
> >
> > The only downside is that we are not guaranteed atomicity of the access
> > itself, although on most architectures a plain load up to machine word
> > size should still be atomic (a fact the default READ_ONCE()
> > implementation itself relies on).
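
For reference, a minimal sketch of the volatile-cast approach described above
(illustrative only -- not necessarily the literal kernel/kcsan/core.c hunk;
types and annotations follow the usual kernel conventions):

static __always_inline u64 read_instrumented_memory(const volatile void *ptr,
						    size_t size)
{
	/*
	 * No READ_ONCE(): a plain volatile load of the right width keeps the
	 * compiler from eliding the access but never becomes an acquire
	 * instruction, even with CONFIG_LTO=y on arm64.
	 */
	switch (size) {
	case 1:  return *(const volatile u8 *)ptr;
	case 2:  return *(const volatile u16 *)ptr;
	case 4:  return *(const volatile u32 *)ptr;
	case 8:  return *(const volatile u64 *)ptr;
	default: return 0; /* Larger sizes: skip value comparison. */
	}
}
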
> >
> > Reported-by: Haibo Li <haibo.li@...iatek.com>
> > Tested-by: Haibo Li <haibo.li@...iatek.com>
> > Cc: <stable@...r.kernel.org> # 5.17+
> > Signed-off-by: Marco Elver <elver@...gle.com>
>
> Queued, thank you!
>
> This one looks like it might want to go into v6.4 rather than later.

Yes, I think that'd be appropriate - thank you!

Thanks,
-- Marco
