Date:   Tue, 23 Aug 2022 14:51:56 -0700
From:   Kuniyuki Iwashima <kuniyu@...zon.com>
To:     <daniel@...earbox.net>
CC:     <andrii@...nel.org>, <ast@...nel.org>, <bpf@...r.kernel.org>,
        <kuni1840@...il.com>, <kuniyu@...zon.com>, <netdev@...r.kernel.org>
Subject: Re: [PATCH v2 bpf] bpf: Fix a data-race around bpf_jit_limit.

From:   Daniel Borkmann <daniel@...earbox.net>
Date:   Tue, 23 Aug 2022 23:20:29 +0200
> On 8/23/22 8:12 PM, Kuniyuki Iwashima wrote:
> > While bpf_jit_limit is being read, it can be changed concurrently.
> > Thus, we need to add READ_ONCE() to its reader.
> 
> For the sake of a better/clearer commit message, please also provide details
> about the WRITE_ONCE() pairing that this READ_ONCE() targets. As far as I can
> see, that would be in __do_proc_doulongvec_minmax(). For your 2nd sentence
> above, please also include load tearing as the main motivation for your fix.

I'll add a better description.
Thank you!
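
For reference, a minimal userspace sketch (not kernel code) of the
READ_ONCE()/WRITE_ONCE() pairing and the load-tearing issue discussed
above. The names, the simplified limit check, and the volatile-cast
macros are illustrative assumptions only; in the kernel the reader is
bpf_jit_charge_modmem(), and the writer would be the sysctl path Daniel
mentions.

/* build with: gcc -std=gnu11 -o sketch sketch.c */
#include <stdatomic.h>
#include <stdio.h>

/*
 * Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE():
 * the volatile access forces the compiler to emit exactly one load or
 * store, so the value cannot be torn or silently re-read.
 */
#define READ_ONCE(x)		(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))

static long jit_limit = 1 << 20;	/* plays the role of bpf_jit_limit   */
static atomic_long jit_current;		/* plays the role of bpf_jit_current */

/* Writer side (think: sysctl handler storing a new limit in one go). */
static void set_limit(long new_limit)
{
	WRITE_ONCE(jit_limit, new_limit);
}

/* Reader side: load the limit exactly once while charging memory. */
static int charge(long size)
{
	if (atomic_fetch_add(&jit_current, size) + size > READ_ONCE(jit_limit)) {
		atomic_fetch_sub(&jit_current, size);
		return -1;
	}
	return 0;
}

int main(void)
{
	set_limit(2 << 20);
	printf("charge 4096 -> %d\n", charge(4096));
	return 0;
}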

> 
> > Fixes: ede95a63b5e8 ("bpf: add bpf_jit_limit knob to restrict unpriv allocations")
> > Signed-off-by: Kuniyuki Iwashima <kuniyu@...zon.com>
> > ---
> > v2:
> >    * Drop other 3 patches (No change for this patch)
> > 
> > v1: https://lore.kernel.org/bpf/20220818042339.82992-1-kuniyu@amazon.com/
> > ---
> >   kernel/bpf/core.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index c1e10d088dbb..3d9eb3ae334c 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -971,7 +971,7 @@ pure_initcall(bpf_jit_charge_init);
> >   
> >   int bpf_jit_charge_modmem(u32 size)
> >   {
> > -	if (atomic_long_add_return(size, &bpf_jit_current) > bpf_jit_limit) {
> > +	if (atomic_long_add_return(size, &bpf_jit_current) > READ_ONCE(bpf_jit_limit)) {
> >   		if (!bpf_capable()) {
> >   			atomic_long_sub(size, &bpf_jit_current);
> >   			return -EPERM;
> > 
