Date:   Wed, 27 Jan 2021 13:07:14 -0500
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:     paulmck <paulmck@...nel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Matt Mullins <mmullins@...x.us>,
        Ingo Molnar <mingo@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Andrii Nakryiko <andriin@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...omium.org>,
        Kees Cook <keescook@...omium.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Alexey Kardashevskiy <aik@...abs.ru>
Subject: Re: [PATCH v4] tracepoint: Do not fail unregistering a probe due to
 memory failure

On Wed, 27 Jan 2021 13:00:46 -0500 (EST)
Mathieu Desnoyers <mathieu.desnoyers@...icios.com> wrote:

> > Instead of allocating a new array for removing a tracepoint, allocate twice
> > the needed size when adding tracepoints to the array. On removing, use the
> > second half of the allocated array. This removes the need to allocate memory
> > for removing a tracepoint, as the allocation for removals will already have
> > been done.  
> 
> I don't see how this can work reliably. AFAIU, with RCU, approaches
> requiring a pre-allocation of twice the size and swapping to the alternate
> memory area on removal fall apart whenever you remove two or more elements
> back-to-back without waiting for a grace period.

Good point ;-)
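
(To make the failure mode concrete, here is a rough, compilable userspace
model of the flip-flop scheme quoted above. Every name in it is invented
for illustration, there is no real RCU, and the comments mark where the
kernel machinery would sit -- this is a sketch of the idea, not the
tracepoint code.)

    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*probe_fn)(void);

    struct func_array {
            probe_fn *funcs;   /* 2 * capacity slots: two halves        */
            int capacity;      /* size of one half                      */
            int nr_funcs;      /* live entries in the active half       */
            int active;        /* 0 or 1: which half readers see        */
    };

    static probe_fn *active_half(struct func_array *fa)
    {
            return fa->funcs + fa->active * fa->capacity;
    }

    /*
     * Remove @fn by compacting the survivors into the inactive half and
     * flipping which half is active.  No allocation needed: the space
     * was reserved when the probe was added.
     */
    static void remove_probe(struct func_array *fa, probe_fn fn)
    {
            probe_fn *src = active_half(fa);
            probe_fn *dst = fa->funcs + (1 - fa->active) * fa->capacity;
            int i, j = 0;

            for (i = 0; i < fa->nr_funcs; i++)
                    if (src[i] != fn)
                            dst[j++] = src[i];

            fa->nr_funcs = j;
            fa->active = 1 - fa->active; /* would be rcu_assign_pointer() */

            /*
             * Mathieu's point: readers that started before the flip may
             * still be iterating @src.  A second remove_probe() before a
             * grace period elapses writes into @src under their feet.
             */
    }

    static void probe_a(void) { puts("a"); }
    static void probe_b(void) { puts("b"); }

    int main(void)
    {
            struct func_array fa = {
                    .funcs = calloc(8, sizeof(probe_fn)),
                    .capacity = 4, .nr_funcs = 2, .active = 0,
            };
            fa.funcs[0] = probe_a;
            fa.funcs[1] = probe_b;

            remove_probe(&fa, probe_a); /* fine on its own              */
            remove_probe(&fa, probe_b); /* back-to-back: reuses old half */
            printf("%d probes left\n", fa.nr_funcs);
            free(fa.funcs);
            return 0;
    }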

> 
> How is this handled by your scheme?

I believe we can detect this case using the "prio" part of the extra
element, and force an RCU sync if there are back-to-back removals on the
same tracepoint. That case does not happen often, so I'm hoping nobody
will notice the slowdown from these syncs. I'll take a look at this;
a rough sketch of the idea follows.
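
(A minimal sketch of that fix, appended to the illustrative example
above and reusing its made-up names. A bare "stale" flag stands in for
whatever prio-based detection the eventual patch would actually use,
and synchronize_rcu() is again stubbed so it builds standalone.)

    /* Stand-in for synchronize_rcu(): the real call waits out readers. */
    static void synchronize_rcu_stub(void) { }

    static int half_stale; /* previous removal left its old half exposed */

    static void remove_probe_sync(struct func_array *fa, probe_fn fn)
    {
            /*
             * Back-to-back removal: the half we are about to compact
             * into was handed to readers by the previous removal, and
             * some may still be walking it.  Force a grace period
             * before overwriting it.
             */
            if (half_stale)
                    synchronize_rcu_stub();

            remove_probe(fa, fn);
            half_stale = 1; /* the half we just vacated is now suspect */
    }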

Thanks for bringing that up.

-- Steve
