Message-ID: <20200528225427.GA225299@google.com>
Date:   Thu, 28 May 2020 18:54:27 -0400
From:   Joel Fernandes <joel@...lfernandes.org>
To:     Andrii Nakryiko <andriin@...com>
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org, paulmck@...nel.org,
        stern@...land.harvard.edu, parri.andrea@...il.com, will@...nel.org,
        peterz@...radead.org, boqun.feng@...il.com, npiggin@...il.com,
        dhowells@...hat.com, j.alglave@....ac.uk, luc.maranget@...ia.fr,
        akiyks@...il.com, dlustig@...dia.com, linux-kernel@...r.kernel.org,
        linux-arch@...r.kernel.org, andrii.nakryiko@...il.com,
        kernel-team@...com
Subject: Re: [PATCH linux-rcu] docs/litmus-tests: add BPF ringbuf MPSC litmus
 tests

Hello Andrii,
This is quite exciting. Some comments below:

On Wed, May 27, 2020 at 11:24:08PM -0700, Andrii Nakryiko wrote:
[...]
> diff --git a/Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+bounded.litmus b/Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+bounded.litmus
> new file mode 100644
> index 000000000000..558f054fb0b4
> --- /dev/null
> +++ b/Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+bounded.litmus
> @@ -0,0 +1,91 @@
> +C bpf-rb+1p1c+bounded
> +
> +(*
> + * Result: Always
> + *
> + * This litmus test validates BPF ring buffer implementation under the
> + * following assumptions:
> + * - 1 producer;
> + * - 1 consumer;
> + * - ring buffer has capacity for only 1 record.
> + *
> + * Expectations:
> + * - 1 record pushed into ring buffer;
> + * - 0 or 1 element is consumed.
> + * - no failures.
> + *)
> +
> +{
> +	atomic_t dropped;
> +}
> +
> +P0(int *lenFail, int *len1, int *cx, int *px)
> +{
> +	int *rLenPtr;
> +	int rLen;
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +
> +	rFail = 0;
> +
> +	rCx = smp_load_acquire(cx);
> +	rPx = smp_load_acquire(px);

Is it possible for you to add some comments about which ACQUIRE is paired
with which RELEASE? And, in general, more comments on the reason for each
memory barrier and what it pairs with. In the kernel sources, barriers need
a comment anyway.
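For example, something like this (just a sketch of the kind of annotation I
mean; the pairings are my reading of the code, please correct if wrong):

	/* in P0 (consumer) */
	rPx = smp_load_acquire(px);	/* pairs with smp_store_release(px, rPx + 1) in P1 */
	...
	/* in P1 (producer) */
	rCx = smp_load_acquire(cx);	/* pairs with smp_store_release(cx, rCx) in P0 */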

> +	if (rCx < rPx) {
> +		if (rCx == 0) {
> +			rLenPtr = len1;
> +		} else {
> +			rLenPtr = lenFail;
> +			rFail = 1;
> +		}
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +}
> +
> +P1(int *lenFail, int *len1, spinlock_t *rb_lock, int *px, int *cx, atomic_t *dropped)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +
> +	rCx = smp_load_acquire(cx);
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx - rCx >= 1) {
> +		atomic_inc(dropped);

Why does 'dropped' need to be atomic if you are always incrementing under a
lock?
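In other words, wouldn't a plain increment under rb_lock be enough? Something
like (just a sketch, assuming 'dropped' is then declared as a plain int
rather than atomic_t):

	spin_lock(rb_lock);
	rPx = *px;
	if (rPx - rCx >= 1) {
		*dropped = *dropped + 1;	/* rb_lock serializes producers, so no atomic needed? */
		spin_unlock(rb_lock);
	}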

> +		spin_unlock(rb_lock);
> +	} else {
> +		if (rPx == 0) {
> +			rLenPtr = len1;
> +		} else {
> +			rLenPtr = lenFail;
> +			rFail = 1;
> +		}
> +
> +		*rLenPtr = -1;

Could you clarify why the length is temporarily set to -1 here? Thanks.
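Is -1 meant to model a "reserved but not yet committed" record? That is my
guess from reading the consumer side (just my interpretation, may be off):

	rLen = smp_load_acquire(rLenPtr);
	if (rLen == 0) {
		rFail = 1;		/* should be impossible once px is visible */
	} else if (rLen == 1) {
		...			/* committed record, consume it */
	}
	/* rLen == -1: reserved but not committed yet, so the consumer skips it */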

> +		smp_store_release(px, rPx + 1);
> +
> +		spin_unlock(rb_lock);
> +
> +		smp_store_release(rLenPtr, 1);
> +	}
> +}
> +
> +exists (
> +	0:rFail=0 /\ 1:rFail=0
> +	/\
> +	(
> +		(dropped=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
> +	)
> +)
> diff --git a/Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+unbound.litmus b/Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+unbound.litmus
> new file mode 100644
> index 000000000000..7ab5d0e6e49f
> --- /dev/null
> +++ b/Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+unbound.litmus

I wish there were a way to pass arguments to litmus tests; then perhaps some
of these tests could have been condensed. :-)

> diff --git a/Documentation/litmus-tests/bpf-rb/bpf-rb+2p1c+bounded.litmus b/Documentation/litmus-tests/bpf-rb/bpf-rb+2p1c+bounded.litmus
> new file mode 100644
> index 000000000000..83f80328c92b
> --- /dev/null
> +++ b/Documentation/litmus-tests/bpf-rb/bpf-rb+2p1c+bounded.litmus
[...]
> +P0(int *lenFail, int *len1, int *cx, int *px)
> +{
> +	int *rLenPtr;
> +	int rLen;
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +
> +	rFail = 0;
> +
> +	rCx = smp_load_acquire(cx);
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0) {
> +			rLenPtr = len1;
> +		} else if (rCx == 1) {
> +			rLenPtr = len1;
> +		} else {
> +			rLenPtr = lenFail;
> +			rFail = 1;
> +		}
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +
> +	rPx = smp_load_acquire(px);
> +	if (rCx < rPx) {
> +		if (rCx == 0) {
> +			rLenPtr = len1;
> +		} else if (rCx == 1) {
> +			rLenPtr = len1;
> +		} else {
> +			rLenPtr = lenFail;
> +			rFail = 1;
> +		}
> +
> +		rLen = smp_load_acquire(rLenPtr);
> +		if (rLen == 0) {
> +			rFail = 1;
> +		} else if (rLen == 1) {
> +			rCx = rCx + 1;
> +			smp_store_release(cx, rCx);
> +		}
> +	}
> +}
> +
> +P1(int *lenFail, int *len1, spinlock_t *rb_lock, int *px, int *cx, atomic_t *dropped)
> +{
> +	int rPx;
> +	int rCx;
> +	int rFail;
> +	int *rLenPtr;
> +
> +	rFail = 0;
> +	rLenPtr = lenFail;
> +
> +	rCx = smp_load_acquire(cx);
> +	spin_lock(rb_lock);
> +
> +	rPx = *px;
> +	if (rPx - rCx >= 1) {
> +		atomic_inc(dropped);
> +		spin_unlock(rb_lock);
> +	} else {
> +		if (rPx == 0) {
> +			rLenPtr = len1;
> +		} else if (rPx == 1) {
> +			rLenPtr = len1;
> +		} else {
> +			rLenPtr = lenFail;
> +			rFail = 1;
> +		}
> +
> +		*rLenPtr = -1;
> +		smp_store_release(px, rPx + 1);
> +
> +		spin_unlock(rb_lock);
> +
> +		smp_store_release(rLenPtr, 1);

I ran a test replacing the last two statements above with the following, and
the test still passes:

                spin_unlock(rb_lock);
                WRITE_ONCE(*rLenPtr, 1);

Wouldn't you expect the test to catch an issue? The spin_unlock is already a
RELEASE barrier.
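For reference, my reading of the ordering in that variant (the comments are
just my interpretation, not a claim about the implementation):

	*rLenPtr = -1;			/* mark record as reserved/busy */
	smp_store_release(px, rPx + 1);	/* publish new producer position */
	spin_unlock(rb_lock);		/* RELEASE: already orders both stores above */
	WRITE_ONCE(*rLenPtr, 1);	/* commit; what extra ordering does smp_store_release() buy here? */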

Suggestion: the patch is hard to review because it is huge; it would be good
to split it into 4 patches, one per test. But up to you :)

thanks,

 - Joel

[...]
