Message-ID: <20180812174116.GA54491@rdna-mbp.dhcp.thefacebook.com>
Date:   Sun, 12 Aug 2018 10:41:17 -0700
From:   Andrey Ignatov <rdna@...com>
To:     Yonghong Song <yhs@...com>
CC:     <netdev@...r.kernel.org>, <ast@...nel.org>, <daniel@...earbox.net>,
        <tj@...nel.org>, <guro@...com>, <kernel-team@...com>
Subject: Re: [PATCH bpf-next 4/4] selftests/bpf: Selftest for
 bpf_skb_ancestor_cgroup_id

Yonghong Song <yhs@...com> [Sat, 2018-08-11 23:59 -0700]:
> 
> 
> On 8/10/18 10:35 PM, Andrey Ignatov wrote:
> > Add selftests for bpf_skb_ancestor_cgroup_id helper.
> > 
> > test_skb_cgroup_id.sh prepares testing interface and adds tc qdisc and
> > filter for it using BPF object compiled from test_skb_cgroup_id_kern.c
> > program.
> > 
> > BPF program in test_skb_cgroup_id_kern.c gets ancestor cgroup id using
> > the new helper at different levels of cgroup hierarchy that skb belongs
> > to, including root level and non-existing level, and saves it to the map
> > where the key is the level of corresponding cgroup and the value is its
> > id.
> > 
> > To trigger BPF program, user space program test_skb_cgroup_id_user is
> > run. It adds itself into testing cgroup and sends UDP datagram to
> > link-local multicast address of testing interface. Then it reads cgroup
> > ids saved in kernel for different levels from the BPF map and compares
> > them with those in user space. They must be equal for every level of
> > ancestry.
> > 
> > Example of run:
> >    # ./test_skb_cgroup_id.sh
> >    Wait for testing link-local IP to become available ... OK
> >    Note: 8 bytes struct bpf_elf_map fixup performed due to size mismatch!
> >    [PASS]
> 
> I am not able to run the test on my FC27 based VM with the latest bpf-next
> and the patch set.
> 
> [yhs@...alhost bpf]$ sudo ./test_skb_cgroup_id.sh
> Wait for testing link-local IP to become available .....ERROR: Timeout
> waiting for test IP to become available.
> [yhs@...alhost bpf]$
> 
> I am able to run test_sock_addr.sh successfully.
> $ sudo ./test_sock_addr.sh
> Wait for testing IPv4/IPv6 to become available .
> .. OK
> Test case: bind4: load prog with wrong expected attach type .. [PASS]
> Test case: bind4: attach prog with wrong attach type .. [PASS]
> ...
> Test case: sendmsg6: deny call .. [PASS]
> Summary: 27 PASSED, 0 FAILED
> 
> Maybe some issues in this addr ff02::1%${TEST_IF}?

Thank you for checking it, Yonghong!

I was able to reproduce it on a host different from the one where I
originally tested.

The problem is that ping fails immediately while the link-local IPv6
address is still tentative, so all MAX_PING_TRIES attempts are burned
very quickly, without giving the address a chance to pass DAD and
become ready.
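
(For illustration, while DAD is still running the address carries the
"tentative" flag, which can be checked with something like:

    # prints a line while the link-local address is still tentative
    ip -6 addr show dev ${TEST_IF} scope link | grep tentative

and ping to ff02::1%${TEST_IF} doesn't get replies until the flag is
gone.)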

On my original VM the IPv6 address became ready much faster, so even
those 5 tries without a sleep between them were enough.

The fix is very simple: add `sleep 1` between iterations so that there
is enough time for IPv6 to pass DAD.
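
I.e. the loop in wait_for_ip() will look roughly like this (just a
sketch, the actual v2 may differ slightly):

    for _i in $(seq ${MAX_PING_TRIES}); do
            echo -n "."
            if ping -6 -q -c 1 -W 1 ff02::1%${TEST_IF} >/dev/null 2>&1; then
                    echo " OK"
                    return
            fi
            # give DAD time to finish before the next try
            sleep 1
    done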

I'll send v2 with the fix.


> > Signed-off-by: Andrey Ignatov <rdna@...com>
> > ---
> >   tools/testing/selftests/bpf/Makefile          |   9 +-
> >   .../selftests/bpf/test_skb_cgroup_id.sh       |  61 ++++++
> >   .../selftests/bpf/test_skb_cgroup_id_kern.c   |  47 +++++
> >   .../selftests/bpf/test_skb_cgroup_id_user.c   | 187 ++++++++++++++++++
> >   4 files changed, 301 insertions(+), 3 deletions(-)
> >   create mode 100755 tools/testing/selftests/bpf/test_skb_cgroup_id.sh
> >   create mode 100644 tools/testing/selftests/bpf/test_skb_cgroup_id_kern.c
> >   create mode 100644 tools/testing/selftests/bpf/test_skb_cgroup_id_user.c
> > 
> > diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> > index daed162043c2..fff7fb1285fc 100644
> > --- a/tools/testing/selftests/bpf/Makefile
> > +++ b/tools/testing/selftests/bpf/Makefile
> > @@ -34,7 +34,8 @@ TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test
> >   	test_btf_haskv.o test_btf_nokv.o test_sockmap_kern.o test_tunnel_kern.o \
> >   	test_get_stack_rawtp.o test_sockmap_kern.o test_sockhash_kern.o \
> >   	test_lwt_seg6local.o sendmsg4_prog.o sendmsg6_prog.o test_lirc_mode2_kern.o \
> > -	get_cgroup_id_kern.o socket_cookie_prog.o test_select_reuseport_kern.o
> > +	get_cgroup_id_kern.o socket_cookie_prog.o test_select_reuseport_kern.o \
> > +	test_skb_cgroup_id_kern.o
> >   # Order correspond to 'make run_tests' order
> >   TEST_PROGS := test_kmod.sh \
> > @@ -45,10 +46,11 @@ TEST_PROGS := test_kmod.sh \
> >   	test_sock_addr.sh \
> >   	test_tunnel.sh \
> >   	test_lwt_seg6local.sh \
> > -	test_lirc_mode2.sh
> > +	test_lirc_mode2.sh \
> > +	test_skb_cgroup_id.sh
> >   # Compile but not part of 'make run_tests'
> > -TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr
> > +TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr test_skb_cgroup_id_user
> >   include ../lib.mk
> > @@ -59,6 +61,7 @@ $(TEST_GEN_PROGS): $(BPFOBJ)
> >   $(TEST_GEN_PROGS_EXTENDED): $(OUTPUT)/libbpf.a
> >   $(OUTPUT)/test_dev_cgroup: cgroup_helpers.c
> > +$(OUTPUT)/test_skb_cgroup_id_user: cgroup_helpers.c
> >   $(OUTPUT)/test_sock: cgroup_helpers.c
> >   $(OUTPUT)/test_sock_addr: cgroup_helpers.c
> >   $(OUTPUT)/test_socket_cookie: cgroup_helpers.c
> > diff --git a/tools/testing/selftests/bpf/test_skb_cgroup_id.sh b/tools/testing/selftests/bpf/test_skb_cgroup_id.sh
> > new file mode 100755
> > index 000000000000..b75e9b52f06f
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/test_skb_cgroup_id.sh
> > @@ -0,0 +1,61 @@
> > +#!/bin/sh
> > +# SPDX-License-Identifier: GPL-2.0
> > +# Copyright (c) 2018 Facebook
> > +
> > +set -eu
> > +
> > +wait_for_ip()
> > +{
> > +	local _i
> > +	echo -n "Wait for testing link-local IP to become available "
> > +	for _i in $(seq ${MAX_PING_TRIES}); do
> > +		echo -n "."
> > +		if ping -6 -q -c 1 -W 1 ff02::1%${TEST_IF} >/dev/null 2>&1; then
> > +			echo " OK"
> > +			return
> > +		fi
> > +	done
> > +	echo 1>&2 "ERROR: Timeout waiting for test IP to become available."
> > +	exit 1
> > +}
> > +
> > +setup()
> > +{
> > +	# Create testing interfaces not to interfere with current environment.
> > +	ip link add dev ${TEST_IF} type veth peer name ${TEST_IF_PEER}
> > +	ip link set ${TEST_IF} up
> > +	ip link set ${TEST_IF_PEER} up
> > +
> > +	wait_for_ip
> > +
> > +	tc qdisc add dev ${TEST_IF} clsact
> > +	tc filter add dev ${TEST_IF} egress bpf obj ${BPF_PROG_OBJ} \
> > +		sec ${BPF_PROG_SECTION} da
> > +
> > +	BPF_PROG_ID=$(tc filter show dev ${TEST_IF} egress | \
> > +			awk '/ id / {sub(/.* id /, "", $0); print($1)}')
> > +}
> > +
> > +cleanup()
> > +{
> > +	ip link del ${TEST_IF} 2>/dev/null || :
> > +	ip link del ${TEST_IF_PEER} 2>/dev/null || :
> > +}
> > +
> > +main()
> > +{
> > +	trap cleanup EXIT 2 3 6 15
> > +	setup
> > +	${PROG} ${TEST_IF} ${BPF_PROG_ID}
> > +}
> > +
> > +DIR=$(dirname $0)
> > +TEST_IF="test_cgid_1"
> > +TEST_IF_PEER="test_cgid_2"
> > +MAX_PING_TRIES=5
> > +BPF_PROG_OBJ="${DIR}/test_skb_cgroup_id_kern.o"
> > +BPF_PROG_SECTION="cgroup_id_logger"
> > +BPF_PROG_ID=0
> > +PROG="${DIR}/test_skb_cgroup_id_user"
> > +
> > +main
> > diff --git a/tools/testing/selftests/bpf/test_skb_cgroup_id_kern.c b/tools/testing/selftests/bpf/test_skb_cgroup_id_kern.c
> > new file mode 100644
> > index 000000000000..68cf9829f5a7
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/test_skb_cgroup_id_kern.c
> > @@ -0,0 +1,47 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +// Copyright (c) 2018 Facebook
> > +
> > +#include <linux/bpf.h>
> > +#include <linux/pkt_cls.h>
> > +
> > +#include <string.h>
> > +
> > +#include "bpf_helpers.h"
> > +
> > +#define NUM_CGROUP_LEVELS	4
> > +
> > +struct bpf_map_def SEC("maps") cgroup_ids = {
> > +	.type = BPF_MAP_TYPE_ARRAY,
> > +	.key_size = sizeof(__u32),
> > +	.value_size = sizeof(__u64),
> > +	.max_entries = NUM_CGROUP_LEVELS,
> > +};
> > +
> > +static __always_inline void log_nth_level(struct __sk_buff *skb, __u32 level)
> > +{
> > +	__u64 id;
> > +
> > +	/* [1] &level passed to external function that may change it, it's
> > +	 *     incompatible with loop unroll.
> > +	 */
> > +	id = bpf_skb_ancestor_cgroup_id(skb, level);
> > +	bpf_map_update_elem(&cgroup_ids, &level, &id, 0);
> > +}
> > +
> > +SEC("cgroup_id_logger")
> > +int log_cgroup_id(struct __sk_buff *skb)
> > +{
> > +	/* Loop unroll can't be used here due to [1]. Unrolling manually.
> > +	 * Number of calls should be in sync with NUM_CGROUP_LEVELS.
> > +	 */
> > +	log_nth_level(skb, 0);
> > +	log_nth_level(skb, 1);
> > +	log_nth_level(skb, 2);
> > +	log_nth_level(skb, 3);
> > +
> > +	return TC_ACT_OK;
> > +}
> > +
> > +int _version SEC("version") = 1;
> > +
> > +char _license[] SEC("license") = "GPL";
> > diff --git a/tools/testing/selftests/bpf/test_skb_cgroup_id_user.c b/tools/testing/selftests/bpf/test_skb_cgroup_id_user.c
> > new file mode 100644
> > index 000000000000..c121cc59f314
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/test_skb_cgroup_id_user.c
> > @@ -0,0 +1,187 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +// Copyright (c) 2018 Facebook
> > +
> > +#include <stdlib.h>
> > +#include <string.h>
> > +#include <unistd.h>
> > +
> > +#include <arpa/inet.h>
> > +#include <net/if.h>
> > +#include <netinet/in.h>
> > +#include <sys/socket.h>
> > +#include <sys/types.h>
> > +
> > +
> > +#include <bpf/bpf.h>
> > +#include <bpf/libbpf.h>
> > +
> > +#include "bpf_rlimit.h"
> > +#include "cgroup_helpers.h"
> > +
> > +#define CGROUP_PATH		"/skb_cgroup_test"
> > +#define NUM_CGROUP_LEVELS	4
> > +
> > +/* RFC 4291, Section 2.7.1 */
> > +#define LINKLOCAL_MULTICAST	"ff02::1"
> > +
> > +static int mk_dst_addr(const char *ip, const char *iface,
> > +		       struct sockaddr_in6 *dst)
> > +{
> > +	memset(dst, 0, sizeof(*dst));
> > +
> > +	dst->sin6_family = AF_INET6;
> > +	dst->sin6_port = htons(1025);
> > +
> > +	if (inet_pton(AF_INET6, ip, &dst->sin6_addr) != 1) {
> > +		log_err("Invalid IPv6: %s", ip);
> > +		return -1;
> > +	}
> > +
> > +	dst->sin6_scope_id = if_nametoindex(iface);
> > +	if (!dst->sin6_scope_id) {
> > +		log_err("Failed to get index of iface: %s", iface);
> > +		return -1;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int send_packet(const char *iface)
> > +{
> > +	struct sockaddr_in6 dst;
> > +	char msg[] = "msg";
> > +	int err = 0;
> > +	int fd = -1;
> > +
> > +	if (mk_dst_addr(LINKLOCAL_MULTICAST, iface, &dst))
> > +		goto err;
> > +
> > +	fd = socket(AF_INET6, SOCK_DGRAM, 0);
> > +	if (fd == -1) {
> > +		log_err("Failed to create UDP socket");
> > +		goto err;
> > +	}
> > +
> > +	if (sendto(fd, &msg, sizeof(msg), 0, (const struct sockaddr *)&dst,
> > +		   sizeof(dst)) == -1) {
> > +		log_err("Failed to send datagram");
> > +		goto err;
> > +	}
> > +
> > +	goto out;
> > +err:
> > +	err = -1;
> > +out:
> > +	if (fd >= 0)
> > +		close(fd);
> > +	return err;
> > +}
> > +
> > +int get_map_fd_by_prog_id(int prog_id)
> > +{
> > +	struct bpf_prog_info info = {};
> > +	__u32 info_len = sizeof(info);
> > +	__u32 map_ids[1];
> > +	int prog_fd = -1;
> > +	int map_fd = -1;
> > +
> > +	prog_fd = bpf_prog_get_fd_by_id(prog_id);
> > +	if (prog_fd < 0) {
> > +		log_err("Failed to get fd by prog id %d", prog_id);
> > +		goto err;
> > +	}
> > +
> > +	info.nr_map_ids = 1;
> > +	info.map_ids = (__u64) (unsigned long) map_ids;
> > +
> > +	if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len)) {
> > +		log_err("Failed to get info by prog fd %d", prog_fd);
> > +		goto err;
> > +	}
> > +
> > +	if (!info.nr_map_ids) {
> > +		log_err("No maps found for prog fd %d", prog_fd);
> > +		goto err;
> > +	}
> > +
> > +	map_fd = bpf_map_get_fd_by_id(map_ids[0]);
> > +	if (map_fd < 0)
> > +		log_err("Failed to get fd by map id %d", map_ids[0]);
> > +err:
> > +	if (prog_fd >= 0)
> > +		close(prog_fd);
> > +	return map_fd;
> > +}
> > +
> > +int check_ancestor_cgroup_ids(int prog_id)
> > +{
> > +	__u64 actual_ids[NUM_CGROUP_LEVELS], expected_ids[NUM_CGROUP_LEVELS];
> > +	__u32 level;
> > +	int err = 0;
> > +	int map_fd;
> > +
> > +	expected_ids[0] = 0x100000001;	/* root cgroup */
> > +	expected_ids[1] = get_cgroup_id("");
> > +	expected_ids[2] = get_cgroup_id(CGROUP_PATH);
> > +	expected_ids[3] = 0; /* non-existent cgroup */
> > +
> > +	map_fd = get_map_fd_by_prog_id(prog_id);
> > +	if (map_fd < 0)
> > +		goto err;
> > +
> > +	for (level = 0; level < NUM_CGROUP_LEVELS; ++level) {
> > +		if (bpf_map_lookup_elem(map_fd, &level, &actual_ids[level])) {
> > +			log_err("Failed to lookup key %d", level);
> > +			goto err;
> > +		}
> > +		if (actual_ids[level] != expected_ids[level]) {
> > +			log_err("%llx (actual) != %llx (expected), level: %u\n",
> > +				actual_ids[level], expected_ids[level], level);
> > +			goto err;
> > +		}
> > +	}
> > +
> > +	goto out;
> > +err:
> > +	err = -1;
> > +out:
> > +	if (map_fd >= 0)
> > +		close(map_fd);
> > +	return err;
> > +}
> > +
> > +int main(int argc, char **argv)
> > +{
> > +	int cgfd = -1;
> > +	int err = 0;
> > +
> > +	if (argc < 3) {
> > +		fprintf(stderr, "Usage: %s iface prog_id\n", argv[0]);
> > +		exit(EXIT_FAILURE);
> > +	}
> > +
> > +	if (setup_cgroup_environment())
> > +		goto err;
> > +
> > +	cgfd = create_and_get_cgroup(CGROUP_PATH);
> > +	if (!cgfd)
> > +		goto err;
> > +
> > +	if (join_cgroup(CGROUP_PATH))
> > +		goto err;
> > +
> > +	if (send_packet(argv[1]))
> > +		goto err;
> > +
> > +	if (check_ancestor_cgroup_ids(atoi(argv[2])))
> > +		goto err;
> > +
> > +	goto out;
> > +err:
> > +	err = -1;
> > +out:
> > +	close(cgfd);
> > +	cleanup_cgroup_environment();
> > +	printf("[%s]\n", err ? "FAIL" : "PASS");
> > +	return err;
> > +}
> > 

-- 
Andrey Ignatov
