Message-ID: <20201103181200.5h4pp4p3issazgpd@kafai-mbp.dhcp.thefacebook.com>
Date: Tue, 3 Nov 2020 10:12:00 -0800
From: Martin KaFai Lau <kafai@...com>
To: Alexander Duyck <alexander.duyck@...il.com>
CC: bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Kernel Team <kernel-team@...com>,
Netdev <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>,
Lawrence Brakmo <brakmo@...com>,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
<alexanderduyck@...com>
Subject: Re: [bpf-next PATCH v2 5/5] selftest/bpf: Use global variables
instead of maps for test_tcpbpf_kern
On Tue, Nov 03, 2020 at 07:42:46AM -0800, Alexander Duyck wrote:
> On Mon, Nov 2, 2020 at 5:26 PM Martin KaFai Lau <kafai@...com> wrote:
> >
> > On Sat, Oct 31, 2020 at 11:52:37AM -0700, Alexander Duyck wrote:
> > [ ... ]
> >
> > > +struct tcpbpf_globals global = { 0 };
> > > int _version SEC("version") = 1;
> > >
> > > SEC("sockops")
> > > @@ -105,29 +72,15 @@ int bpf_testcb(struct bpf_sock_ops *skops)
> > >
> > > op = (int) skops->op;
> > >
> > > - update_event_map(op);
> > > + global.event_map |= (1 << op);
> > >
> > > switch (op) {
> > > case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
> > > /* Test failure to set largest cb flag (assumes not defined) */
> > > - bad_call_rv = bpf_sock_ops_cb_flags_set(skops, 0x80);
> > > + global.bad_cb_test_rv = bpf_sock_ops_cb_flags_set(skops, 0x80);
> > > /* Set callback */
> > > - good_call_rv = bpf_sock_ops_cb_flags_set(skops,
> > > + global.good_cb_test_rv = bpf_sock_ops_cb_flags_set(skops,
> > > BPF_SOCK_OPS_STATE_CB_FLAG);
> > > - /* Update results */
> > > - {
> > > - __u32 key = 0;
> > > - struct tcpbpf_globals g, *gp;
> > > -
> > > - gp = bpf_map_lookup_elem(&global_map, &key);
> > > - if (!gp)
> > > - break;
> > > - g = *gp;
> > > - g.bad_cb_test_rv = bad_call_rv;
> > > - g.good_cb_test_rv = good_call_rv;
> > > - bpf_map_update_elem(&global_map, &key, &g,
> > > - BPF_ANY);
> > > - }
> > > break;
> > > case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
> > > skops->sk_txhash = 0x12345f;
> > > @@ -143,10 +96,8 @@ int bpf_testcb(struct bpf_sock_ops *skops)
> > >
> > > thdr = (struct tcphdr *)(header + offset);
> > > v = thdr->syn;
> > > - __u32 key = 1;
> > >
> > > - bpf_map_update_elem(&sockopt_results, &key, &v,
> > > - BPF_ANY);
> > > + global.tcp_saved_syn = v;
> > > }
> > > }
> > > break;
> > > @@ -156,25 +107,16 @@ int bpf_testcb(struct bpf_sock_ops *skops)
> > > break;
> > > case BPF_SOCK_OPS_STATE_CB:
> > > if (skops->args[1] == BPF_TCP_CLOSE) {
> > > - __u32 key = 0;
> > > - struct tcpbpf_globals g, *gp;
> > > -
> > > - gp = bpf_map_lookup_elem(&global_map, &key);
> > > - if (!gp)
> > > - break;
> > > - g = *gp;
> > > if (skops->args[0] == BPF_TCP_LISTEN) {
> > > - g.num_listen++;
> > > + global.num_listen++;
> > > } else {
> > > - g.total_retrans = skops->total_retrans;
> > > - g.data_segs_in = skops->data_segs_in;
> > > - g.data_segs_out = skops->data_segs_out;
> > > - g.bytes_received = skops->bytes_received;
> > > - g.bytes_acked = skops->bytes_acked;
> > > + global.total_retrans = skops->total_retrans;
> > > + global.data_segs_in = skops->data_segs_in;
> > > + global.data_segs_out = skops->data_segs_out;
> > > + global.bytes_received = skops->bytes_received;
> > > + global.bytes_acked = skops->bytes_acked;
> > > }
> > > - g.num_close_events++;
> > > - bpf_map_update_elem(&global_map, &key, &g,
> > > - BPF_ANY);
> > It is interesting that there is no race in the original
> > "g.num_close_events++" followed by the bpf_map_update_elem().
> > It seems quite fragile though: a concurrent callback doing the same
> > lookup/increment/update on its own local copy would silently lose
> > a close event.
>
> How would it race with the current code though? At this point we are
> controlling the sockets in a single thread. As such, the close events
> should already be serialized, shouldn't they? This may have been a
> problem with the old code, but even then it was only two sockets, so I
> don't think there was much risk of them racing against each other,
> since the two sockets were linked anyway.
>
> > > + global.num_close_events++;
> > There is __sync_fetch_and_add().
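> > A minimal sketch (untested) of the atomic version of that increment:
> >
> > 	/* atomic read-modify-write: a concurrent STATE_CB on another
> > 	 * CPU cannot lose a close event
> > 	 */
> > 	__sync_fetch_and_add(&global.num_close_events, 1);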
> >
> > Not sure about the global.event_map though; maybe use an individual
> > variable for each _CB (sketch below). Thoughts?
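> >
> > e.g. a minimal sketch (untested, hypothetical names; one counter per
> > callback instead of the shared "|=" read-modify-write on the bitmap):
> >
> > 	__u32 num_active_estab_cb;
> > 	__u32 num_state_cb;
> >
> > 	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
> > 		__sync_fetch_and_add(&num_active_estab_cb, 1);
> > 		break;
> > 	case BPF_SOCK_OPS_STATE_CB:
> > 		__sync_fetch_and_add(&num_state_cb, 1);
> > 		break;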
>
> I think this may be overkill for what we actually need. Since we are
> closing the sockets in a single-threaded application, there isn't much
> risk of the sockets all racing against each other on close, is there?
Makes sense.
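FWIW, the userspace side can then read these globals straight out of the
skeleton, e.g. (sketch only; the exact type and field names come from the
generated test_tcpbpf_kern.skel.h):

	#include <stdio.h>
	#include "test_tcpbpf_kern.skel.h"

	int main(void)
	{
		struct test_tcpbpf_kern *skel;

		skel = test_tcpbpf_kern__open_and_load();
		if (!skel)
			return 1;

		/* ... attach to the cgroup, run the connections ... */

		/* zero-initialized globals land in .bss */
		printf("num_close_events: %u\n",
		       skel->bss->global.num_close_events);

		test_tcpbpf_kern__destroy(skel);
		return 0;
	}
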
Acked-by: Martin KaFai Lau <kafai@...com>