Message-ID: <CAKH8qBvQPGLqZySq7_ghPa+odGuOTSXBq1Mfa=Y_-sVCnArUBA@mail.gmail.com>
Date: Thu, 25 Apr 2019 14:31:37 -0700
From: Stanislav Fomichev <sdf@...gle.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: Netdev <netdev@...r.kernel.org>, bpf@...r.kernel.org,
David Miller <davem@...emloft.net>,
Alexei Starovoitov <ast@...nel.org>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Quentin Monnet <quentin.monnet@...ronome.com>,
Jann Horn <jannh@...gle.com>
Subject: Re: [PATCH bpf-next v4 1/2] bpf: support BPF_PROG_QUERY for
BPF_FLOW_DISSECTOR attach_type
On Thu, Apr 25, 2019 at 2:21 PM Daniel Borkmann <daniel@...earbox.net> wrote:
>
> On 04/24/2019 11:31 PM, Stanislav Fomichev wrote:
> > target_fd is target namespace. If there is a flow dissector BPF program
> > attached to that namespace, its (single) id is returned.
> >
> > v4:
> > * add missing put_net (Jann Horn)
> >
> > v3:
> > * add missing inline to skb_flow_dissector_prog_query static def
> > (kbuild test robot <lkp@...el.com>)
> >
> > v2:
> > * don't sleep in rcu critical section (Jakub Kicinski)
> > * check input prog_cnt (exit early)
> >
> > Cc: Jann Horn <jannh@...gle.com>
> > Signed-off-by: Stanislav Fomichev <sdf@...gle.com>
> > ---
> > include/linux/skbuff.h | 8 +++++++
> > kernel/bpf/syscall.c | 2 ++
> > net/core/flow_dissector.c | 46 +++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 56 insertions(+)
> >
> > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> > index 998256c2820b..6d58fa8a65fd 100644
> > --- a/include/linux/skbuff.h
> > +++ b/include/linux/skbuff.h
> > @@ -1258,11 +1258,19 @@ void skb_flow_dissector_init(struct flow_dissector *flow_dissector,
> > unsigned int key_count);
> >
> > #ifdef CONFIG_NET
> > +int skb_flow_dissector_prog_query(const union bpf_attr *attr,
> > + union bpf_attr __user *uattr);
> > int skb_flow_dissector_bpf_prog_attach(const union bpf_attr *attr,
> > struct bpf_prog *prog);
> >
> > int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr);
> > #else
> > +static inline int skb_flow_dissector_prog_query(const union bpf_attr *attr,
> > + union bpf_attr __user *uattr)
> > +{
> > + return -EOPNOTSUPP;
> > +}
> > +
> > static inline int skb_flow_dissector_bpf_prog_attach(const union bpf_attr *attr,
> > struct bpf_prog *prog)
> > {
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index 92c9b8a32b50..b0de49598341 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -2009,6 +2009,8 @@ static int bpf_prog_query(const union bpf_attr *attr,
> > break;
> > case BPF_LIRC_MODE2:
> > return lirc_prog_query(attr, uattr);
> > + case BPF_FLOW_DISSECTOR:
> > + return skb_flow_dissector_prog_query(attr, uattr);
> > default:
> > return -EINVAL;
> > }
> > diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> > index fac712cee9d5..27466e54ad3a 100644
> > --- a/net/core/flow_dissector.c
> > +++ b/net/core/flow_dissector.c
> > @@ -65,6 +65,52 @@ void skb_flow_dissector_init(struct flow_dissector *flow_dissector,
> > }
> > EXPORT_SYMBOL(skb_flow_dissector_init);
> >
> > +int skb_flow_dissector_prog_query(const union bpf_attr *attr,
> > + union bpf_attr __user *uattr)
> > +{
> > + __u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);
> > + u32 prog_id, prog_cnt = 0, flags = 0;
> > + struct bpf_prog *attached;
> > + struct net *net;
> > + int ret = 0;
> > +
> > + if (attr->query.query_flags)
> > + return -EINVAL;
> > +
> > + net = get_net_ns_by_fd(attr->query.target_fd);
> > + if (IS_ERR(net))
> > + return PTR_ERR(net);
> > +
> > + rcu_read_lock();
> > + attached = rcu_dereference(net->flow_dissector_prog);
> > + if (attached) {
> > + prog_cnt = 1;
> > + prog_id = attached->aux->id;
> > + }
> > + rcu_read_unlock();
>
> Patch looks good to me, one small nit: is there any reason you didn't
> do the put_net(net) right after the rcu_read_unlock() above? Below it's
> not really needed anymore, so this would also simplify the error paths
> by being able to directly return in error case, no?
Yes, good point. I was probably locked into the pattern of dropping
refs at the end of a routine. Dropping it right after rcu_read_unlock()
indeed looks much better. Will follow up with another version, thank
you for the suggestion!
> > + if (copy_to_user(&uattr->query.attach_flags, &flags, sizeof(flags))) {
> > + ret = -EFAULT;
> > + goto out;
> > + }
> > + if (copy_to_user(&uattr->query.prog_cnt, &prog_cnt, sizeof(prog_cnt))) {
> > + ret = -EFAULT;
> > + goto out;
> > + }
> > +
> > + if (!attr->query.prog_cnt || !prog_ids || !prog_cnt)
> > + goto out;
> > +
> > + if (copy_to_user(prog_ids, &prog_id, sizeof(u32))) {
> > + ret = -EFAULT;
> > + goto out;
> > + }
> > +
> > +out:
> > + put_net(net);
> > + return ret;
> > +}
> > +
> > int skb_flow_dissector_bpf_prog_attach(const union bpf_attr *attr,
> > struct bpf_prog *prog)
> > {
> >
>