Message-ID: <CAMB2axMTqkvqQ2Mw6K4RzBrGfhOWjZd3Py1WYybm=y2mYsGAjg@mail.gmail.com>
Date: Fri, 26 Jul 2024 15:45:39 -0700
From: Amery Hung <ameryhung@...il.com>
To: Kui-Feng Lee <sinquersw@...il.com>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, yangpeihao@...u.edu.cn, 
	daniel@...earbox.net, andrii@...nel.org, alexei.starovoitov@...il.com, 
	martin.lau@...nel.org, toke@...hat.com, jhs@...atatu.com, jiri@...nulli.us, 
	sdf@...gle.com, xiyou.wangcong@...il.com, yepeilin.cs@...il.com
Subject: Re: [RFC PATCH v9 03/11] bpf: Allow struct_ops prog to return
 referenced kptr

On Fri, Jul 26, 2024 at 11:22 AM Kui-Feng Lee <sinquersw@...il.com> wrote:
>
>
>
> On 7/24/24 13:44, Amery Hung wrote:
> > On Tue, Jul 23, 2024 at 10:36 PM Kui-Feng Lee <sinquersw@...il.com> wrote:
> >>
> >>
> >>
> >> On 7/14/24 10:51, Amery Hung wrote:
> >>> Allow a struct_ops program to return a referenced kptr if the struct_ops
> >>> operator has a pointer to struct as its return type. To make sure the
> >>> returned pointer continues to be valid in the kernel, several
> >>> constraints are required:
> >>>
> >>> 1) The type of the pointer must match the return type
> >>> 2) The pointer originally comes from the kernel (not locally allocated)
> >>> 3) The pointer is in its unmodified form
> >>>
> >>> In addition, since the first user, Qdisc_ops::dequeue, allows a NULL
> >>> pointer to be returned when there is no skb to be dequeued, we also allow
> >>> a scalar value equal to NULL to be returned.
> >>>
> >>> In the future when there is a struct_ops user that always expects a valid
> >>> pointer to be returned from an operator, we may extend tagging to the
> >>> return value. We can tell the verifier to only allow NULL pointer return
> >>> if the return value is tagged with MAY_BE_NULL.
> >>>
> >>> The check is split into two parts since check_reference_leak() happens
> >>> before check_return_code(). We first allow a reference object to leak
> >>> through return if it is in the return register and the type matches the
> >>> return type. Then, we check whether the pointer to-be-returned is valid in
> >>> check_return_code().
> >>>
> >>> Signed-off-by: Amery Hung <amery.hung@...edance.com>
> >>> ---
> >>>    kernel/bpf/verifier.c | 50 +++++++++++++++++++++++++++++++++++++++----
> >>>    1 file changed, 46 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> >>> index f614ab283c37..e7f356098902 100644
> >>> --- a/kernel/bpf/verifier.c
> >>> +++ b/kernel/bpf/verifier.c
> >>> @@ -10188,16 +10188,36 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
> >>>
> >>>    static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exit)
> >>>    {
> >>> +     enum bpf_prog_type type = resolve_prog_type(env->prog);
> >>> +     u32 regno = exception_exit ? BPF_REG_1 : BPF_REG_0;
> >>> +     struct bpf_reg_state *reg = reg_state(env, regno);
> >>>        struct bpf_func_state *state = cur_func(env);
> >>> +     const struct bpf_prog *prog = env->prog;
> >>> +     const struct btf_type *ret_type = NULL;
> >>>        bool refs_lingering = false;
> >>> +     struct btf *btf;
> >>>        int i;
> >>>
> >>>        if (!exception_exit && state->frameno && !state->in_callback_fn)
> >>>                return 0;
> >>>
> >>> +     if (type == BPF_PROG_TYPE_STRUCT_OPS &&
> >>> +         reg->type & PTR_TO_BTF_ID && reg->ref_obj_id) {
> >>> +             btf = bpf_prog_get_target_btf(prog);
> >>> +             ret_type = btf_type_by_id(btf, prog->aux->attach_func_proto->type);
> >>> +             if (reg->btf_id != ret_type->type) {
> >>> +                     verbose(env, "Return kptr type, struct %s, doesn't match function prototype, struct %s\n",
> >>> +                             btf_type_name(reg->btf, reg->btf_id),
> >>> +                             btf_type_name(btf, ret_type->type));
> >>> +                     return -EINVAL;
> >>> +             }
> >>> +     }
> >>> +
> >>>        for (i = 0; i < state->acquired_refs; i++) {
> >>>                if (!exception_exit && state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
> >>>                        continue;
> >>> +             if (ret_type && reg->ref_obj_id == state->refs[i].id)
> >>> +                     continue;
> >>
> >> Is it possible having two kptrs that both are in the returned type
> >> passing into a function?
> >>
> >
> > Just to make sure I understand the question correctly: Are you asking
> > what would happen here if a struct_ops operator has the following
> > signature?
> >
> > struct foo *xxx_ops__dummy_op(struct foo *foo_a__ref, struct foo *foo_b__ref)
>
> Right! What would happen to this case? Could one of them leak without
> being detected?
>

There will be one ref_obj_id for foo_a and another for foo_b when we
enter the program (patch 1). Then, in the for loop in
check_reference_leak(), reg->ref_obj_id can only match one of them,
and any other acquired reference will still be reported as a leak.
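
For illustration, a minimal sketch of such a program (the ops name
dummy_op and the release kfunc bpf_foo_release() are hypothetical, and
the ops member is assumed to be declared with __ref-tagged arguments
as in patch 1):

    SEC("struct_ops/dummy_op")
    struct foo *BPF_PROG(dummy_op, struct foo *foo_a, struct foo *foo_b)
    {
            /* foo_b carries its own acquired reference. It has to be
             * released here; otherwise the loop in check_reference_leak()
             * still reports "Unreleased reference" for its ref_obj_id.
             */
            bpf_foo_release(foo_b);

            /* Only the reference whose id matches reg->ref_obj_id of the
             * return register may escape through the return value.
             */
            return foo_a;
    }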

> >
> >>
> >>>                verbose(env, "Unreleased reference id=%d alloc_insn=%d\n",
> >>>                        state->refs[i].id, state->refs[i].insn_idx);
> >>>                refs_lingering = true;
> >>> @@ -15677,12 +15697,15 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
> >>>        const char *exit_ctx = "At program exit";
> >>>        struct tnum enforce_attach_type_range = tnum_unknown;
> >>>        const struct bpf_prog *prog = env->prog;
> >>> -     struct bpf_reg_state *reg;
> >>> +     struct bpf_reg_state *reg = reg_state(env, regno);
> >>>        struct bpf_retval_range range = retval_range(0, 1);
> >>>        enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
> >>>        int err;
> >>>        struct bpf_func_state *frame = env->cur_state->frame[0];
> >>>        const bool is_subprog = frame->subprogno;
> >>> +     struct btf *btf = bpf_prog_get_target_btf(prog);
> >>> +     bool st_ops_ret_is_kptr = false;
> >>> +     const struct btf_type *t;
> >>>
> >>>        /* LSM and struct_ops func-ptr's return type could be "void" */
> >>>        if (!is_subprog || frame->in_exception_callback_fn) {
> >>> @@ -15691,10 +15714,26 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
> >>>                        if (prog->expected_attach_type == BPF_LSM_CGROUP)
> >>>                                /* See below, can be 0 or 0-1 depending on hook. */
> >>>                                break;
> >>> -                     fallthrough;
> >>> +                     if (!prog->aux->attach_func_proto->type)
> >>> +                             return 0;
> >>> +                     break;
> >>>                case BPF_PROG_TYPE_STRUCT_OPS:
> >>>                        if (!prog->aux->attach_func_proto->type)
> >>>                                return 0;
> >>> +
> >>> +                     t = btf_type_by_id(btf, prog->aux->attach_func_proto->type);
> >>> +                     if (btf_type_is_ptr(t)) {
> >>> +                             /* Allow struct_ops programs to return kptr or null if
> >>> +                              * the return type is a pointer type.
> >>> +                              * check_reference_leak has ensured the returning kptr
> >>> +                              * matches the type of the function prototype and is
> >>> +                              * the only leaking reference. Thus, we can safely return
> >>> +                              * if the pointer is in its unmodified form
> >>> +                              */
> >>> +                             if (reg->type & PTR_TO_BTF_ID)
> >>> +                                     return __check_ptr_off_reg(env, reg, regno, false);
> >>> +                             st_ops_ret_is_kptr = true;
> >>> +                     }
> >>>                        break;
> >>>                default:
> >>>                        break;
> >>> @@ -15716,8 +15755,6 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
> >>>                return -EACCES;
> >>>        }
> >>>
> >>> -     reg = cur_regs(env) + regno;
> >>> -
> >>>        if (frame->in_async_callback_fn) {
> >>>                /* enforce return zero from async callbacks like timer */
> >>>                exit_ctx = "At async callback return";
> >>> @@ -15804,6 +15841,11 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
> >>>        case BPF_PROG_TYPE_NETFILTER:
> >>>                range = retval_range(NF_DROP, NF_ACCEPT);
> >>>                break;
> >>> +     case BPF_PROG_TYPE_STRUCT_OPS:
> >>> +             if (!st_ops_ret_is_kptr)
> >>> +                     return 0;
> >>> +             range = retval_range(0, 0);
> >>> +             break;
> >>>        case BPF_PROG_TYPE_EXT:
> >>>                /* freplace program can return anything as its return value
> >>>                 * depends on the to-be-replaced kernel func or bpf program.
