Message-ID: <CAKH8qBuMMuuUJiZJY8Gb+tMQLKoRGpvv58sSM4sZXjyEc0i7dA@mail.gmail.com>
Date:   Mon, 11 Apr 2022 11:46:20 -0700
From:   Stanislav Fomichev <sdf@...gle.com>
To:     Martin KaFai Lau <kafai@...com>
Cc:     netdev@...r.kernel.org, bpf@...r.kernel.org, ast@...nel.org,
        daniel@...earbox.net, andrii@...nel.org
Subject: Re: [PATCH bpf-next v3 3/7] bpf: minimize number of allocated lsm
 slots per program

On Fri, Apr 8, 2022 at 3:57 PM Martin KaFai Lau <kafai@...com> wrote:
>
> On Thu, Apr 07, 2022 at 03:31:08PM -0700, Stanislav Fomichev wrote:
> > The previous patch adds a 1:1 mapping between all 211 LSM hooks
> > and the bpf_cgroup program array. Instead of reserving a slot per
> > possible hook, reserve 10 slots per cgroup for lsm programs.
> > Those slots are dynamically allocated on demand and reclaimed.
> > This still adds some bloat to the cgroup and brings us back to
> > roughly pre-cgroup_bpf_attach_type times.
> >
> > It should be possible to eventually extend this idea to all hooks
> > and shrink the overall effective programs array if the memory
> > consumption turns out to be unacceptable.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@...gle.com>
> > ---
> >  include/linux/bpf-cgroup-defs.h |  4 +-
> >  include/linux/bpf_lsm.h         |  6 ---
> >  kernel/bpf/bpf_lsm.c            |  9 ++--
> >  kernel/bpf/cgroup.c             | 96 ++++++++++++++++++++++++++++-----
> >  4 files changed, 90 insertions(+), 25 deletions(-)
> >
> > diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
> > index 6c661b4df9fa..d42516e86b3a 100644
> > --- a/include/linux/bpf-cgroup-defs.h
> > +++ b/include/linux/bpf-cgroup-defs.h
> > @@ -10,7 +10,9 @@
> >
> >  struct bpf_prog_array;
> >
> > -#define CGROUP_LSM_NUM 211 /* will be addressed in the next patch */
> > +/* Maximum number of concurrently attachable per-cgroup LSM hooks.
> > + */
> > +#define CGROUP_LSM_NUM 10
> hmm...only 10 different lsm hooks (or 10 different attach_btf_ids) can
> have BPF_LSM_CGROUP programs attached.  This feels quite limited, but
> having a static 211 (and potentially growing in the future) is not good
> either.  I do not currently have a better idea myself. :/
>
> Have you thought about other dynamic schemes, or would they be too slow?
>
> >  enum cgroup_bpf_attach_type {
> >       CGROUP_BPF_ATTACH_TYPE_INVALID = -1,
> > diff --git a/include/linux/bpf_lsm.h b/include/linux/bpf_lsm.h
> > index 7f0e59f5f9be..613de44aa429 100644
> > --- a/include/linux/bpf_lsm.h
> > +++ b/include/linux/bpf_lsm.h
> > @@ -43,7 +43,6 @@ extern const struct bpf_func_proto bpf_inode_storage_delete_proto;
> >  void bpf_inode_storage_free(struct inode *inode);
> >
> >  int bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog, bpf_func_t *bpf_func);
> > -int bpf_lsm_hook_idx(u32 btf_id);
> >
> >  #else /* !CONFIG_BPF_LSM */
> >
> > @@ -74,11 +73,6 @@ static inline int bpf_lsm_find_cgroup_shim(const struct bpf_prog *prog,
> >       return -ENOENT;
> >  }
> >
> > -static inline int bpf_lsm_hook_idx(u32 btf_id)
> > -{
> > -     return -EINVAL;
> > -}
> > -
> >  #endif /* CONFIG_BPF_LSM */
> >
> >  #endif /* _LINUX_BPF_LSM_H */
> > diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
> > index eca258ba71d8..8b948ec9ab73 100644
> > --- a/kernel/bpf/bpf_lsm.c
> > +++ b/kernel/bpf/bpf_lsm.c
> > @@ -57,10 +57,12 @@ static unsigned int __cgroup_bpf_run_lsm_socket(const void *ctx,
> >       if (unlikely(!sk))
> >               return 0;
> >
> > +     rcu_read_lock(); /* See bpf_lsm_attach_type_get(). */
> >       cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
> >       if (likely(cgrp))
> >               ret = BPF_PROG_RUN_ARRAY_CG(cgrp->bpf.effective[prog->aux->cgroup_atype],
> >                                           ctx, bpf_prog_run, 0);
> > +     rcu_read_unlock();
> >       return ret;
> >  }
> >
> > @@ -77,7 +79,7 @@ static unsigned int __cgroup_bpf_run_lsm_current(const void *ctx,
> >       /*prog = container_of(insn, struct bpf_prog, insnsi);*/
> >       prog = (const struct bpf_prog *)((void *)insn - offsetof(struct bpf_prog, insnsi));
> >
> > -     rcu_read_lock();
> > +     rcu_read_lock(); /* See bpf_lsm_attach_type_get(). */
> I think this is also needed for task_dfl_cgroup().  If yes, it would be
> a good idea to adjust the comment if it ends up using the
> 'CGROUP_LSM_NUM 10' scheme.
>
> While at rcu_read_lock(), have you thought about what major things are
> needed to make BPF_LSM_CGROUP sleepable?
>
> The cgroup local storage could be one that requires changes, but it
> seems cgroup local storage is not available to BPF_LSM_CGROUP in this
> change set.  The current use case doesn't need it?

No, I haven't thought about sleepable at all yet :-( But it seems like
having that rcu lock here might be problematic if we want to sleep? In
that case, Jakub's suggestion seems better.
