Message-ID: <61ae75487d445_c5bd20827@john.notmuch>
Date: Mon, 06 Dec 2021 12:40:40 -0800
From: John Fastabend <john.fastabend@...il.com>
To: Luca Boccassi <bluca@...ian.org>,
John Fastabend <john.fastabend@...il.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Matteo Croce <mcroce@...ux.microsoft.com>
Cc: bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
KP Singh <kpsingh@...nel.org>,
Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
keyrings@...r.kernel.org,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Lorenzo Bianconi <lorenzo@...nel.org>
Subject: Re: [PATCH bpf-next 0/3] bpf: add signature
Luca Boccassi wrote:
Cutting to just the relevant pieces here.
[...]
>
> > I'll give the outline of the argument here.
> >
> > I do not believe signing BPF instructions for real programs provides
> > much additional security. For most real programs, if the application
> > or loader is exploited at runtime we have all sorts of trouble. First,
> > simply verifying the program doesn't prevent malicious use of the
> > program. If it's a network program this means DDoS, data exfiltration,
> > MITM attacks, and many other possibilities. If it's an enforcement
> > program, most enforcement actions are programmed from this
> > application, so system security is lost already. If it's an
> > observability application, it simply drops/manipulates whatever
> > observations it wants. I don't know of any useful programs that exist
> > in isolation without user space input and output as a critical
> > component. If it's not a privileged user, well, it had better not be
> > doing anything critical anyway, or be disabled outright for the
> > security focused.
> >
> > Many critical programs can't be signed by the nature of the program.
> > An optimizing network app generates optimized code at runtime.
> > Observability tools JIT the code on the fly, and enforcement tools
> > will do the same. I think the power of being able to optimize/JIT
> > the code in the application and hand it to the kernel is something
> > we will see more and more of. Saying I'm only going to accept signed
> > programs, for a distribution or anything other than a niche use case,
> > is a non-starter IMO because it breaks so many real use cases. We
> > should encourage these optimizing use cases, as I see them as
> > critical to performance and keeping overhead low.
> >
> > From a purely security standpoint I believe you are better off
> > defining the characteristics an application is allowed to have. For
> > example: allowed to probe kernel memory, make these helper calls,
> > have this many instructions, use this much memory, this much CPU,
> > etc. This lets you sandbox a BPF application (both user space and
> > kernel side) far better than any signing will allow.
> >
> > If we want to 'sign' programs we should do that from a BPF program
> > directly, where other metadata can be included in the policy. For
> > example, having a hash of the loaded program along with the calls
> > made and the process allows for rich policy decisions. I have other
> > use cases that need a hash/signature for data blobs, so it's on
> > my todo list, but not at the top yet. But being able to verify an
> > arbitrary blob of data from BPF feels like a useful operation to me
> > in general. The fact that in your case it's a set of eBPF insns and
> > in my case it's some key in a network header shouldn't matter.
> >
> > The series as is (based on a scan of the commit descriptions) is
> > going to break lots of programs in use today if it were ever
> > enabled. And it is not as flexible (can't support bpftrace, etc.)
> > or powerful (can't consider fine-grained policy decisions) as the
> > above.
> >
> > Add a function we can hook after verify (or before, up for
> > debate) and helpers to verify signatures and/or generate
> > hashes, and we get a better, more general solution. It can
> > also solve your use case, even if I believe it's not useful and
> > may break many BPF users running bpftrace, libbpf, etc.
> >
> > Thanks,
> > John
>
> Hello John,
>
> Thank you for the summary, this is much clearer.
>
> First of all, I think there's some misunderstanding: this series does
> not enable optional signatures by default, and does not enable
> mandatory signatures by default either. So I don't see how it would
> break existing use cases as you are saying? Unless I'm missing
> something?
>
> There's a kconfig to enable optional signatures - if they are present,
> they are verified; if they are not present, then nothing different
> happens. Unless I am missing something, this should be backward
> compatible. This kconfig would likely be enabled in most use cases,
> just like optionally signed kernel modules are.
Agree, without enforcement things should continue to work.
>
> Then there's a kconfig on top of that which makes signatures mandatory.
> I would not imagine this to be enabled in many cases, just in custom
> builds that have more stringent requirements. It certainly would not be
> enabled in generalist distros. Perhaps a more flexible way would be to
> introduce a sysctl, like fsverity has with
> 'fs.verity.require_signatures'? That would be just fine for our use
> case. Matteo can we do that instead in the next revision?
We want to manage this from the BPF side directly. It looks
like a policy decision, and we have use cases that are not as
simple as a yes/no global switch. For example, in the k8s world
this might be enabled via labels, i.e. user-specific per-container
policy, e.g. locking down some containers more strictly than others.
>
> Secondly, I understand that for your use case signing programs would
> not be the best approach. That's fine, and I'm glad you are working on
> an alternative that better fits your model, it will be very interesting
> to see how it looks once implemented. But that model doesn't fit
> all cases. In our case at Microsoft, we absolutely want to be able to
> pre-define at build time a list of BPF programs that are allowed to be
> loaded, and reject anything else. Userspace processes in our case are
By building this into BPF you can get the 'reject anything else' policy
and I get the metadata + reject/accept from the same hook. It's
just that your program can be very simple.
> mostly old and crufty c++ programs that can most likely be pwned by
> looking at them sideways, so they get locked down hard with multiple
> redundant layers and so on and so forth. But right now for BPF you only
> have a "can load BPF" or "cannot load BPF" knob, and that's it. This is
> not good enough: we need to be able to define a list of allowed
> payloads, and be able to enforce it, so when (not if) said processes do
> get tricked into loading something else, it will fail, despite having
Yikes, this is a bit scary from a security point of view, right? Are
those programs' maps read-only, or can the C++ program also write into
the maps and control plane? Assuming they perform some critical
functions, you really shouldn't be trusting them not to do all sorts
of other horrible things. Anyway, not too important to this discussion.
I'll just reiterate (I think you get it though) that simply enforcing
signing doesn't mean BPF is now safe. Further, these programs
have very high privileges and can do all sorts of things to the
system. But, sure, signature enforcement locks down one avenue for
loading a bogus program.
> the capability of calling bpf(). Trying to define heuristics is also
> not good enough for us - creative malicious actors have a tendency to
> come up with ways to chain things that individually are allowed and
> benign, but combined in a way that you just couldn't foresee. It would
Sure, but I would argue some restrictions can be both very restrictive
and generally useful. For example, "never allow kernel memory reads"
could be enforced from the BPF side directly. Never allow pkt
redirect, etc.
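As a purely illustrative sketch of what such a characteristic check
could look like, assuming the hypothetical load-time hook and the
prog->insn/prog->len layout used in the examples further down (the
BPF_FUNC_* helper IDs are real, everything else here is made up):
int deny_kernel_mem_reads(struct bpf_prog *prog)
{
	int i;

	for (i = 0; i < prog->len; i++) {
		struct bpf_insn *insn = &prog->insn[i];

		/* Reject any call to the kernel memory read helpers. */
		if (insn->code == (BPF_JMP | BPF_CALL) &&
		    (insn->imm == BPF_FUNC_probe_read_kernel ||
		     insn->imm == BPF_FUNC_probe_read))
			return -EPERM;
	}
	return 0; /* only allowed characteristics, let it load */
}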
> certainly cover a lot of cases, but not all. A strictly pre-defined
> list of what is allowed to run and what is not is what we need for our
> case, so that we always know exactly what is going to run and what is
> not, and can deal with the consequences accordingly, without nasty
> surprises waiting around the corner. Now in my naive view the best way
> to achieve this is via signatures and certs, as it's a well-understood
> system, with processes already in place to revoke/rotate/etc, and it's
> already used for kmods. An alternative would be hard-coding hashes I
> guess, but that would be terribly inflexible.
Another option would be to load your programs at boot time, presumably
with trusted boot enabled, and then lock down BPF completely. Then
ensure all your BPF 'programs' are read-only from the user<->kernel
interface, and this should start looking fairly close to what you
want, with all programs verified by a root of trust back to
trusted boot. This would assume you know what programs to load at
boot, though. That may or may not be a big assumption depending on
your environment.
>
> Now in terms of _how_ the signatures are done and validated, I'm sure
> there are multiple ways, and if some are better than what this series
> implements, then that's not an issue, it can be reworked. But the core
> requirement for us is: offline pre-defined list of what is allowed to
> run and what is not, with ability for hard enforcement that cannot be
> bypassed. Yes, you lose some features like JIT and so on: we don't
> care, we don't need those for our use cases. If others have different
> needs that's fine, this is all intended to be optional, not mandatory.
> There are obviously trade-offs, as always when security is involved,
> and each user can decide what's best for them.
>
> Hope this makes sense. Thanks!
I think I understand your use case. When done as a BPF helper you
can get the behavior you want with a one-line BPF program
loaded at boot.
int verify_all(struct bpf_prog *prog) {
	/* Reject the load unless the whole insn array carries a valid
	 * signature (verify_signature() here is a hypothetical helper).
	 */
	return verify_signature(prog->insn,
				prog->len * sizeof(struct bpf_insn),
				signature, KEYRING, BPF_SIGTYPE);
}
And I can write some more specific things as,
int verify_blobs(void *data, size_t data_len) {
	/* Non-zero when the signature check fails. */
	int reject = verify_signature(data, data_len, sig, KEYRING, TYPE);
	struct policy_key *key = map_get_key();

	return policy(key, reject);
}
map_get_key() looks into some datastore holding the policy, likely
using 'current' to dig something up. It doesn't just apply to BPF
progs; we can use it on other executables more generally. And I get
more interesting use cases, like allowing 'tc' programs unsigned but
requiring signatures for programs that read kernel memory, or any N
other policies that may have value. Or only allowing my dbg user
to run read-only programs, because the debug user maybe shouldn't
ever be writing into packets, etc. This drives least-privilege use
cases in fine detail.
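A purely illustrative sketch of how map_get_key()/policy() could be
filled in, again assuming the hypothetical hook and helper above:
bpf_get_current_uid_gid() and bpf_map_lookup_elem() are existing BPF
building blocks, while policy_map, struct policy_key and the rest are
made up for the example:
/* Hypothetical per-user policy, populated from user space. */
struct policy_key {
	bool require_sig;	/* must programs loaded by this uid be signed? */
	bool read_only;		/* may this uid only load read-only programs? */
};

static struct policy_key *map_get_key(void)
{
	__u32 uid = bpf_get_current_uid_gid() & 0xffffffff;

	/* policy_map: uid -> struct policy_key (assumed pinned hash map) */
	return bpf_map_lookup_elem(&policy_map, &uid);
}

static int policy(struct policy_key *key, int reject)
{
	if (key && key->require_sig && reject)
		return -EPERM;	/* signature required but verification failed */
	return 0;		/* otherwise allow the load */
}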
By making it a BPF program we sidestep the debate where the kernel
tries to get the 'right' policy for you, me, and everyone, now and in
the future. The only way I can see to do this, without getting N
policies baked into the kernel at M different hook points, is via
a BPF helper.
Thanks,
John