Message-Id: <20190820001805.241928-24-matthewgarrett@google.com>
Date: Mon, 19 Aug 2019 17:17:59 -0700
From: Matthew Garrett <matthewgarrett@...gle.com>
To: jmorris@...ei.org
Cc: linux-security-module@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
David Howells <dhowells@...hat.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Matthew Garrett <mjg59@...gle.com>,
Kees Cook <keescook@...omium.org>, netdev@...r.kernel.org,
Chun-Yi Lee <jlee@...e.com>,
Daniel Borkmann <daniel@...earbox.net>
Subject: [PATCH V40 23/29] bpf: Restrict bpf when kernel lockdown is in
confidentiality mode
From: David Howells <dhowells@...hat.com>

The bpf_probe_read() and bpf_probe_read_str() helpers could potentially be
abused to leak sensitive data, e.g. private keys, out of kernel memory.
Disable them if the kernel has been locked down in confidentiality mode.
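
For illustration only (not part of this patch), here is a minimal sketch of
the kind of tracing program the change defends against: a kprobe-attached
BPF program that copies kernel memory out through the trace buffer with
bpf_probe_read(). The attach point and the libbpf-style bpf_helpers.h header
are assumptions; with this patch applied and the kernel locked down in
confidentiality mode, the helper instead fails with -EPERM and zeroes the
destination buffer.

/* Hypothetical sketch, not part of this series: a kprobe BPF program
 * that reads arbitrary kernel memory via bpf_probe_read().  Header
 * paths and the attach point are assumptions based on common libbpf
 * usage; only the helper calls themselves matter here.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("kprobe/vfs_read")
int dump_kernel_word(void *ctx)
{
	__u64 word = 0;
	char fmt[] = "leaked: %llx\n";

	/* ctx is a kernel address; any other kernel pointer would do.
	 * Without lockdown this copies eight bytes of kernel memory
	 * into 'word'.  With this patch and lockdown=confidentiality
	 * the call fails with -EPERM and 'word' is zeroed instead. */
	bpf_probe_read(&word, sizeof(word), ctx);
	bpf_trace_printk(fmt, sizeof(fmt), word);

	return 0;
}

char _license[] SEC("license") = "GPL";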
Suggested-by: Alexei Starovoitov <alexei.starovoitov@...il.com>
Signed-off-by: Matthew Garrett <mjg59@...gle.com>
Reviewed-by: Kees Cook <keescook@...omium.org>
cc: netdev@...r.kernel.org
cc: Chun-Yi Lee <jlee@...e.com>
cc: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Daniel Borkmann <daniel@...earbox.net>
Signed-off-by: James Morris <jmorris@...ei.org>
---
 include/linux/security.h     |  1 +
 kernel/trace/bpf_trace.c     | 10 ++++++++++
 security/lockdown/lockdown.c |  1 +
 3 files changed, 12 insertions(+)

diff --git a/include/linux/security.h b/include/linux/security.h
index 0b2529dbf0f4..e604f4c67f03 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -118,6 +118,7 @@ enum lockdown_reason {
 	LOCKDOWN_INTEGRITY_MAX,
 	LOCKDOWN_KCORE,
 	LOCKDOWN_KPROBES,
+	LOCKDOWN_BPF_READ,
 	LOCKDOWN_CONFIDENTIALITY_MAX,
 };
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 1c9a4745e596..33a954c367f3 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -139,8 +139,13 @@ BPF_CALL_3(bpf_probe_read, void *, dst, u32, size, const void *, unsafe_ptr)
 {
 	int ret;
 
+	ret = security_locked_down(LOCKDOWN_BPF_READ);
+	if (ret < 0)
+		goto out;
+
 	ret = probe_kernel_read(dst, unsafe_ptr, size);
 	if (unlikely(ret < 0))
+out:
 		memset(dst, 0, size);
 
 	return ret;
@@ -566,6 +571,10 @@ BPF_CALL_3(bpf_probe_read_str, void *, dst, u32, size,
 {
 	int ret;
 
+	ret = security_locked_down(LOCKDOWN_BPF_READ);
+	if (ret < 0)
+		goto out;
+
 	/*
 	 * The strncpy_from_unsafe() call will likely not fill the entire
 	 * buffer, but that's okay in this circumstance as we're probing
@@ -577,6 +586,7 @@ BPF_CALL_3(bpf_probe_read_str, void *, dst, u32, size,
 	 */
 	ret = strncpy_from_unsafe(dst, unsafe_ptr, size);
 	if (unlikely(ret < 0))
+out:
 		memset(dst, 0, size);
 
 	return ret;
diff --git a/security/lockdown/lockdown.c b/security/lockdown/lockdown.c
index 27b2cf51e443..2397772c56bd 100644
--- a/security/lockdown/lockdown.c
+++ b/security/lockdown/lockdown.c
@@ -33,6 +33,7 @@ static char *lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = {
 	[LOCKDOWN_INTEGRITY_MAX] = "integrity",
 	[LOCKDOWN_KCORE] = "/proc/kcore access",
 	[LOCKDOWN_KPROBES] = "use of kprobes",
+	[LOCKDOWN_BPF_READ] = "use of bpf to read kernel RAM",
 	[LOCKDOWN_CONFIDENTIALITY_MAX] = "confidentiality",
 };
--
2.23.0.rc1.153.gdeed80330f-goog