Message-Id: <20221024113052.348527964@linuxfoundation.org>
Date: Mon, 24 Oct 2022 13:28:25 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Hao Sun <sunhao.th@...il.com>,
Hou Tao <houtao1@...wei.com>,
Martin KaFai Lau <martin.lau@...nel.org>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.15 161/530] bpf: Propagate error from htab_lock_bucket() to userspace
From: Hou Tao <houtao1@...wei.com>
[ Upstream commit 66a7a92e4d0d091e79148a4c6ec15d1da65f4280 ]
In __htab_map_lookup_and_delete_batch(), if htab_lock_bucket() returns
-EBUSY, it will go to the next bucket. Going to the next bucket may not
only silently skip the elements in the current bucket, but may also
incur an out-of-bounds memory access or expose kernel memory to
userspace if the current bucket_cnt is greater than bucket_size or
zero.
Fix it by stopping the batch operation and returning -EBUSY when
htab_lock_bucket() fails; the application can then retry or skip the
busy batch as needed.
Fixes: 20b6cc34ea74 ("bpf: Avoid hashtab deadlock with map_locked")
Reported-by: Hao Sun <sunhao.th@...il.com>
Signed-off-by: Hou Tao <houtao1@...wei.com>
Link: https://lore.kernel.org/r/20220831042629.130006-3-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau <martin.lau@...nel.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
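As a minimal userspace sketch (not part of the kernel patch itself), the
following shows how a caller of the batch lookup-and-delete operation
might handle the -EBUSY that this fix now propagates. It assumes
libbpf's bpf_map_lookup_and_delete_batch() with the libbpf >= 1.0
convention of returning a negative errno (older libbpf returns -1 and
sets errno instead), and a hypothetical BPF_MAP_TYPE_HASH fd with __u32
keys and __u64 values; the function name, batch size and the printf
processing are illustrative only.

#include <errno.h>
#include <stdio.h>
#include <stdbool.h>
#include <bpf/bpf.h>	/* bpf_map_lookup_and_delete_batch() */

/* Drain a hash map in batches, retrying buckets that report -EBUSY.
 * map_fd is assumed to refer to a BPF_MAP_TYPE_HASH with __u32 keys
 * and __u64 values; 64 is an arbitrary batch size.
 */
static int drain_hash_map(int map_fd)
{
	__u32 keys[64];
	__u64 vals[64];
	__u32 in_batch = 0, out_batch = 0;
	bool first = true;
	int err;

	for (;;) {
		__u32 count = 64;

		err = bpf_map_lookup_and_delete_batch(map_fd,
						      first ? NULL : &in_batch,
						      &out_batch, keys, vals,
						      &count, NULL);
		first = false;

		/* Elements copied out before an error are still usable. */
		for (__u32 i = 0; i < count; i++)
			printf("key %u val %llu\n", keys[i],
			       (unsigned long long)vals[i]);

		if (err == -ENOENT)
			break;		/* iteration finished */
		if (err == -EBUSY) {
			/* Bucket was contended; retry from the position
			 * returned by the kernel (or skip it, as the
			 * application prefers).
			 */
			in_batch = out_batch;
			continue;
		}
		if (err)
			return err;	/* hard failure */

		in_batch = out_batch;	/* advance to the next batch */
	}
	return 0;
}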
kernel/bpf/hashtab.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index cae858985f0c..e7f45a966e6b 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1671,8 +1671,11 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
/* do not grab the lock unless need it (bucket_cnt > 0). */
if (locked) {
ret = htab_lock_bucket(htab, b, batch, &flags);
- if (ret)
- goto next_batch;
+ if (ret) {
+ rcu_read_unlock();
+ bpf_enable_instrumentation();
+ goto after_loop;
+ }
}
bucket_cnt = 0;
--
2.35.1