Message-ID: <20260210232949.3770644-1-cmllamas@google.com>
Date: Tue, 10 Feb 2026 23:28:20 +0000
From: Carlos Llamas <cmllamas@...gle.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Arve Hjønnevåg" <arve@...roid.com>, Todd Kjos <tkjos@...roid.com>,
Christian Brauner <brauner@...nel.org>, Carlos Llamas <cmllamas@...gle.com>,
Alice Ryhl <aliceryhl@...gle.com>, Wedson Almeida Filho <wedsonaf@...il.com>,
Matt Gilbride <mattgilbride@...gle.com>, Paul Moore <paul@...l-moore.com>,
Vitaly Wool <vitaly.wool@...sulko.se>, Miguel Ojeda <ojeda@...nel.org>
Cc: kernel-team@...roid.com, linux-kernel@...r.kernel.org,
Tiffany Yang <ynaffit@...gle.com>, stable@...r.kernel.org
Subject: [PATCH] rust_binder: fix oneway spam detection

The spam detection logic in TreeRange ran before the current request
was inserted into the tree, so the new request was never factored into
the spam calculation. Fix this by running the detection after the new
range has been inserted.
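
For illustration, the ordering problem boils down to the following
standalone sketch (not the driver code; the names and the threshold
are made up): a per-pid scan that runs before the incoming request is
recorded is always one request behind.

  fn count_for_pid(ranges: &[(u32, usize)], pid: u32) -> usize {
      ranges.iter().filter(|(p, _)| *p == pid).count()
  }

  fn main() {
      let threshold = 2; // hypothetical "too many buffers" limit
      let mut ranges: Vec<(u32, usize)> = vec![(42, 10), (42, 10)];

      // Old order: check before insert. The incoming request is not
      // counted, so 2 > 2 is false and the spammer is missed.
      let detected_before = count_for_pid(&ranges, 42) > threshold;

      // New order: insert first, then check. Now 3 > 2 is true.
      ranges.push((42, 10));
      let detected_after = count_for_pid(&ranges, 42) > threshold;

      assert!(!detected_before && detected_after);
  }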

Also, the detection logic was missing from ArrayRange altogether,
which meant large spamming transactions could go undetected there.
Fix this by implementing an equivalent low_oneway_space() check in
ArrayRange.
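
For reference, the thresholds are the same ones the tree version
already uses; as a standalone sketch with simplified types (not the
driver code):

  struct Range { pid: u32, size: usize, is_oneway: bool }

  fn low_oneway_space(ranges: &[Range], calling_pid: u32, total_size: usize) -> bool {
      let mut total_alloc_size = 0;
      let mut num_buffers = 0;
      for r in ranges {
          if r.is_oneway && r.pid == calling_pid {
              total_alloc_size += r.size;
              num_buffers += 1;
          }
      }
      // Flag a pid holding more than 50 oneway buffers or more than a
      // quarter of the total buffer size.
      num_buffers > 50 || total_alloc_size > total_size / 4
  }

  fn main() {
      // 51 small oneway buffers from one pid trip the count threshold
      // even though they occupy little space.
      let ranges: Vec<Range> = (0..51)
          .map(|_| Range { pid: 42, size: 8, is_oneway: true })
          .collect();
      assert!(low_oneway_space(&ranges, 42, 1 << 20));
  }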

Note that I looked into centralizing this logic in RangeAllocator,
but iterating over 'state' and 'size' generically got a bit too
complicated (for me) and I abandoned the effort.

Cc: stable@...r.kernel.org
Cc: Alice Ryhl <aliceryhl@...gle.com>
Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
Signed-off-by: Carlos Llamas <cmllamas@...gle.com>
---
drivers/android/binder/range_alloc/array.rs | 35 +++++++++++++++++++--
drivers/android/binder/range_alloc/mod.rs | 4 +--
drivers/android/binder/range_alloc/tree.rs | 18 +++++------
3 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/drivers/android/binder/range_alloc/array.rs b/drivers/android/binder/range_alloc/array.rs
index 07e1dec2ce63..ada1d1b4302e 100644
--- a/drivers/android/binder/range_alloc/array.rs
+++ b/drivers/android/binder/range_alloc/array.rs
@@ -118,7 +118,7 @@ pub(crate) fn reserve_new(
         size: usize,
         is_oneway: bool,
         pid: Pid,
-    ) -> Result<usize> {
+    ) -> Result<(usize, bool)> {
         // Compute new value of free_oneway_space, which is set only on success.
         let new_oneway_space = if is_oneway {
             match self.free_oneway_space.checked_sub(size) {
@@ -146,7 +146,38 @@ pub(crate) fn reserve_new(
             .ok()
             .unwrap();
 
-        Ok(insert_at_offset)
+        // Start detecting spammers once we have less than 20%
+        // of async space left (which is less than 10% of total
+        // buffer size).
+        //
+        // (This will short-circuit, so `low_oneway_space` is
+        // only called when necessary.)
+        let oneway_spam_detected =
+            is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);
+
+        Ok((insert_at_offset, oneway_spam_detected))
+    }
+
+    /// Find the amount and size of buffers allocated by the current caller.
+    ///
+    /// The idea is that once we cross the threshold, whoever is responsible
+    /// for the low async space is likely to try to send another async transaction,
+    /// and at some point we'll catch them in the act. This is more efficient
+    /// than keeping a map per pid.
+    fn low_oneway_space(&self, calling_pid: Pid) -> bool {
+        let mut total_alloc_size = 0;
+        let mut num_buffers = 0;
+
+        // Warn if this pid has more than 50 transactions, or more than 50% of
+        // async space (which is 25% of total buffer size). Oneway spam is only
+        // detected when the threshold is exceeded.
+        for range in &self.ranges {
+            if range.state.is_oneway() && range.state.pid() == calling_pid {
+                total_alloc_size += range.size;
+                num_buffers += 1;
+            }
+        }
+        num_buffers > 50 || total_alloc_size > self.size / 4
     }
 
     pub(crate) fn reservation_abort(&mut self, offset: usize) -> Result<FreedRange> {
diff --git a/drivers/android/binder/range_alloc/mod.rs b/drivers/android/binder/range_alloc/mod.rs
index 2301e2bc1a1f..1f4734468ff1 100644
--- a/drivers/android/binder/range_alloc/mod.rs
+++ b/drivers/android/binder/range_alloc/mod.rs
@@ -188,11 +188,11 @@ pub(crate) fn reserve_new(&mut self, mut args: ReserveNewArgs<T>) -> Result<Rese
                 self.reserve_new(args)
             }
             Impl::Array(array) => {
-                let offset =
+                let (offset, oneway_spam_detected) =
                     array.reserve_new(args.debug_id, args.size, args.is_oneway, args.pid)?;
                 Ok(ReserveNew::Success(ReserveNewSuccess {
                     offset,
-                    oneway_spam_detected: false,
+                    oneway_spam_detected,
                     _empty_array_alloc: args.empty_array_alloc,
                     _new_tree_alloc: args.new_tree_alloc,
                     _tree_alloc: args.tree_alloc,
diff --git a/drivers/android/binder/range_alloc/tree.rs b/drivers/android/binder/range_alloc/tree.rs
index 838fdd2b47ea..48796fcdb362 100644
--- a/drivers/android/binder/range_alloc/tree.rs
+++ b/drivers/android/binder/range_alloc/tree.rs
@@ -164,15 +164,6 @@ pub(crate) fn reserve_new(
             self.free_oneway_space
         };
 
-        // Start detecting spammers once we have less than 20%
-        // of async space left (which is less than 10% of total
-        // buffer size).
-        //
-        // (This will short-circut, so `low_oneway_space` is
-        // only called when necessary.)
-        let oneway_spam_detected =
-            is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);
-
         let (found_size, found_off, tree_node, free_tree_node) = match self.find_best_match(size) {
             None => {
                 pr_warn!("ENOSPC from range_alloc.reserve_new - size: {}", size);
@@ -203,6 +194,15 @@ pub(crate) fn reserve_new(
             self.free_tree.insert(free_tree_node);
         }
 
+        // Start detecting spammers once we have less than 20%
+        // of async space left (which is less than 10% of total
+        // buffer size).
+        //
+        // (This will short-circuit, so `low_oneway_space` is
+        // only called when necessary.)
+        let oneway_spam_detected =
+            is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);
+
         Ok((found_off, oneway_spam_detected))
     }
 
--
2.53.0.239.g8d8fc8a987-goog