Message-ID: <20240623134405.809025-7-sashal@kernel.org>
Date: Sun, 23 Jun 2024 09:43:40 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Cc: Baokun Li <libaokun1@...wei.com>,
Hou Tao <houtao1@...wei.com>,
Jeff Layton <jlayton@...nel.org>,
Jia Zhu <zhujia.zj@...edance.com>,
Christian Brauner <brauner@...nel.org>,
Sasha Levin <sashal@...nel.org>,
dhowells@...hat.com,
netfs@...ts.linux.dev
Subject: [PATCH AUTOSEL 6.9 07/21] cachefiles: make on-demand read killable

From: Baokun Li <libaokun1@...wei.com>

[ Upstream commit bc9dde6155464e906e630a0a5c17a4cab241ffbb ]

Replacing wait_for_completion() with wait_for_completion_killable() in
cachefiles_ondemand_send_req() allows us to kill processes that would
otherwise trigger a hung task if the daemon misbehaves.
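
As a quick illustration of the semantics this relies on (an editor's
sketch, not part of the patch): wait_for_completion() sleeps in
TASK_UNINTERRUPTIBLE, so even SIGKILL cannot unblock the caller, while
wait_for_completion_killable() returns -ERESTARTSYS when a fatal signal
arrives:

	/* Sketch: bail out on SIGKILL instead of sleeping forever. */
	ret = wait_for_completion_killable(&req->done);
	if (ret)		/* -ERESTARTSYS: fatal signal received */
		return -EINTR;	/* unwind instead of risking hung_task */
	ret = req->error;
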
For now only CACHEFILES_OP_READ is killable, because OP_CLOSE and
OP_OPEN are initiated from kworker context and signals are prohibited
in those kworkers.
Note that when the req in the xarray changes, i.e. xas_load(&xas) !=
req, it means that another process will complete the current request
soon, so we just wait again for the request to be completed (see the
sketch below).
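
In code terms, the interrupted waiter does roughly the following (this
mirrors the last hunk below; cachefiles_ondemand_finish_req() is the
helper introduced next):

	ret = wait_for_completion_killable(&req->done);
	if (ret) {
		/* Killed: try to claim and complete the req ourselves.
		 * If the claim fails, another path has already taken
		 * the req out of the xarray and will complete() it
		 * shortly, so simply wait again. */
		if (!cachefiles_ondemand_finish_req(req, &xas, -EINTR)) {
			cpu_relax();
			goto wait;	/* completion is imminent */
		}
	}
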
In addition, add the cachefiles_ondemand_finish_req() helper function to
simplify the code.
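
For readers unfamiliar with xa_cmpxchg(), the helper's exactly-once
completion guarantee rests on it (editor's note, not part of the
patch):

	/* xa_cmpxchg() atomically replaces the entry at @index with
	 * @new only if it currently equals @old, returning the old
	 * entry.  Whichever side (daemon read path or killed waiter)
	 * swaps @req for NULL first owns the completion; the loser
	 * sees a mismatch and must not touch @req again. */
	if (xa_cmpxchg(xas->xa, xas->xa_index, req, NULL, 0) != req)
		return false;	/* someone else claimed the request */
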
Suggested-by: Hou Tao <houtao1@...wei.com>
Signed-off-by: Baokun Li <libaokun1@...wei.com>
Link: https://lore.kernel.org/r/20240522114308.2402121-13-libaokun@huaweicloud.com
Acked-by: Jeff Layton <jlayton@...nel.org>
Reviewed-by: Jia Zhu <zhujia.zj@...edance.com>
Signed-off-by: Christian Brauner <brauner@...nel.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 fs/cachefiles/ondemand.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 922cab1a314b2..58bd80956c5a6 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -380,6 +380,20 @@ static struct cachefiles_req *cachefiles_ondemand_select_req(struct xa_state *xa
 	return NULL;
 }
 
+static inline bool cachefiles_ondemand_finish_req(struct cachefiles_req *req,
+						  struct xa_state *xas, int err)
+{
+	if (unlikely(!xas || !req))
+		return false;
+
+	if (xa_cmpxchg(xas->xa, xas->xa_index, req, NULL, 0) != req)
+		return false;
+
+	req->error = err;
+	complete(&req->done);
+	return true;
+}
+
 ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 					char __user *_buffer, size_t buflen)
 {
@@ -443,16 +457,8 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 out:
 	cachefiles_put_object(req->object, cachefiles_obj_put_read_req);
 	/* Remove error request and CLOSE request has no reply */
-	if (ret || msg->opcode == CACHEFILES_OP_CLOSE) {
-		xas_reset(&xas);
-		xas_lock(&xas);
-		if (xas_load(&xas) == req) {
-			req->error = ret;
-			complete(&req->done);
-			xas_store(&xas, NULL);
-		}
-		xas_unlock(&xas);
-	}
+	if (ret || msg->opcode == CACHEFILES_OP_CLOSE)
+		cachefiles_ondemand_finish_req(req, &xas, ret);
 	cachefiles_req_put(req);
 	return ret ? ret : n;
 }
@@ -544,8 +550,18 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 		goto out;
 
 	wake_up_all(&cache->daemon_pollwq);
-	wait_for_completion(&req->done);
-	ret = req->error;
+wait:
+	ret = wait_for_completion_killable(&req->done);
+	if (!ret) {
+		ret = req->error;
+	} else {
+		ret = -EINTR;
+		if (!cachefiles_ondemand_finish_req(req, &xas, ret)) {
+			/* Someone will complete it soon. */
+			cpu_relax();
+			goto wait;
+		}
+	}
 	cachefiles_req_put(req);
 	return ret;
 out:
--
2.43.0