Message-ID: <157601267151.12904.7408818232910113434.stgit@tstruk-mobl1>
Date: Tue, 10 Dec 2019 13:17:51 -0800
From: Tadeusz Struk <tadeusz.struk@...el.com>
To: jarkko.sakkinen@...ux.intel.com
Cc: tadeusz.struk@...el.com, peterz@...radead.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org, jgg@...pe.ca,
mingo@...hat.com, jeffrin@...agiritech.edu.in,
linux-integrity@...r.kernel.org, will@...nel.org, peterhuewe@....de
Subject: [PATCH] tpm: fix WARNING: lock held when returning to user space

When an application sends TPM commands in NONBLOCKING mode,
the driver is still holding chip->tpm_mutex when returning
from write(), which triggers:

WARNING: lock held when returning to user space!

To fix this, the driver needs to release the mutex before
returning from write() and acquire it again in
tpm_dev_async_work() before sending the command.

Cc: stable@...r.kernel.org
Fixes: 9e1b74a63f776 ("tpm: add support for nonblocking operation")
Signed-off-by: Tadeusz Struk <tadeusz.struk@...el.com>
---
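Note for reviewers (not for the commit message): reduced to a
sketch, the broken flow looks like this. This is simplified from
the pre-patch driver code, with error handling and unrelated
details dropped, so the lockdep complaint is easier to see:

	/*
	 * Before this patch: tpm_common_write() takes the ops
	 * reference (and with it chip->tpm_mutex), and the
	 * O_NONBLOCK branch then returns to user space without
	 * dropping it; only the async worker put it later, hence
	 * "WARNING: lock held when returning to user space!".
	 */
	ssize_t tpm_common_write(...)
	{
		...
		ret = tpm_try_get_ops(priv->chip); /* takes chip->tpm_mutex */
		...
		if (file->f_flags & O_NONBLOCK) {
			priv->command_enqueued = true;
			queue_work(tpm_dev_wq, &priv->async_work);
			mutex_unlock(&priv->buffer_mutex);
			return size; /* tpm_mutex still held! */
		}
		...
	}

	/*
	 * After this patch: write() puts the ops before returning,
	 * and tpm_dev_async_work() takes and puts them itself around
	 * the actual transmit, so no lock is held across the syscall
	 * boundary.
	 */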
 drivers/char/tpm/tpm-dev-common.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
index 2ec47a69a2a6..b23b0b999232 100644
--- a/drivers/char/tpm/tpm-dev-common.c
+++ b/drivers/char/tpm/tpm-dev-common.c
@@ -61,6 +61,12 @@ static void tpm_dev_async_work(struct work_struct *work)
 
 	mutex_lock(&priv->buffer_mutex);
 	priv->command_enqueued = false;
+	ret = tpm_try_get_ops(priv->chip);
+	if (ret) {
+		priv->response_length = ret;
+		goto out;
+	}
+
 	ret = tpm_dev_transmit(priv->chip, priv->space, priv->data_buffer,
 			       sizeof(priv->data_buffer));
 	tpm_put_ops(priv->chip);
@@ -68,6 +74,7 @@ static void tpm_dev_async_work(struct work_struct *work)
 		priv->response_length = ret;
 		mod_timer(&priv->user_read_timer, jiffies + (120 * HZ));
 	}
+out:
 	mutex_unlock(&priv->buffer_mutex);
 	wake_up_interruptible(&priv->async_wait);
 }
@@ -204,6 +211,7 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
 	if (file->f_flags & O_NONBLOCK) {
 		priv->command_enqueued = true;
 		queue_work(tpm_dev_wq, &priv->async_work);
+		tpm_put_ops(priv->chip);
 		mutex_unlock(&priv->buffer_mutex);
 		return size;
 	}
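
For reference, the path this patch fixes is driven from user space
roughly as below. This is an illustrative sketch only, not part of
the patch: the buffer is a TPM2_GetRandom command asking for 16
bytes of randomness, and error handling is kept minimal.

	#include <fcntl.h>
	#include <poll.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/*
		 * TPM2 header: TPM_ST_NO_SESSIONS, commandSize = 12,
		 * TPM2_CC_GetRandom, then bytesRequested = 16.
		 */
		unsigned char cmd[] = {
			0x80, 0x01, 0x00, 0x00, 0x00, 0x0c,
			0x00, 0x00, 0x01, 0x7b, 0x00, 0x10
		};
		unsigned char rsp[4096];
		struct pollfd fds = { .events = POLLIN };
		ssize_t n;

		fds.fd = open("/dev/tpm0", O_RDWR | O_NONBLOCK);
		if (fds.fd < 0)
			return 1;

		/*
		 * write() only queues the command and returns; the
		 * driver transmits it later from tpm_dev_async_work().
		 */
		if (write(fds.fd, cmd, sizeof(cmd)) != sizeof(cmd))
			return 1;

		/* Wait for the async worker to post the response. */
		if (poll(&fds, 1, -1) == 1) {
			n = read(fds.fd, rsp, sizeof(rsp));
			printf("response: %zd bytes\n", n);
		}

		close(fds.fd);
		return 0;
	}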