Message-Id: <20180614132158.714204464@linuxfoundation.org>
Date: Thu, 14 Jun 2018 16:04:55 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Jan Glauber <jglauber@...ium.com>,
Robert Richter <rrichter@...ium.com>,
Herbert Xu <herbert@...dor.apana.org.au>
Subject: [PATCH 4.14 32/36] crypto: cavium - Fix fallout from CONFIG_VMAP_STACK
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jan Glauber <jglauber@...ium.com>
commit 37ff02acaa3d7be87ecb89f198a549ffd3ae2403 upstream.
Enabling virtually mapped kernel stacks breaks the thunderx_zip
driver. On compression or decompression the executing CPU hangs
in an endless loop. The reason for this is the driver's use of
__pa, which no longer works for an address that is not part of
the 1:1 mapping.
The zip driver allocates a result struct on the stack and needs
to tell the hardware the physical address within this struct
that is used to signal the completion of the request.
Because the hardware gets the wrong address after the broken __pa
conversion, it writes to an arbitrary address. The zip driver then
waits forever for the completion byte to contain a non-zero value.
Allocating the result struct from 1:1 mapped memory resolves this
bug.
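
To make the failure mode concrete, here is a minimal before/after
sketch of the pattern the patch changes (illustrative only, not the
driver's actual code; "cmd.result_addr" and the "result" field are
hypothetical stand-ins for how the completion address reaches the
hardware):

	/* Before: the result struct lives on the stack. With
	 * CONFIG_VMAP_STACK the stack is vmalloc'ed, so __pa() on it
	 * yields a bogus physical address and the device writes the
	 * completion byte somewhere else entirely.
	 */
	struct zip_state state;
	cmd.result_addr = __pa(&state.result);

	/* After: allocate from the kernel heap, which stays in the 1:1
	 * (linear) mapping, so __pa() is valid again. GFP_ATOMIC matches
	 * the allocation used in the patch below.
	 */
	struct zip_state *state = kzalloc(sizeof(*state), GFP_ATOMIC);
	if (!state)
		return -ENOMEM;
	cmd.result_addr = __pa(&state->result);
	/* ... submit request, poll state->result ... */
	kfree(state);
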
Signed-off-by: Jan Glauber <jglauber@...ium.com>
Reviewed-by: Robert Richter <rrichter@...ium.com>
Cc: stable <stable@...r.kernel.org> # 4.14
Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/crypto/cavium/zip/zip_crypto.c | 22 ++++++++++++++--------
1 file changed, 14 insertions(+), 8 deletions(-)
--- a/drivers/crypto/cavium/zip/zip_crypto.c
+++ b/drivers/crypto/cavium/zip/zip_crypto.c
@@ -124,7 +124,7 @@ int zip_compress(const u8 *src, unsigned
 		 struct zip_kernel_ctx *zip_ctx)
 {
 	struct zip_operation  *zip_ops   = NULL;
-	struct zip_state      zip_state;
+	struct zip_state      *zip_state;
 	struct zip_device     *zip = NULL;
 	int ret;
 
@@ -135,20 +135,23 @@ int zip_compress(const u8 *src, unsigned
 	if (!zip)
 		return -ENODEV;
 
-	memset(&zip_state, 0, sizeof(struct zip_state));
+	zip_state = kzalloc(sizeof(*zip_state), GFP_ATOMIC);
+	if (!zip_state)
+		return -ENOMEM;
+
 	zip_ops = &zip_ctx->zip_comp;
 
 	zip_ops->input_len  = slen;
 	zip_ops->output_len = *dlen;
 	memcpy(zip_ops->input, src, slen);
 
-	ret = zip_deflate(zip_ops, &zip_state, zip);
+	ret = zip_deflate(zip_ops, zip_state, zip);
 
 	if (!ret) {
 		*dlen = zip_ops->output_len;
 		memcpy(dst, zip_ops->output, *dlen);
 	}
-
+	kfree(zip_state);
 	return ret;
 }
 
@@ -157,7 +160,7 @@ int zip_decompress(const u8 *src, unsign
 		   struct zip_kernel_ctx *zip_ctx)
 {
 	struct zip_operation  *zip_ops   = NULL;
-	struct zip_state      zip_state;
+	struct zip_state      *zip_state;
 	struct zip_device     *zip = NULL;
 	int ret;
 
@@ -168,7 +171,10 @@ int zip_decompress(const u8 *src, unsign
 	if (!zip)
 		return -ENODEV;
 
-	memset(&zip_state, 0, sizeof(struct zip_state));
+	zip_state = kzalloc(sizeof(*zip_state), GFP_ATOMIC);
+	if (!zip_state)
+		return -ENOMEM;
+
 	zip_ops = &zip_ctx->zip_decomp;
 	memcpy(zip_ops->input, src, slen);
 
@@ -179,13 +185,13 @@ int zip_decompress(const u8 *src, unsign
 	zip_ops->input_len  = slen;
 	zip_ops->output_len = *dlen;
 
-	ret = zip_inflate(zip_ops, &zip_state, zip);
+	ret = zip_inflate(zip_ops, zip_state, zip);
 
 	if (!ret) {
 		*dlen = zip_ops->output_len;
 		memcpy(dst, zip_ops->output, *dlen);
 	}
-
+	kfree(zip_state);
 	return ret;
 }
 