Message-Id: <1435351184-19158-6-git-send-email-dmitry.kalinkin@gmail.com>
Date:	Fri, 26 Jun 2015 23:39:40 +0300
From:	Dmitry Kalinkin <dmitry.kalinkin@...il.com>
To:	linux-kernel@...r.kernel.org, devel@...verdev.osuosl.org
Cc:	Martyn Welch <martyn.welch@...com>,
	Manohar Vanga <manohar.vanga@...il.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Dmitry Kalinkin <dmitry.kalinkin@...il.com>
Subject: [PATCHv3 5/9] staging: vme_user: allow large read()/write()

This changes large master transfers to perform a shorter read/write rather
than return -EINVAL. User space can now optimistically request a large
transfer and still get at least some data.
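As a rough illustration (not part of this patch), a user-space caller could
handle the new short-read behaviour with an ordinary read loop. The device
node path and buffer size below are only placeholders for whatever master
window and transfer size the application actually uses:

	#include <stdio.h>
	#include <stdlib.h>
	#include <fcntl.h>
	#include <unistd.h>

	#define BUF_SIZE (1024 * 1024)	/* illustrative large request */

	int main(void)
	{
		char *buf = malloc(BUF_SIZE);
		size_t total = 0;
		ssize_t n;
		int fd = open("/dev/bus/vme/m0", O_RDONLY); /* example node */

		if (fd < 0 || !buf)
			return 1;

		while (total < BUF_SIZE) {
			n = read(fd, buf + total, BUF_SIZE - total);
			if (n <= 0)	/* 0: nothing left, <0: error */
				break;
			total += n;	/* short reads are now expected */
		}

		printf("read %zu bytes\n", total);
		close(fd);
		free(buf);
		return 0;
	}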

This also removes the comments suggesting how to implement large
transfers. The current vme_master_* read and write implementations use CPU
copies that don't produce burst PCI accesses and consequently no block
transfers on the VME bus. As a result, overall performance is quite low,
and it can't be fixed by copying directly to user space. A much simpler
solution is to just reuse the kernel buffer.

Signed-off-by: Dmitry Kalinkin <dmitry.kalinkin@...il.com>
---
 drivers/staging/vme/devices/vme_user.c | 73 +++++++++++-----------------------
 1 file changed, 24 insertions(+), 49 deletions(-)

diff --git a/drivers/staging/vme/devices/vme_user.c b/drivers/staging/vme/devices/vme_user.c
index 3467cde..a2345db 100644
--- a/drivers/staging/vme/devices/vme_user.c
+++ b/drivers/staging/vme/devices/vme_user.c
@@ -120,75 +120,50 @@ struct vme_user_vma_priv {
 	atomic_t refcnt;
 };
 
-/*
- * We are going ot alloc a page during init per window for small transfers.
- * Small transfers will go VME -> buffer -> user space. Larger (more than a
- * page) transfers will lock the user space buffer into memory and then
- * transfer the data directly into the user space buffers.
- */
 static ssize_t resource_to_user(int minor, char __user *buf, size_t count,
 				loff_t *ppos)
 {
 	ssize_t retval;
 	ssize_t copied = 0;
 
-	if (count <= image[minor].size_buf) {
-		/* We copy to kernel buffer */
-		copied = vme_master_read(image[minor].resource,
-			image[minor].kern_buf, count, *ppos);
-		if (copied < 0)
-			return (int)copied;
-
-		retval = __copy_to_user(buf, image[minor].kern_buf,
-			(unsigned long)copied);
-		if (retval != 0) {
-			copied = (copied - retval);
-			pr_info("User copy failed\n");
-			return -EINVAL;
-		}
+	if (count > image[minor].size_buf)
+		count = image[minor].size_buf;
 
-	} else {
-		/* XXX Need to write this */
-		pr_info("Currently don't support large transfers\n");
-		/* Map in pages from userspace */
+	/* We copy to kernel buffer */
+	copied = vme_master_read(image[minor].resource, image[minor].kern_buf,
+				 count, *ppos);
+	if (copied < 0)
+		return (int)copied;
 
-		/* Call vme_master_read to do the transfer */
+	retval = __copy_to_user(buf, image[minor].kern_buf,
+				(unsigned long)copied);
+	if (retval != 0) {
+		copied = (copied - retval);
+		pr_info("User copy failed\n");
 		return -EINVAL;
 	}
 
 	return copied;
 }
 
-/*
- * We are going to alloc a page during init per window for small transfers.
- * Small transfers will go user space -> buffer -> VME. Larger (more than a
- * page) transfers will lock the user space buffer into memory and then
- * transfer the data directly from the user space buffers out to VME.
- */
 static ssize_t resource_from_user(unsigned int minor, const char __user *buf,
 				  size_t count, loff_t *ppos)
 {
 	ssize_t retval;
 	ssize_t copied = 0;
 
-	if (count <= image[minor].size_buf) {
-		retval = __copy_from_user(image[minor].kern_buf, buf,
-			(unsigned long)count);
-		if (retval != 0)
-			copied = (copied - retval);
-		else
-			copied = count;
-
-		copied = vme_master_write(image[minor].resource,
-			image[minor].kern_buf, copied, *ppos);
-	} else {
-		/* XXX Need to write this */
-		pr_info("Currently don't support large transfers\n");
-		/* Map in pages from userspace */
-
-		/* Call vme_master_write to do the transfer */
-		return -EINVAL;
-	}
+	if (count > image[minor].size_buf)
+		count = image[minor].size_buf;
+
+	retval = __copy_from_user(image[minor].kern_buf, buf,
+				  (unsigned long)count);
+	if (retval != 0)
+		copied = (copied - retval);
+	else
+		copied = count;
+
+	copied = vme_master_write(image[minor].resource, image[minor].kern_buf,
+				  copied, *ppos);
 
 	return copied;
 }
-- 
1.8.3.1
