Date:   Thu, 7 Jan 2021 18:40:58 +0000
From:   Al Viro <viro@...iv.linux.org.uk>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     kernel test robot <oliver.sang@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...en8.de>,
        Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        kernel test robot <lkp@...el.com>,
        "Huang, Ying" <ying.huang@...el.com>,
        Feng Tang <feng.tang@...el.com>, zhengjun.xing@...el.com
Subject: Re: [x86] d55564cfc2: will-it-scale.per_thread_ops -5.8% regression

On Thu, Jan 07, 2021 at 06:33:58PM +0000, Al Viro wrote:
> On Thu, Jan 07, 2021 at 09:43:54AM -0800, Linus Torvalds wrote:
> 
> > Before, it would do the whole CLAC/STAC dance inside that loop for
> > every entry (and with that commit d55564cfc22 it would be a function
> > call, of course).
> > 
> > Can you verify that this fixes the regression (and in fact I'd expect
> > it to improve that test-case)?
> 
> I'm not sure it's the best approach, TBH.  How about simply
>         for (walk = head; walk; ufds += walk->len, walk = walk->next) {
> 		if (copy_to_user(ufds, walk->entries,
> 				 walk->len * sizeof(struct pollfd)))
> 			goto out_fds;
>         }
> in there?  It's both simpler (obviously matches the copyin side) and
> might very well be faster...
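
A sketch of the two copyout schemes under discussion (poll_list is
paraphrased from fs/select.c; the two helper functions are
hypothetical stand-ins, not code from this thread):

#include <linux/errno.h>
#include <linux/poll.h>     /* struct pollfd */
#include <linux/uaccess.h>  /* __put_user(), copy_to_user() */

struct poll_list {
	struct poll_list *next;
	int len;
	struct pollfd entries[];
};

/* Old scheme: one __put_user() per entry.  The CLAC/STAC dance runs
 * on every iteration, and after commit d55564cfc22 each call is an
 * out-of-line function on top of that. */
static int copyout_revents_per_entry(struct poll_list *head,
				     struct pollfd __user *ufds)
{
	struct poll_list *walk;
	int j;

	for (walk = head; walk; walk = walk->next)
		for (j = 0; j < walk->len; j++, ufds++)
			if (__put_user(walk->entries[j].revents,
				       &ufds->revents))
				return -EFAULT;
	return 0;
}

/* Suggested scheme: one copy_to_user() per chunk.  The user-access
 * window is opened and closed once per chunk, and the copy loop
 * inside copy_to_user() is already tuned for bulk data. */
static int copyout_bulk(struct poll_list *head,
			struct pollfd __user *ufds)
{
	struct poll_list *walk;

	for (walk = head; walk; ufds += walk->len, walk = walk->next)
		if (copy_to_user(ufds, walk->entries,
				 walk->len * sizeof(struct pollfd)))
			return -EFAULT;
	return 0;
}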

Something like

do_sys_poll(): do the wholesale copyout

Don't bother with patching up just one field - 16 bits out of each 64.
The amount of memory traffic is not going to be greater (might be
smaller, actually) and the loop in copy_to_user() is optimized for
bulk copy.

Signed-off-by: Al Viro <viro@...iv.linux.org.uk>
---
diff --git a/fs/select.c b/fs/select.c
index ebfebdfe5c69..288633053c7f 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -1011,12 +1011,9 @@ static int do_sys_poll(struct pollfd __user *ufds, unsigned int nfds,
 	fdcount = do_poll(head, &table, end_time);
 	poll_freewait(&table);
 
-	for (walk = head; walk; walk = walk->next) {
-		struct pollfd *fds = walk->entries;
-		int j;
-
-		for (j = 0; j < walk->len; j++, ufds++)
-			if (__put_user(fds[j].revents, &ufds->revents))
-				goto out_fds;
+	for (walk = head; walk; ufds += walk->len, walk = walk->next) {
+		if (copy_to_user(ufds, walk->entries,
+				 walk->len * sizeof(struct pollfd)))
+			goto out_fds;
 	}
 

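For reference, the "16 bits out of each 64" in the commit message is
the revents field of struct pollfd; a sketch of the layout as defined
in the UAPI <asm-generic/poll.h>, with size comments added:

struct pollfd {
	int   fd;       /* 4 bytes */
	short events;   /* 2 bytes */
	short revents;  /* 2 bytes: all the old loop ever wrote back */
};

Each entry is 8 bytes, so writing just revents dirties the same cache
lines as copying whole entries, and fd and events are exactly the
values userspace passed in, which is what makes the wholesale copyout
safe.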