Message-ID: <20160210105843.GD23914@pd.tnic>
Date:	Wed, 10 Feb 2016 11:58:43 +0100
From:	Borislav Petkov <bp@...en8.de>
To:	"Luck, Tony" <tony.luck@...el.com>
Cc:	Ingo Molnar <mingo@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andy Lutomirski <luto@...nel.org>,
	Dan Williams <dan.j.williams@...el.com>, elliott@....com,
	Brian Gerst <brgerst@...il.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, linux-nvdimm@...1.01.org, x86@...nel.org
Subject: Re: [PATCH v10 3/4] x86, mce: Add __mcsafe_copy()

On Tue, Feb 09, 2016 at 03:15:57PM -0800, Luck, Tony wrote:
> > You can save yourself this MOV here in what is, I'm assuming, the
> > general likely case where @src is aligned and do:
> > 
> >         /* check for bad alignment of source */
> >         testl $7, %esi
> >         /* already aligned? */
> >         jz 102f
> > 
> >         movl %esi,%ecx
> >         subl $8,%ecx
> >         negl %ecx
> >         subl %ecx,%edx
> > 0:      movb (%rsi),%al
> >         movb %al,(%rdi)
> >         incq %rsi
> >         incq %rdi
> >         decl %ecx
> >         jnz 0b
> 
> The "testl $7, %esi" just checks the low three bits ... it doesn't
> change %esi.  But the code from the "subl $8" on down assumes that
> %ecx is a number in [1..7] as the count of bytes to copy until we
> achieve alignment.

Grr, sorry about that, I actually forgot to copy-paste the AND:

        /* check for bad alignment of source */
        testl $7, %esi
        jz 102f                         /* already aligned */

        movl %esi,%ecx
        andl $7,%ecx                    /* ecx = low three bits of src */
        subl $8,%ecx
        negl %ecx                       /* ecx = 8 - (src & 7), bytes until aligned */
        subl %ecx,%edx                  /* take them off the total count */
0:      movb (%rsi),%al
        movb %al,(%rdi)
        incq %rsi
        incq %rdi
        decl %ecx
        jnz 0b

I'm basically proposing to move the unlikely case out of line and
optimize the likely one.
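
To spell out what that prologue computes, here's a rough C equivalent
(just a sketch of the arithmetic, not the kernel code; the
copy_until_aligned() helper and its double-pointer interface are made
up for illustration):

#include <stddef.h>
#include <stdint.h>

/*
 * If @src is not 8-byte aligned, copy 8 - (src & 7) single bytes, i.e.
 * between 1 and 7, advance both pointers and shrink the remaining
 * length accordingly.  Assumes len is at least 8.
 */
static size_t copy_until_aligned(unsigned char **dst,
                                 const unsigned char **src, size_t len)
{
        unsigned int head = (uintptr_t)*src & 7;        /* movl + andl $7 */

        if (!head)                                      /* jz 102f */
                return len;

        head = 8 - head;                                /* subl $8 + negl */
        len -= head;                                    /* subl %ecx,%edx */

        while (head--) {                                /* the byte loop at 0: */
                *(*dst)++ = *(*src)++;
        }

        return len;
}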

> So your "movl %esi,%ecx" needs to be something that just copies the
> low three bits and zeroes the high part of %ecx.  Is there a cute
> way to do that in x86 assembler?

We could play some funky games with byte-sized moves, but those are
generally slower anyway, so doing the default operand-size thing should
be OK.
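
FWIW, in C terms the movl/andl pair above is simply this (a sketch, not
the patch code; the helper name is made up):

#include <stdint.h>

/*
 * Low three bits of the source address.  The 32-bit operations are
 * enough: on x86-64, writing a 32-bit register zeroes the upper half of
 * the full 64-bit register, so %rcx ends up cleanly zero-extended and
 * no byte-sized moves are needed.
 */
static inline unsigned int src_misalignment(const void *src)
{
        return (unsigned int)(uintptr_t)src & 7;
}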

> I copied that loop from arch/x86/lib/copy_user_64.S:__copy_user_nocache().
> I guess the answer depends on whether you generally copy enough
> cache lines to save enough time to cover the cost of saving and
> restoring those registers.

Well, that function will run on modern hw with a stack engine, so I'd
assume those 4 pushes and pops would be paid for by the increased
register count available for the data shuffling.

But one could take that function out and do some microbenchmarking with
different sizes, once with the current version and once with the pushes
and pops of r1[2-5], to see where the break-even is.
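
Something like this userspace sketch would do for a first approximation
(purely illustrative: copy_a()/copy_b() are hypothetical stand-ins for
the two __mcsafe_copy() variants, here both just memcpy()):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define ITERS	100000

/* stand-in for the current version */
static void copy_a(void *dst, const void *src, size_t len)
{
        memcpy(dst, src, len);
}

/* stand-in for the version that pushes/pops r12-r15 */
static void copy_b(void *dst, const void *src, size_t len)
{
        memcpy(dst, src, len);
}

static uint64_t ns_now(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static uint64_t bench(void (*copy)(void *, const void *, size_t),
                      void *dst, const void *src, size_t len)
{
        uint64_t start;
        int i;

        start = ns_now();
        for (i = 0; i < ITERS; i++) {
                copy(dst, src, len);
                /* keep the compiler from optimizing the copy away */
                asm volatile("" : : "r" (dst) : "memory");
        }

        return (ns_now() - start) / ITERS;
}

int main(void)
{
        static char src[1 << 16], dst[1 << 16];
        size_t sizes[] = { 8, 64, 256, 1024, 4096, 16384, 65536 };
        size_t i;

        memset(src, 0x5a, sizeof(src));

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("%6zu bytes: A %6llu ns/copy, B %6llu ns/copy\n",
                       sizes[i],
                       (unsigned long long)bench(copy_a, dst, src, sizes[i]),
                       (unsigned long long)bench(copy_b, dst, src, sizes[i]));

        return 0;
}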

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
