Date:   Wed, 22 Jul 2020 18:39:03 +0100
From:   Al Viro <viro@...iv.linux.org.uk>
To:     David Laight <David.Laight@...lab.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>
Subject: Re: [PATCH 04/18] csum_and_copy_..._user(): pass 0xffffffff instead
 of 0 as initial sum

On Wed, Jul 22, 2020 at 04:17:02PM +0000, David Laight wrote:
> > David, do you *ever* bother to RTFS?  I mean, competent supercilious twits
> > are annoying, but at least with those you can generally assume that what
> > they say makes sense and has some relation to reality.  You, OTOH, keep
> > spewing utter bollocks, without ever lowering yourself to checking if your
> > guesses have anything to do with the reality.  With supercilious twit part
> > proudly on the display - you do speak with confidence, and the way you
> > dispense the oh-so-valuable advice to everyone around...
> 
> Yes, I do look at the code.
> I've actually spent a lot of time looking at the x86 checksum code.
> I've posted a patch for a version that is about twice as fast as the
> current one on a large range of x86 cpus.
> 
> Possibly I meant the 32bit reduction inside csum_add()
> rather than what csum_fold() does.

Really?
/* 32-bit add with end-around carry: adcl $0 folds the carry bit back in */
static inline unsigned add32_with_carry(unsigned a, unsigned b)
{
        asm("addl %2,%0\n\t"
            "adcl $0,%0"
            : "=r" (a)
            : "0" (a), "rm" (b));
        return a;
}
/* csum_add() is just that end-around-carry add applied to __wsum values */
static inline __wsum csum_add(__wsum csum, __wsum addend)
{
        return (__force __wsum)add32_with_carry((__force unsigned)csum,
                                                (__force unsigned)addend);
}
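
For contrast, csum_fold() is the separate 32-bit to 16-bit reduction being
distinguished from csum_add() above; a rough plain-C sketch of that step
(illustrative name fold_csum32, ordinary stdint types instead of the kernel's
__wsum/__sum16, not the x86 asm version) would be:

#include <stdint.h>

/* Reduce a 32-bit one's-complement partial sum to the final 16-bit
 * checksum.  Two folds suffice: the first can leave at most one carry
 * in bit 16, which the second fold absorbs. */
static inline uint16_t fold_csum32(uint32_t sum)
{
        sum = (sum & 0xffff) + (sum >> 16);     /* fold high half into low */
        sum = (sum & 0xffff) + (sum >> 16);     /* fold the possible carry */
        return (uint16_t)~sum;                  /* one's-complement result */
}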

I would love to see your patch, anyway, along with the testcases and performance
comparison.

> Having worked on the internals of SYSV, NetBSD and Linux I probably
> forget the exact names for a few things.

That's usually dealt with by a few minutes with grep and vi...

> The brain can only hold so much information.

Bravo.  "I can't be arsed to check anything" spun into a claim of superior
experience.

What it means in practice is that your output is garbage that _might_ be
untangled into something meaningful if the reader manages to guess the
substitutions.  Provided, that is, that the reconstruction doesn't turn out
to be a composite of things applying to different versions of different
kernels and valid for none of them...
