Message-ID: <CAG_fn=WvjRCNH+F65QuuCnrmLcicz1zu0s-uu8DrmUtr0tcZ7Q@mail.gmail.com>
Date:   Mon, 21 Mar 2022 14:17:44 +0100
From:   Alexander Potapenko <glider@...gle.com>
To:     Dmitry Vyukov <dvyukov@...gle.com>
Cc:     Alexander Viro <viro@...iv.linux.org.uk>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andrey Konovalov <andreyknvl@...gle.com>,
        Andy Lutomirski <luto@...nel.org>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
        Christoph Hellwig <hch@....de>,
        Christoph Lameter <cl@...ux.com>,
        David Rientjes <rientjes@...gle.com>,
        Eric Dumazet <edumazet@...gle.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Ilya Leoshkevich <iii@...ux.ibm.com>,
        Ingo Molnar <mingo@...hat.com>, Jens Axboe <axboe@...nel.dk>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Kees Cook <keescook@...omium.org>,
        Marco Elver <elver@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        Pekka Enberg <penberg@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Petr Mladek <pmladek@...e.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Vasily Gorbik <gor@...ux.ibm.com>,
        Vegard Nossum <vegard.nossum@...cle.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Linux Memory Management List <linux-mm@...ck.org>,
        Linux-Arch <linux-arch@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 13/43] kmsan: add KMSAN runtime core

> > +       KMSAN_WARN_ON(!src_slots || !dst_slots);
> > +       KMSAN_WARN_ON((src_slots < 1) || (dst_slots < 1));
>
> The above 2 checks look equivalent.
Right, I'll drop the first one.

> > +       KMSAN_WARN_ON((src_slots - dst_slots > 1) ||
> > +                     (dst_slots - src_slots > 1));
> > +       backwards = dst > src;
> > +       i = backwards ? min(src_slots, dst_slots) - 1 : 0;
> > +       iter = backwards ? -1 : 1;
> > +
> > +       align_shadow_src =
> > +               (u32 *)ALIGN_DOWN((u64)shadow_src, KMSAN_ORIGIN_SIZE);
> > +       for (step = 0; step < min(src_slots, dst_slots); step++, i += iter) {
> > +               KMSAN_WARN_ON(i < 0);
> > +               shadow = align_shadow_src[i];
> > +               if (i == 0) {
> > +                       /*
> > +                        * If |src| isn't aligned on KMSAN_ORIGIN_SIZE, don't
> > +                        * look at the first |src % KMSAN_ORIGIN_SIZE| bytes
> > +                        * of the first shadow slot.
> > +                        */
> > +                       skip_bits = ((u64)src % KMSAN_ORIGIN_SIZE) * 8;
> > +                       shadow = (shadow << skip_bits) >> skip_bits;
>
> Is this correct?...
> For the first slot we want to ignore some of the first (low) bits. To
> ignore low bits we need to shift right and then left, no?

Yes, you are right, I forgot about the endianness. Will try to add
some tests for this case.
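
For the record, a minimal userspace sketch of what the corrected
first-slot masking would look like on a little-endian machine
(ORIGIN_SIZE and mask_first_bytes() are illustrative stand-ins, not
the actual KMSAN code):

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for KMSAN_ORIGIN_SIZE (4-byte shadow slots). */
#define ORIGIN_SIZE 4

/*
 * On little-endian, the first bytes of the range occupy the LOW bits
 * of the u32 shadow slot, so skipping them means shifting right and
 * then left (the opposite order of the hunk above).
 */
static uint32_t mask_first_bytes(uint32_t shadow, uint64_t src)
{
        uint32_t skip_bits = (src % ORIGIN_SIZE) * 8;

        return (shadow >> skip_bits) << skip_bits;
}

int main(void)
{
        /* Byte 0 of the slot is 0xaa, byte 1 is 0xbb, and so on. */
        uint32_t shadow = 0xddccbbaa;

        /* src % ORIGIN_SIZE == 2: bytes 0xaa and 0xbb get cleared. */
        printf("%#x\n", mask_first_bytes(shadow, 2)); /* prints 0xddcc0000 */
        return 0;
}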

> > +               }
> > +               if (i == src_slots - 1) {
> > +                       /*
> > +                        * If |src + n| isn't aligned on
> > +                        * KMSAN_ORIGIN_SIZE, don't look at the last
> > +                        * |(src + n) % KMSAN_ORIGIN_SIZE| bytes of the
> > +                        * last shadow slot.
> > +                        */
> > +                       skip_bits = (((u64)src + n) % KMSAN_ORIGIN_SIZE) * 8;
> > +                       shadow = (shadow >> skip_bits) << skip_bits;
>
> Same here.
Done
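
And the mirror case for the last slot: on little-endian the trailing
bytes sit in the HIGH bits, so they are cleared by shifting left and
then right. A sketch along the same lines (tail_bytes, the number of
stale bytes at the end of the slot, is an illustrative parameter):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for KMSAN_ORIGIN_SIZE (4-byte shadow slots). */
#define ORIGIN_SIZE 4

/*
 * Clear the last tail_bytes bytes of a little-endian shadow slot.
 * tail_bytes must stay below ORIGIN_SIZE, since shifting a 32-bit
 * value by 32 is undefined behavior.
 */
static uint32_t mask_last_bytes(uint32_t shadow, uint32_t tail_bytes)
{
        uint32_t skip_bits = tail_bytes * 8;

        assert(tail_bytes < ORIGIN_SIZE);
        /* Trailing bytes are the HIGH bits: shift left, then right. */
        return (shadow << skip_bits) >> skip_bits;
}

int main(void)
{
        /* 0xddccbbaa with its top byte cleared: prints 0xccbbaa. */
        printf("%#x\n", mask_last_bytes(0xddccbbaa, 1));
        return 0;
}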

> This can be a bit shorter and w/o the temp var as:
>
> new_origin = kmsan_internal_chain_origin(old_origin);
> /*
> * kmsan_internal_chain_origin() may return
> * NULL, but we don't want to lose the previous
> * origin value.
> */
> if (!new_origin)
>    new_origin = old_origin;

Done.

>
>
> > +               }
> > +               if (shadow)
> > +                       origin_dst[i] = new_origin;
>
> Are we sure that origin_dst is aligned here?
Yes, kmsan_get_metadata(..., KMSAN_META_ORIGIN) always returns aligned pointers.
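
A tiny sketch of the alignment invariant being relied on here, under
the assumption stated above (origin pointers coming back from
kmsan_get_metadata() are KMSAN_ORIGIN_SIZE-aligned); is_slot_aligned()
is an illustrative helper, not a kernel API:

#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for KMSAN_ORIGIN_SIZE (4-byte origin slots). */
#define ORIGIN_SIZE 4

/*
 * If origin pointers are always ORIGIN_SIZE-aligned, each
 * origin_dst[i] store writes exactly one whole slot and never
 * straddles two.
 */
static int is_slot_aligned(const void *p)
{
        return (uintptr_t)p % ORIGIN_SIZE == 0;
}

int main(void)
{
        uint32_t origin_dst[4] = { 0 };

        assert(is_slot_aligned(origin_dst));
        origin_dst[1] = 0xcafe; /* safe whole-slot store */
        return 0;
}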



--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Managing directors: Paul Manicle, Liana Sebastian
Registration court and number: Hamburg, HRB 86891
Registered office: Hamburg


This e-mail is confidential. If you received this communication by
mistake, please don't forward it to anyone else, please erase all
copies and attachments, and please let me know that it has gone to the
wrong person.
