Message-ID: <20150621065523.GA31829@gmail.com>
Date: Sun, 21 Jun 2015 08:55:23 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Alexey Dobriyan <adobriyan@...il.com>
Cc: hpa@...or.com, x86@...nel.org, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Denys Vlasenko <dvlasenk@...hat.com>,
Andy Lutomirski <luto@...nel.org>,
Brian Gerst <brgerst@...il.com>
Subject: Re: [PATCH 2/2] x86: fix incomplete clear by clear_user()

* Alexey Dobriyan <adobriyan@...il.com> wrote:

> clear_user() used MOVQ+MOVB, and if the MOVQ faults, the code simply exits
> and honestly returns the remaining length. For an unaligned area, the
> unaligned remainder is counted towards the return value (correctly) but is
> not cleared (lazy code, at least):
>
> clear_user(p + 4096 - 4, 8) = 8
>
> No one would have noticed, but the addition of REP STOSB to the clear_user()
> repertoire creates a problem: REP STOSB does everything correctly, clearing
> and counting up to the last possible byte, but the REP STOSQ and MOVQ
> variants DO NOT:
>
> MOVQ       clear_user(p + 4096 - 4, 8) = 8
> REP STOSQ  clear_user(p + 4096 - 4, 8) = 8
> REP STOSB  clear_user(p + 4096 - 4, 8) = 4
>
> The patch fixes the incomplete clear on 32-bit and in the 64-bit REP STOSQ
> and MOVQ variants.
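
For anyone who wants to visualise the discrepancy the quoted numbers describe,
below is a small userspace model, purely illustrative and not the kernel code:
it mimics a quadword-at-a-time clear versus a byte-at-a-time clear against a
destination where only the first 4 of 8 bytes are writable, and it reproduces
the 8-vs-4 return values quoted above. The function names and the 'ok' limit
are made up for the sketch.

/*
 * Illustrative model only (not the kernel code): compare the return value of
 * a quadword-at-a-time clear with a byte-at-a-time clear when only the first
 * 'ok' bytes of the destination are writable.  An 8-byte store that would
 * cross the writable limit "faults" as a whole, like a MOVQ that runs into an
 * unmapped page.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* MOVQ-style: give up on the first faulting 8-byte store, so the writable
 * bytes in front of the fault boundary are neither cleared nor subtracted
 * from the return value. */
static size_t clear_by_quadwords(char *dst, size_t len, size_t ok)
{
	size_t done = 0;

	while (len - done >= 8) {
		if (done + 8 > ok)
			return len - done;	/* whole quadword faults */
		memset(dst + done, 0, 8);
		done += 8;
	}
	while (done < len) {
		if (done >= ok)
			return len - done;
		dst[done++] = 0;
	}
	return 0;
}

/* REP STOSB-style: clears and counts up to the exact faulting byte. */
static size_t clear_by_bytes(char *dst, size_t len, size_t ok)
{
	size_t done = 0;

	while (done < len) {
		if (done >= ok)
			return len - done;
		dst[done++] = 0;
	}
	return 0;
}

int main(void)
{
	char buf[8];

	/* clear_user(p + 4096 - 4, 8): only the first 4 bytes are writable. */
	printf("MOVQ-style:      %zu\n", clear_by_quadwords(buf, 8, 4));
	printf("REP STOSB-style: %zu\n", clear_by_bytes(buf, 8, 4));
	return 0;
}

Compiled and run, it prints 8 for the MOVQ-style path and 4 for the
REP STOSB-style path, matching the table quoted above.
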
So please flip the order of the changes around so that this inconsistency is
never observable: i.e. first update the existing clearing method, then move it
and introduce the new variants without having to patch them afterwards.

Thanks,
Ingo