Message-ID: <561D1C53.8080302@list.ru>
Date:	Tue, 13 Oct 2015 17:59:31 +0300
From:	Stas Sergeev <stsp@...t.ru>
To:	Andy Lutomirski <luto@...nel.org>, x86@...nel.org,
	linux-kernel@...r.kernel.org
Cc:	Brian Gerst <brgerst@...il.com>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Borislav Petkov <bp@...en8.de>,
	Cyrill Gorcunov <gorcunov@...il.com>,
	Pavel Emelyanov <xemul@...allels.com>
Subject: Re: [RFC 3/4] x86/signal/64: Re-add support for SS in the 64-bit
 signal context

On 13.10.2015 04:04, Andy Lutomirski wrote:
> + * UC_SIGCONTEXT_SS will be set when delivering 64-bit or x32 signals on
> + * kernels that save SS in the sigcontext.  All kernels that set
> + * UC_SIGCONTEXT_SS will correctly restore at least the low 32 bits of esp
> + * regardless of SS (i.e. they implement espfix).
Is this comment relevant? I think neither signal delivery
nor sigreturn was affected by esp corruption, or were they?
I guess you suggest using that flag as the detection for
espfix, but I don't think that is relevant: you may also need
to know about espfix outside of a signal handler.
In fact, I don't think espfix needs any run-time detection,
because with espfix the stack fault simply will not happen,
and that's all.
I think it is a matter of compile-time detection only.
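
For illustration only (not part of the patch): a minimal userspace
sketch of probing the flag at delivery time. The UC_SIGCONTEXT_SS
value below (0x2) is an assumption based on this RFC, not something
taken from installed headers.

#include <signal.h>
#include <ucontext.h>
#include <unistd.h>

#ifndef UC_SIGCONTEXT_SS
#define UC_SIGCONTEXT_SS 0x2	/* assumed value from this RFC */
#endif

static void handler(int sig, siginfo_t *si, void *ctx)
{
	ucontext_t *uc = ctx;

	(void)sig; (void)si;
	/* Per the RFC, set by kernels that save SS in the sigcontext. */
	if (uc->uc_flags & UC_SIGCONTEXT_SS)
		write(2, "kernel saves SS in sigcontext\n", 30);
	else
		write(2, "old-style sigcontext, no SS slot\n", 33);
}

int main(void)
{
	struct sigaction sa = { .sa_sigaction = handler,
				.sa_flags = SA_SIGINFO };

	sigaction(SIGUSR1, &sa, 0);
	raise(SIGUSR1);
	return 0;
}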

> + *
> + * Kernels that set UC_SIGCONTEXT_SS will also set UC_STRICT_RESTORE_SS
> + * when delivering a signal that came from 64-bit code.
> + *
> + * Sigreturn modifies its behavior depending on the UC_STRICT_RESTORE_SS
> + * flag.  If UC_STRICT_RESTORE_SS is set, then the SS value in the
> + * signal context is restored verbatim.  If UC_STRICT_RESTORE_SS is not
> + * set, the CS value in the signal context refers to a 64-bit code
> + * segment, and the signal context's SS value is invalid, it will be
> + * replaced by a flat 32-bit selector.
> + *
> + * This behavior serves two purposes.  It ensures that older programs
> + * that are unaware of the signal context's SS slot and either construct
> + * a signal context from scratch or that catch signals from segmented
> + * contexts and change CS to a 64-bit selector won't crash due to a bad
> + * SS value.  It also ensures that signal handlers that do not modify
> + * the signal context at all return back to the exact CS and SS state
> + * that they came from.
Do you need a second flag for this?
IIRC non-restoring was needed because:
1. dosemu saves SS to a different place.
2. If you save it yourself, dosemu can invalidate it, but it
cannot replace it with the right one because of 1.
IMHO, to solve this you need _either_ the second flag or the
heuristic, but not both.

With the new flag:
Just don't set it by default, and new programs will set it themselves.
Old programs are unaffected.
When it is set, SS should always be restored.
I prefer this approach.

With the heuristic:
Save SS yourself on delivery, and, if it happens to be invalid on
sigreturn, replace it with a better one.
Old programs are unaffected because they use iret anyway, and that
iret happens _after_ sigreturn.
New programs will never leave an invalid SS in the right sigcontext slot.

So why have you chosen to have both the new flag UC_STRICT_RESTORE_SS
and the heuristic? (Both alternatives are sketched below.)
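
To make the comparison concrete, here is a rough sketch of the two
policies in plain C. This is not the kernel code: the
UC_STRICT_RESTORE_SS value and the 0x2b flat selector (__USER_DS on
x86_64) are assumptions, and the validity checks are passed in as
plain flags rather than done for real.

#include <stdio.h>

#define UC_STRICT_RESTORE_SS	0x4	/* assumed value, per this RFC */
#define FLAT_USER_DS		0x2b	/* __USER_DS on x86_64, assumed */

/* Option A: explicit flag only.  New programs set the flag and get SS
 * restored verbatim; everything else keeps the old no-restore behavior. */
static unsigned short restore_ss_flag_only(unsigned long uc_flags,
					   unsigned short saved_ss,
					   unsigned short current_ss)
{
	if (uc_flags & UC_STRICT_RESTORE_SS)
		return saved_ss;
	return current_ss;
}

/* Option B: heuristic only, roughly what the quoted comment describes.
 * SS is always saved at delivery; on sigreturn an unusable SS under a
 * 64-bit CS is replaced with a flat selector, otherwise it is restored. */
static unsigned short restore_ss_heuristic(unsigned short saved_ss,
					   int saved_ss_is_usable,
					   int saved_cs_is_64bit)
{
	if (!saved_ss_is_usable && saved_cs_is_64bit)
		return FLAT_USER_DS;
	return saved_ss;
}

int main(void)
{
	/* Old program, invalid SS slot, 64-bit CS: heuristic falls back. */
	printf("heuristic -> %#x\n", restore_ss_heuristic(0, 0, 1));
	/* New program that set the strict flag: 0x2f (an LDT-style
	 * selector, entry 5, RPL 3) is restored verbatim. */
	printf("flag      -> %#x\n",
	       restore_ss_flag_only(UC_STRICT_RESTORE_SS, 0x2f, 0x2b));
	return 0;
}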

> This is a bit risky, and another option would be to do nothing at
> all.
Andy, could you please stop pretending there are no other solutions?
You do not have to like them. You do not have to implement them.
But your continuous re-assertions that they do not exist make me
feel a bit uncomfortable after I have spelled them out many times.

> Stas, what do you think?  Could you test this?
I think I'll get to testing this only on the weekend.
In the meantime, the question about the safety of leaving an LDT SS
in 64-bit mode still makes me wonder. Perhaps, instead of re-iterating
it here, you could describe this all in the patch comments? Namely:
- How will an LDT SS interact with nested signals?
- With syscalls?
- With siglongjmp()?
- With another thread? Do we have a per-thread or a per-process LDT
these days? If the LDT is per-process, my question is what will happen
if another thread invalidates an LDT entry while we are in 64-bit mode.
If the LDT is per-thread, there is no such question.
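
For context, this is roughly how a program ends up with an LDT-based
SS in the first place: install a data segment in the LDT with
modify_ldt() and load its selector into SS. A minimal sketch follows;
it is not dosemu's actual code, and the SS load itself is only shown
as a comment.

#include <asm/ldt.h>		/* struct user_desc */
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct user_desc d;

	memset(&d, 0, sizeof(d));
	d.entry_number   = 0;		/* LDT slot 0 */
	d.base_addr      = 0;
	d.limit          = 0xfffff;
	d.seg_32bit      = 1;
	d.limit_in_pages = 1;
	d.useable        = 1;		/* contents=0, read_exec_only=0: r/w data */

	/* func 1 = write an LDT entry (no glibc wrapper, raw syscall) */
	if (syscall(SYS_modify_ldt, 1, &d, sizeof(d)) != 0) {
		perror("modify_ldt");
		return 1;
	}

	/*
	 * Selector for LDT entry 0: index 0, TI=1 (LDT), RPL=3 -> 0x07.
	 * A dosemu-style program would now load it into SS, e.g.:
	 *	asm volatile("mov %0, %%ss" : : "r" ((unsigned short)0x07));
	 * which is exactly the state whose interaction with signals,
	 * syscalls, siglongjmp() and other threads is asked about above.
	 */
	printf("LDT data segment installed, selector 0x07\n");
	return 0;
}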

> If SS starts out invalid (this can happen if the signal was caused
> by an IRET fault or was delivered on the way out of set_thread_area
> or modify_ldt), then IRET to the signal handler can fail, eventually
> killing the task.
Is this signal-specific? I.e. the return from IRQs happens via iret
too. So if we are running with an invalid SS in 64-bit mode, can the
iret from an IRQ also cause the problem?


On an off-topic note: there was recently a patch from you that
disables vm86() based on mmap_min_addr. I've found that dosemu, when
started as root, could override mmap_min_addr. I guess this will no
longer work, right? Not a big regression, just something to know and
document.
