Date:   Fri, 8 Mar 2019 22:31:56 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Josh Poimboeuf <jpoimboe@...hat.com>
Cc:     torvalds@...ux-foundation.org, tglx@...utronix.de, hpa@...or.com,
        julien.thierry@....com, will.deacon@....com, luto@...capital.net,
        mingo@...nel.org, catalin.marinas@....com, james.morse@....com,
        valentin.schneider@....com, brgerst@...il.com, luto@...nel.org,
        bp@...en8.de, dvlasenk@...hat.com, linux-kernel@...r.kernel.org,
        dvyukov@...gle.com, rostedt@...dmis.org
Subject: Re: [PATCH 18/20] objtool: Add UACCESS validation

On Fri, Mar 08, 2019 at 03:02:09PM -0600, Josh Poimboeuf wrote:
> > +static const char *uaccess_safe_builtin[] = {
> > +	/* KASAN */
> 
> A short comment would be good here, something describing why a function
> might be added to the list.

There is one; but I'm thinking it might be too short?
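Maybe something like this above the array, then (just a stab at the wording):

  /*
   * Functions on this list are known to be safe to call with AC=1,
   * i.e. between a STAC and its matching CLAC: mostly compiler
   * instrumentation (KASAN, KCOV, UBSAN) that gets inserted into the
   * uaccess region behind our back, plus a few assembly helpers.
   */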

> > +	"memset_orig", /* XXX why not memset_erms */
> > +	"__memset",
> > +	"kasan_poison_shadow",
> > +	"kasan_unpoison_shadow",
> > +	"__asan_poison_stack_memory",
> > +	"__asan_unpoison_stack_memory",

Those are gone.

> > +	"kasan_report",

That is the main kasan_report() function, and it is, as the comment says,
for KASAN :-)

> > +	"check_memory_region",

That's the common check behind __asan_{load,store}N.

> > +	/* KASAN out-of-line */
> > +	"__asan_loadN_noabort",
> > +	"__asan_load1_noabort",
> > +	"__asan_load2_noabort",
> > +	"__asan_load4_noabort",
> > +	"__asan_load8_noabort",
> > +	"__asan_load16_noabort",
> > +	"__asan_storeN_noabort",
> > +	"__asan_store1_noabort",
> > +	"__asan_store2_noabort",
> > +	"__asan_store4_noabort",
> > +	"__asan_store8_noabort",
> > +	"__asan_store16_noabort",

These are, again as the comment suggests, the out-of-line KASAN ABI
calls.
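I.e. with out-of-line instrumentation the compiler emits one call per
access; roughly (just a sketch of the shape, not the real codegen):

  /* sketch: what the compiler conceptually emits for an 8-byte load */
  void __asan_load8_noabort(unsigned long addr);	/* shadow-memory check */

  long read_word(const long *p)
  {
          /* this call ends up running with AC=1 when p sits inside a
             uaccess region, hence the whitelist entry above */
          __asan_load8_noabort((unsigned long)p);
          return *p;
  }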

> > +	/* KASAN in-line */
> > +	"__asan_report_load_n_noabort",
> > +	"__asan_report_load1_noabort",
> > +	"__asan_report_load2_noabort",
> > +	"__asan_report_load4_noabort",
> > +	"__asan_report_load8_noabort",
> > +	"__asan_report_load16_noabort",
> > +	"__asan_report_store_n_noabort",
> > +	"__asan_report_store1_noabort",
> > +	"__asan_report_store2_noabort",
> > +	"__asan_report_store4_noabort",
> > +	"__asan_report_store8_noabort",
> > +	"__asan_report_store16_noabort",

The in-line KASAN ABI

Also, can I just say that {load,store}N vs {load,store}_n bugs the
hell out of me?
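For contrast, the in-line mode open-codes the shadow check and only calls
into the kernel to report; a sketch (kasan_shadow() is just a placeholder
here, not a real helper):

  void __asan_report_load8_noabort(unsigned long addr);
  unsigned char *kasan_shadow(const void *addr);	/* placeholder for the sketch */

  long read_word_inline(const long *p)
  {
          /* the shadow lookup/compare is inlined by the compiler ... */
          if (*kasan_shadow(p))
                  /* ... only the report is an actual call */
                  __asan_report_load8_noabort((unsigned long)p);
          return *p;
  }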

> > +	/* KCOV */
> > +	"write_comp_data",

That one is the helper the trace_*cmp*() hooks below use to log the
comparison operands.

> > +	"__sanitizer_cov_trace_pc",
> > +	"__sanitizer_cov_trace_const_cmp1",
> > +	"__sanitizer_cov_trace_const_cmp2",
> > +	"__sanitizer_cov_trace_const_cmp4",
> > +	"__sanitizer_cov_trace_const_cmp8",
> > +	"__sanitizer_cov_trace_cmp1",
> > +	"__sanitizer_cov_trace_cmp2",
> > +	"__sanitizer_cov_trace_cmp4",
> > +	"__sanitizer_cov_trace_cmp8",

KCOV ABI
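I.e. with CONFIG_KCOV the compiler sprinkles these in; roughly (sketch
only):

  void __sanitizer_cov_trace_pc(void);		/* one call per basic block */
  void __sanitizer_cov_trace_const_cmp4(unsigned int c, unsigned int v);
						/* compare against a constant */

  int is_magic(unsigned int x)
  {
          __sanitizer_cov_trace_pc();
          __sanitizer_cov_trace_const_cmp4(0xdeadbeef, x);
          return x == 0xdeadbeef;
  }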

> > +	/* UBSAN */
> > +	"ubsan_type_mismatch_common",

implementation

> > +	"__ubsan_handle_type_mismatch",
> > +	"__ubsan_handle_type_mismatch_v1",

UBSAN ABI
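The split being, roughly (signatures simplified, see lib/ubsan.c):

  void ubsan_type_mismatch_common(void *data, void *ptr);  /* shared implementation */

  /* the ABI entry points the compiler calls are thin wrappers around it: */
  void __ubsan_handle_type_mismatch(void *data, void *ptr)
  {
          ubsan_type_mismatch_common(data, ptr);
  }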

> > +	/* misc */
> > +	"csum_partial_copy_generic",
> > +	"__memcpy_mcsafe",
> > +	"ftrace_likely_update", /* CONFIG_TRACE_BRANCH_PROFILING */
> > +	NULL
> > +};

> > +		func = find_symbol_by_name(file->elf, *name);
> 
> This won't work if the function name changes due to IPA optimizations.
> I assume these are all global functions so maybe it's fine?

With one or two exceptions, yep.
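FWIW the marking itself is just (simplified from the patch):

  for (name = uaccess_safe_builtin; *name; name++) {
          func = find_symbol_by_name(file->elf, *name);
          if (!func)
                  continue;		/* not built in this config */

          func->uaccess_safe = true;	/* calls to it are OK with AC=1 */
  }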

> These gotos make my head spin.  Again I would much prefer a small amount
> of code duplication over this.

I didn't think the code was that bad once you see the end result, but
sure, I can try something else.

> > +++ b/tools/objtool/special.c
> > @@ -42,6 +42,7 @@
> >  #define ALT_NEW_LEN_OFFSET	11
> >  
> >  #define X86_FEATURE_POPCNT (4*32+23)
> > +#define X86_FEATURE_SMAP   (9*32+20)
> >  
> >  struct special_entry {
> >  	const char *sec;
> > @@ -107,8 +108,15 @@ static int get_alt_entry(struct elf *elf
> >  		 * It has been requested that we don't validate the !POPCNT
> >  		 * feature path which is a "very very small percentage of
> >  		 * machines".
> > +		 *
> > +		 * Also, unconditionally enable SMAP; this avoids seeing paths
> > +		 * that pass through the STAC alternative and through the CLAC
> > +		 * NOPs.
> 
> Why is this a problem?

Because otherwise we'd see 'obvious' violations like:

STAC; .... RET; # an AC=1 leak

.... CLAC; # spurious CLAC

If we do the STAC we must also do the CLAC. If we don't do the STAC we
must also not do the CLAC.
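
At the C level the rule is that every path pairs them up; a sketch
(user_access() is a stand-in for the actual inlined access):

  int put_word(unsigned long __user *ptr, unsigned long val)
  {
          int err;

          stac();				/* STAC alternative: AC=1 from here */
          err = user_access(ptr, val);		/* stand-in for the real access */
          clac();				/* CLAC alternative: AC=0 again */

          /* no path may return with AC=1 (leak) and no path may
             reach the CLAC without the STAC (spurious CLAC) */
          return err;
  }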


> > +		 *
> > +		 * XXX: We could do this for all binary NOP/single-INSN
> > +		 * alternatives.
> 
> Same question here.

In general, validating NOPs isn't too interesting, so all NOP/INSN
binary alternatives could be forced on.

> >  		 */
> > -		if (feature == X86_FEATURE_POPCNT)
> > +		if (feature == X86_FEATURE_POPCNT || feature == X86_FEATURE_SMAP)
> >  			alt->skip_orig = true;
> >  	}

I've actually changed this to depend on --uaccess: when set we force
FEATURE_SMAP on, otherwise we force it off.
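So the check ends up as something like (sketch; 'uaccess' being the
global flag set by the option):

  if (feature == X86_FEATURE_POPCNT)
          alt->skip_orig = true;

  if (feature == X86_FEATURE_SMAP) {
          if (uaccess)
                  alt->skip_orig = true;	/* --uaccess: validate the STAC/CLAC path */
          else
                  alt->skip_alt = true;		/* otherwise pretend SMAP doesn't exist */
  }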
