Message-ID: <20100212115947.GC8589@ghostprotocols.net>
Date:	Fri, 12 Feb 2010 09:59:47 -0200
From:	Arnaldo Carvalho de Melo <acme@...radead.org>
To:	Anton Blanchard <anton@...ba.org>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Paul Mackerras <paulus@...ba.org>, Ingo Molnar <mingo@...e.hu>,
	Frederic Weisbecker <fweisbec@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: perf annotate SEGVs

On Fri, Feb 12, 2010 at 07:17:24PM +1100, Anton Blanchard wrote:
> I think I understand a problem in perf annotate where I see random corruption
> (rb tree issues, glibc malloc failures, etc.).
> 
> The issue happens with zero-length symbols; in this particular case they
> are kernel functions written entirely in assembly, e.g. .copy_4K_page,
> .__copy_tofrom_user and .memcpy:
> 
>    Num:    Value          Size Type    Bind   Vis      Ndx Name
>  63516: c00000000004a774   212 FUNC    GLOBAL DEFAULT    1 .devm_ioremap_prot
>  69095: c00000000004a848     0 FUNC    GLOBAL DEFAULT    1 .copy_4K_page
>  62002: c00000000004aa00     0 FUNC    GLOBAL DEFAULT    1 .__copy_tofrom_user
>  50576: c00000000004b000     0 FUNC    GLOBAL DEFAULT    1 .memcpy
>  69557: c00000000004b278   176 FUNC    GLOBAL DEFAULT    1 .copy_in_user
>  51841: c00000000004b328   144 FUNC    GLOBAL DEFAULT    1 .copy_to_user
> 
> In symbol_filter we look at the length of each symbol:
> 
> static int symbol_filter(struct map *map __used, struct symbol *sym)
> ...
>                 const int size = (sizeof(*priv->hist) +
>                                  (sym->end - sym->start) * sizeof(u64));
>  
> And since start == end we create 0 bytes of space for the ip[] array.
> 
>        Later on, in hist_hit(), we then start indexing into this array:
> 
>        h->ip[offset]++;
> 
> Which then corrupts whatever is next in memory. With large assembly functions
> we corrupt a lot :)
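
Just to make the failure mode concrete, here is a minimal standalone
sketch (hypothetical, not the actual perf structs/functions) of what a
zero-sized symbol does to that allocation:

#include <stdint.h>
#include <stdlib.h>

struct sym_hist {
	uint64_t sum;
	uint64_t ip[];		/* one counter per byte of the symbol */
};

int main(void)
{
	/* .memcpy from the symtab above: zero-sized, start == end */
	uint64_t start = 0xc00000000004b000ULL, end = start;

	/* same size calculation as in symbol_filter(): no room for ip[] */
	struct sym_hist *h = calloc(1, sizeof(*h) +
				       (end - start) * sizeof(uint64_t));

	/* what hist_hit() then does for a sample 0x40 bytes in */
	uint64_t offset = 0x40;
	h->ip[offset]++;	/* writes well past the end of the allocation */

	free(h);
	return 0;
}
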
> 
> How should we fix this? Do we need to do a first pass through our symbols
> to fixup ->end before allocating the ->ip[] arrays?

We already have symbols__fixup_end() for doing that:

        /*
         * For misannotated, zeroed, ASM function sizes.
         */
        if (nr > 0) {
                symbols__fixup_end(&self->symbols[map->type]);
                if (kmap) {
                        /*
                         * We need to fixup this here too because we create new
                         * maps here, for things like vsyscall sections.
                         */
                        __map_groups__fixup_end(kmap->kmaps, map->type);
                }
        }
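
For reference, conceptually the fixup just walks the symbols sorted by
start address and extends each zero-sized one up to the next symbol
(a sketch of the idea only, not the exact code in util/symbol.c):

struct sym { uint64_t start, end; };

/* syms[] sorted by start address */
static void fixup_end(struct sym *syms, int nr)
{
	for (int i = 0; i + 1 < nr; i++)
		if (syms[i].end == syms[i].start)
			syms[i].end = syms[i + 1].start - 1;
	/* the last symbol has no successor, so it needs some heuristic,
	 * e.g. rounding its end up to the next page boundary */
}

With that applied, .copy_4K_page in your symtab ends right before
.__copy_tofrom_user at c00000000004aa00 and the ip[] array gets a sane
size.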

but, as you show, there are code paths that don't reach this part...

Investigating.

- Arnaldo
