Message-Id: <20160914163428.GA3990@naverao1-tp.localdomain>
Date: Wed, 14 Sep 2016 22:04:28 +0530
From: "Naveen N. Rao" <naveen.n.rao@...ux.vnet.ibm.com>
To: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Wang Nan <wangnan0@...wei.com>, linux-kernel@...r.kernel.org,
lizefan@...wei.com, pi3orama@....com
Subject: Re: [PATCH 1/3] tools include: Add uapi mman.h for each architecture
On 2016/09/14 10:52AM, Arnaldo Carvalho de Melo wrote:
> Em Wed, Sep 14, 2016 at 02:58:10PM +0530, Naveen N. Rao escreveu:
> > On 2016/09/12 06:15PM, Arnaldo Carvalho de Melo wrote:
> > > Em Mon, Sep 12, 2016 at 04:07:42PM -0300, Arnaldo Carvalho de Melo escreveu:
> > > So, please take a look at my perf/core branch, I applied 1/3 and 3/3,
> > > but took a different path for 2/3, now it builds for all systems I have
> > > containers for:
>
> > This still fails for me on ppc64. Perhaps we should guard
> > P_MMAP_FLAG(32BIT) and potentially others with an #ifdef, which was
> > earlier reverted by commit 256763b0 ("perf trace beauty mmap: Add more
> > conditional defines")?
>
> Humm, yeah, we have to find a way to make it clear that some flags are
> not present in all arches, I'll think about the solution Wang came up
> with, i.e. having it defined to zero on arches where it is not
> supported, which at first sounds ugly :-\
>
> One thing related to this, but for future work, is to be able to support
> doing a ' perf trace record' on x86_64 and then doing a 'perf trace -i
> perf.data' with the resulting file, i.e. cross-platform syscall arg
> beautifying, i.e. a cross-arch strace.
>
> Right now we use audit lib to at least map the syscall ids using, IIRC,
> the header env stuff, but we need to go to the syscall args as well.
> Future work, sure.
>
> And yeah, I'll try and cross-build audit-lib for my powerpc cross build
> containers, so that I can catch this bug before applying these patches
> and make sure things like this get caught in the future.
Thanks, Arnaldo!
Though, if it is too much work, I don't mind reporting/trying to fix the
odd build failure. I've set up daily builds to catch any such issues.
Regards,
Naveen