Message-ID: <CAHp75VdoGShdAQFkx5PR-H6=csRA_ReaerDg6iy54AMJF+kaOg@mail.gmail.com>
Date: Mon, 8 Mar 2021 11:59:34 +0200
From: Andy Shevchenko <andy.shevchenko@...il.com>
To: Borislav Petkov <bp@...e.de>
Cc: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v1 1/1] scripts/decodecode: Decode 32-bit code correctly on x86_64
On Sat, Mar 6, 2021 at 12:25 AM Borislav Petkov <bp@...e.de> wrote:
>
> On Fri, Mar 05, 2021 at 08:39:48PM +0200, Andy Shevchenko wrote:
> > On x86_64 host the objdump uses current architecture which is 64-bit
> > and hence decodecode shows wrong instructions.
> >
> > Fix it by supplying '-M i386' in case of ARCH i?86 or x86.
>
> The beginning of the script says:
>
> # e.g., to decode an i386 oops on an x86_64 system, use:
> # AFLAGS=--32 decodecode < 386.oops
>
> What kind of oops are you decoding where that doesn't work for you?
It works, but... The question here is why the script's behaviour depends
so much on the architecture in question, via environment variables. The
ARM case uses the traditional ARCH variable (which is what I expected to
work here as well), while x86 needs a separate set of variables.
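For illustration, compare the two invocations (the cross prefix and the
oops file names here are just examples):

  # ARM oops on an x86_64 host -- the traditional knob:
  ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- ./scripts/decodecode < arm.oops

  # i386 oops on an x86_64 host -- a different, x86-only knob:
  AFLAGS=--32 ./scripts/decodecode < 386.oops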
So I should rephrase the commit message then, and actually make ARCH act
as an alias for the existing knob when it is set a certain way. Would
that be better?
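Roughly something like this (a sketch only; the OBJDUMP_FLAGS name and
the exact objdump invocation are illustrative, not the script's actual
code):

  # Map 32-bit x86 ARCH values to objdump's i386 disassembly mode,
  # so that ARCH=i386 behaves like AFLAGS=--32 does today.
  case "$ARCH" in
  i?86|x86)
  	OBJDUMP_FLAGS="-M i386"
  	;;
  esac

  # ...and pass the flags when disassembling the extracted code:
  ${CROSS_COMPILE}objdump $OBJDUMP_FLAGS -d $T.o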
--
With Best Regards,
Andy Shevchenko