Message-ID: <CAJedcCxstTJQiVPpW7sTMRBBPixQMd3HqNAX3S5GeXGjjXDKAQ@mail.gmail.com>
Date: Wed, 9 Nov 2022 20:56:31 +0800
From: Zheng Hacker <hackerzheng666@...il.com>
To: Greg KH <gregkh@...uxfoundation.org>
Cc: Zheng Wang <zyytlz.wz@....com>, zhengyejian1@...wei.com,
dimitri.sivanich@....com, arnd@...db.de,
linux-kernel@...r.kernel.org, alex000young@...il.com,
security@...nel.org, sivanich@....com, lkp@...el.com
Subject: Re: [PATCH v8] misc: sgi-gru: fix use-after-free error in
gru_set_context_option, gru_fault and gru_handle_user_call_os
On Wed, Nov 9, 2022 at 20:09, Greg KH <gregkh@...uxfoundation.org> wrote:
>
> On Wed, Nov 09, 2022 at 08:04:04PM +0800, Zheng Hacker wrote:
> > > On Wed, Nov 9, 2022 at 19:12, Greg KH <gregkh@...uxfoundation.org> wrote:
> > > > /*
> > > > * If the current task is the context owner, verify that the
> > > > @@ -727,14 +728,16 @@ void gru_check_context_placement(struct gru_thread_state *gts)
> > > > */
> > > > gru = gts->ts_gru;
> > > > if (!gru || gts->ts_tgid_owner != current->tgid)
> > > > - return;
> > > > + return ret;
> > >
> > > Why does this check return "all is good!" ?
> > >
> > > Shouldn't that be an error?
> > >
> > This check is essentially "has the gts been initialized properly?".
> > If it hasn't, I don't think we should treat the gts as if something
> > very bad had happened, because on a later request the gts can still
> > be configured/updated properly. That is different from "it is so bad
> > that we have to unload the gts right now". This is just my personal
> > point of view. Besides, the callers of this function already take it
> > into account: gru_fault will retry, gru_handle_user_call_os will
> > return -EAGAIN, and gru_set_context_option will be fine because there
> > won't be any further operation on gts->ts_gru or gts->ts_tgid_owner.
>
> Then you need to document it why this is "success" as it is not obvious
> at all.
>
Oh yes. I will add a comment documenting this next to the code in the
next version of the patch.
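
For reference, a rough sketch of the kind of comment I have in mind for
that check (the wording is tentative, based on the hunk quoted above,
and not the final patch):

	/*
	 * If the gts has not yet been bound to a physical GRU
	 * (gts->ts_gru is NULL) or the caller is not the context
	 * owner, there is nothing to verify here.  The context can
	 * still be placed correctly on a later request, so return
	 * ret (still 0, "no unload needed") rather than an error.
	 * Callers handle the non-zero case themselves: gru_fault
	 * retries and gru_handle_user_call_os returns -EAGAIN.
	 */
	gru = gts->ts_gru;
	if (!gru || gts->ts_tgid_owner != current->tgid)
		return ret;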
Best regards,
Zheng Wang