Message-ID: <20210309200140.GA237657@agluck-desk2.amr.corp.intel.com>
Date: Tue, 9 Mar 2021 12:01:40 -0800
From: "Luck, Tony" <tony.luck@...el.com>
To: HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>
Cc: Aili Yao <yaoaili@...gsoft.com>,
Oscar Salvador <osalvador@...e.de>,
"david@...hat.com" <david@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"bp@...en8.de" <bp@...en8.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"linux-edac@...r.kernel.org" <linux-edac@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"yangfeng1@...gsoft.com" <yangfeng1@...gsoft.com>
Subject: Re: [PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned
On Tue, Mar 09, 2021 at 08:28:24AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Mar 09, 2021 at 02:35:34PM +0800, Aili Yao wrote:
> > When a page is already poisoned, another memory_failure() call on the
> > same page now returns 0, meaning OK. For nested MCE handling, this
> > behavior may lead to an MCE loop. Example:
> >
> > 1. When LMCE is enabled and two processes A and B, running on
> > different cores X and Y, access the same page, the page becomes
> > corrupted when process A accesses it. An MCE is raised to core X
> > and error processing gets underway there.
> >
> > 2. Then B accesses the page and triggers another MCE on core Y. It
> > also goes through error processing, sees TestSetPageHWPoison() return
> > true, and memory_failure() returns 0.
> >
> > 3. kill_me_maybe() will check the return value:
> >
> > static void kill_me_maybe(struct callback_head *cb)
> > {
> > 	...
> > 	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
> > 	    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
> > 		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
> > 		sync_core();
> > 		return;
> > 	}
> > 	...
> > }
> >
> > 4. Error processing for B ends, and possibly nothing happens if
> > kill-early is not set. Process B re-executes the instruction, takes
> > the MCE again, and the loop repeats. The set_mce_nospec() call here
> > is also not proper; see commit fd0e786d9d09 ("x86/mm, mm/hwpoison:
> > Don't unconditionally unmap kernel 1:1 pages").
> >
> > Other callers that care about the return value of memory_failure()
> > should check why they want to process a memory error that has already
> > been processed. This behavior seems reasonable.
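> >
> > For illustration, the change amounts to something like this in
> > memory_failure() (a minimal sketch; the exact context and error
> > path in mm/memory-failure.c may differ):
> >
> > 	if (TestSetPageHWPoison(p)) {
> > 		pr_err("Memory failure: %#lx: already hardware poisoned\n",
> > 			pfn);
> > 		return -EBUSY;	/* previously: return 0 */
> > 	}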
>
> Other reviewers shared ideas about the return value, but actually
> I'm not sure which one is best (EBUSY is not that bad).
> What we need to fix the reported issue is to return a non-zero value
> for the "already poisoned" case (the value itself is not so important).
>
> Other callers of memory_failure() (mostly test programs) could see
> the change in return value, but they can already see EBUSY now, and
> anyway they should check dmesg for more detail about why it failed,
> so the impact of the change is not so big.
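>
> For example, a test program injecting poison with MADV_HWPOISON
> would see the new errno directly (a rough sketch, untested; assumes
> addr/pagesize are set up and the caller has CAP_SYS_ADMIN):
>
> 	if (madvise(addr, pagesize, MADV_HWPOISON) == -1 &&
> 	    errno == EBUSY)
> 		printf("page was already hwpoisoned\n");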
>
> >
> > Signed-off-by: Aili Yao <yaoaili@...gsoft.com>
>
> Reviewed-by: Naoya Horiguchi <naoya.horiguchi@....com>
I think that this patch and my "add a mutex" patch are both
too simplistic for this complex problem :-(

When multiple CPUs race to call memory_failure() for the same
page, we need the following results:
1) The poisoned page should be marked not-present in all tasks.
I think the mutex patch achieves this as long as
memory_failure() doesn't hit an error[1].
2) All tasks that were executing an instruction that was accessing
the poison location should see a SIGBUS with virtual address and
BUS_MCEERR_AR signature in siginfo.
Neither solution achieves this. The -EBUSY return ensures
that there is a SIGBUS for the tasks that get the -EBUSY
return, but no siginfo details.
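
(For reference, the fallback in kill_me_maybe() when memory_failure()
returns non-zero looks roughly like this -- sketched from memory, so
treat it as an approximation of the actual code:)

	if (p->mce_vaddr != (void __user *)-1l) {
		/* kernel-copyin case: a virtual address is available */
		force_sig_mceerr(BUS_MCEERR_AR, p->mce_vaddr, PAGE_SHIFT);
	} else {
		/* no virtual address: plain SIGBUS, no address in siginfo */
		kill_me_now(cb);
	}
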
Just the mutex patch *might* deliver a BUS_MCEERR_AO signature
to the race-losing tasks, but only if they have PF_MCE_EARLY
set (so says the comment in kill_proc() ... but I don't
see the code checking for that bit).
#2 seems hard to achieve ... there are inherent races that mean the
AO SIGBUS could have been queued to the task before it even hits
the poison.
Maybe we should also include a non-action:
3) A task should only see one SIGBUS per poison?
Not sure if this is achievable either ... what if the task
has the same page mapped multiple times?
-Tony
[1] Still looking at why my futex injection test ends with a "reserved
kernel page still referenced by 1 users"