Message-ID: <20190430125310.GH3562@mellanox.com>
Date: Tue, 30 Apr 2019 12:53:16 +0000
From: Jason Gunthorpe <jgg@...lanox.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Doug Ledford <dledford@...hat.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PULL] Please pull RDMA subsystem changes
On Sun, Apr 28, 2019 at 05:09:08PM -0700, Linus Torvalds wrote:
> On Sun, Apr 28, 2019 at 4:49 PM Jason Gunthorpe <jgg@...lanox.com> wrote:
> >
> > It is for high availability - we have situations where the hardware
> > can fault and needs some kind of destructive recovery. For instance a
> > firmware reboot, or a VM migration.
> >
> > In these designs there may be multiple cards in the system and the
> > userspace application could be using both. Just because one card
> > crashed we can't send SIGBUS and kill the application, that breaks the
> > HA design.
>
> Why can't this magical application that is *so* special that it is HA
> and does magic mmap's of special rdma areas just catch the SIGBUS?
>
> Honestly, the whole "it's for HA" excuse stinks. It stinks because you
> now silently just replace the mapping with *garbage*. That's not HA,
> that's just random.
This should only be used in cases where user space only writes to the
BAR page (it is essentially an interrupt to the device), so it doesn't
care that the pages are now garbage; we just need to redirect the
writes away from the BAR.
However, I think someone later added a readable counter BAR page to
one of the devices :( So even that ideal wasn't respected.
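(For the record, the zap-and-redirect on the kernel side amounts to
roughly the following -- an untested sketch with made-up names, not
the actual uverbs code: tear down the PTEs so the next access faults,
then have the fault handler hand back a driver-owned dummy page.)

/* Hypothetical sketch of disassociating a user BAR mapping after a
 * device reset.  Locking and error handling omitted for brevity.
 */
#include <linux/mm.h>

static struct page *dummy_page;		/* alloc_page()'d at reset time */

static vm_fault_t bar_vma_fault(struct vm_fault *vmf)
{
	/* Hand back the dummy page instead of delivering SIGBUS */
	get_page(dummy_page);
	vmf->page = dummy_page;
	return 0;
}

static const struct vm_operations_struct bar_vm_ops = {
	.fault = bar_vma_fault,
};

static void disassociate_bar_mapping(struct vm_area_struct *vma)
{
	/* Tear down the PTEs pointing at the dead BAR ... */
	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
	/* ... and make later faults land in bar_vma_fault() */
	vma->vm_ops = &bar_vm_ops;
}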
> Wouldn't it be a lot better to just get the SIGBUS, and then that
> magical application knows that "oh, it's gone", and it could - in its
> SIGBUS handler - just do the dummy anonymous mmap() with /dev/zero
> if it wants to?
This does sound more appealing, and probably should have been done
instead. All this VMA stuff has been a big pain in the long run.
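Something like this untested sketch is probably all the application
side would need (purely illustrative, made-up names):

/* Hypothetical SIGBUS recovery: plug anonymous memory in over the
 * faulting (now dead) BAR page so later doorbell writes land
 * harmlessly, then let the HA logic fail over to the other card.
 */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static long page_size;			/* set up before any fault */

static void sigbus_handler(int sig, siginfo_t *si, void *uctx)
{
	void *page = (void *)((uintptr_t)si->si_addr &
			      ~(uintptr_t)(page_size - 1));

	/* Replace the lost BAR page with plain anonymous memory */
	if (mmap(page, page_size, PROT_READ | PROT_WRITE,
		 MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0) ==
	    MAP_FAILED)
		_exit(1);

	/* Mark the device dead here so the HA logic can fail over */
}

static void install_sigbus_handler(void)
{
	struct sigaction sa = { 0 };

	page_size = sysconf(_SC_PAGESIZE);
	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGBUS, &sa, NULL);
}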
Thanks,
Jason