Message-ID: <20190305040714.GB21739@redhat.com>
Date: Mon, 4 Mar 2019 23:07:15 -0500
From: Mike Snitzer <snitzer@...hat.com>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: linux-next@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
dm-devel@...hat.com
Subject: Re: x86 VM Boot hang with latest linux-next
On Sun, Mar 03 2019 at 12:06pm -0500,
Alexander Duyck <alexander.duyck@...il.com> wrote:
> On Sat, Mar 2, 2019 at 7:48 PM Mike Snitzer <snitzer@...hat.com> wrote:
> >
> > On Sat, Mar 02 2019 at 6:34pm -0500,
> > Alexander Duyck <alexander.duyck@...il.com> wrote:
> >
> > > So I have been seeing an issue with an intermittent boot hang on my
> > > x86 KVM VM with the latest linux-next and have bisected it down to the
> > > following commit:
> > > 1efa3bb79d3de8ca1b7f6770313a1fc0bebe25c7 is the first bad commit
> > > commit 1efa3bb79d3de8ca1b7f6770313a1fc0bebe25c7
> > > Author: Mike Snitzer <snitzer@...hat.com>
> > > Date: Fri Feb 22 11:23:01 2019 -0500
> > >
> > > dm: must allocate dm_noclone for stacked noclone devices
> > >
> > > Otherwise various lvm2 testsuite tests fail because the lower layers of
> > > the stacked noclone device aren't updated to allocate a new 'struct
> > > dm_clone' that reflects the upper layer bio that was issued to it.
> > >
> > > Fixes: 97a89458020b38 ("dm: improve noclone bio support")
> > > Reported-by: Mikulas Patocka <mpatocka@...hat.com>
> > > Signed-off-by: Mike Snitzer <snitzer@...hat.com>
> > >
> > > What I am seeing is that in about 3 out of 4 boots the startup just
> > > hangs at the filesystem check stage with the following message:
> > > [ OK ] Reached target Local File Systems (Pre).
> > > Starting File System Check on /dev/…127-ad57-426f-bb45-363950544c0c...
> > > [ **] (1 of 2) A start job is running for…n on device 252:2 (19s / no limit)
> > >
> > > I did some googling and it looks like a similar issue has been
> > > reported for s390. Based on the request for data there I have the
> > > following info:
> > > [root@...alhost ~]# dmsetup ls --tree
> > > fedora-swap (253:1)
> > > └─ (252:2)
> > > fedora-root (253:0)
> > > └─ (252:2)
> > >
> > > [root@...alhost ~]# dmsetup table
> > > fedora-swap: 0 4194304 linear 252:2 2048
> > > fedora-root: 0 31457280 linear 252:2 4196352
> >
> > Thanks, which version of Fedora are you running?
>
> The VM is running Fedora 27 with a kernel built off of latest
> linux-next as of March 1st.
>
> > Your case is more straightforward in that you're clearly using bio-based
> > DM linear (which was updated to leverage "noclone" support); whereas the
> > s390 case is using request-based DM which isn't impacted by the commit
> > in question at all.
> >
> > I'll attempt to reproduce first thing Monday.
> >
> > Mike
>
> Thanks. The behavior has me wondering if we are looking at something
> like an uninitialized data issue, since as I mentioned I don't see
> this occur on every boot, just on most of them. So every now and then
> I can boot the VM without any issues, but most of the time it boots
> and then gets stuck waiting on jobs that take forever.
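As an aside, the linear-target lines in the quoted `dmsetup table` output follow the documented format `<logical_start> <num_sectors> linear <device> <offset>`, with all values in 512-byte sectors. A minimal Python sketch (using only the two lines quoted above; the field decoding is the standard dm linear format, nothing kernel-internal) shows the sizes those tables imply:

```python
# Decode the "dmsetup table" linear lines quoted above.
# Linear target format: <logical_start> <num_sectors> linear <device> <offset>
# All units are 512-byte sectors.
SECTOR = 512

tables = [
    "fedora-swap: 0 4194304 linear 252:2 2048",
    "fedora-root: 0 31457280 linear 252:2 4196352",
]

for line in tables:
    name, rest = line.split(": ")
    start, length, target, dev, offset = rest.split()
    size_gib = int(length) * SECTOR / 2**30
    print(f"{name}: {size_gib:g} GiB mapped from {dev} at sector {offset}")
# fedora-swap: 2 GiB mapped from 252:2 at sector 2048
# fedora-root: 15 GiB mapped from 252:2 at sector 4196352
```

So both logical volumes stack on the same underlying device (252:2), which is why the noclone commit, which changed how bios are handled for stacked bio-based devices, is a plausible bisect result here.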
I just copied you on another related thread, but for the benefit of
anyone on LKML, please see the following for a fix that works for me:
https://www.redhat.com/archives/dm-devel/2019-March/msg00027.html