Message-ID: <1430519090.5537.4.camel@memnix.com>
Date: Fri, 01 May 2015 18:24:50 -0400
From: Abelardo Ricart III <aricart@...nix.com>
To: Mike Snitzer <snitzer@...hat.com>
Cc: dm-devel@...hat.com, mpatocka@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: Regression: Disk corruption with dm-crypt and kernels >= 4.0

On Fri, 2015-05-01 at 17:17 -0400, Mike Snitzer wrote:
> On Fri, May 01 2015 at 12:37am -0400,
> Abelardo Ricart III <aricart@...nix.com> wrote:
>
> > I made sure to run a completely vanilla kernel when testing why I was suddenly
> > seeing some nasty libata errors with all kernels >= v4.0. Here's a snippet:
> >
> > -------------------->8--------------------
> > [ 165.592136] ata5.00: exception Emask 0x60 SAct 0x7000 SErr 0x800 action 0x6 frozen
> > [ 165.592140] ata5.00: irq_stat 0x20000000, host bus error
> > [ 165.592143] ata5: SError: { HostInt }
> > [ 165.592145] ata5.00: failed command: READ FPDMA QUEUED
> > [ 165.592149] ata5.00: cmd 60/08:60:a0:0d:89/00:00:07:00:00/40 tag 12 ncq 4096 in
> >                        res 40/00:74:40:58:5d/00:00:00:00:00/40 Emask 0x60 (host bus error)
> > [ 165.592151] ata5.00: status: { DRDY }
> > -------------------->8--------------------
> >
> > After a few dozen of these errors, I'd suddenly find my system in read-only
> > mode with corrupted files throughout my encrypted filesystems (seemed like
> > either a read or a write would corrupt a file, though I could be mistaken).
> > I decided to do a git bisect with a random read-write-sync test (sketched
> > below) to narrow down the culprit, which turned out to be this commit
> > (part of a series):
> >
> > # first bad commit: [cf2f1abfbd0dba701f7f16ef619e4d2485de3366] dm crypt: don't allocate pages for a partial request
> >
> > Just to be sure, I created a patch to revert the entire nine-patch series
> > that commit belonged to... and the bad behavior disappeared. I've now been
> > running kernel 4.0 for a few days without issue, and went so far as to
> > stress-test my poor SSD for a few hours to be 100% positive.
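> >
> > (For reference, the read-write-sync test was nothing fancy -- this is a
> > rough sketch of the sort of loop run at each bisect step, not the exact
> > script; each kernel was marked good/bad by hand after booting it:)
> >
> > -------------------->8--------------------
> > #!/bin/sh
> > # Write random files to the encrypted /home, sync, drop caches, then
> > # read everything back and verify checksums -- any mismatch or I/O
> > # error means the current bisect step is bad. (Run as root.)
> > set -e
> > dir=/home/bisect-test
> > mkdir -p "$dir"
> > for i in $(seq 1 50); do
> >     dd if=/dev/urandom of="$dir/f$i" bs=1M count=64 conv=fsync status=none
> > done
> > sync
> > ( cd "$dir" && md5sum f* > SUMS )
> > echo 3 > /proc/sys/vm/drop_caches      # force reads to hit the device
> > ( cd "$dir" && md5sum -c --quiet SUMS )
> > rm -rf "$dir"
> > -------------------->8--------------------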
> >
> > Here's some more info on my setup.
> >
> > -------------------->8--------------------
> > $ lsblk -f
> > NAME          FSTYPE      LABEL MOUNTPOINT
> > sda
> > ├─sda1        vfat              /boot/EFI
> > ├─sda2        ext4              /boot
> > └─sda3        LVM2_member
> >   ├─SSD-root  crypto_LUKS
> >   │ └─root    f2fs              /
> >   └─SSD-home  crypto_LUKS
> >     └─home    f2fs              /home
> >
> > $ cat /proc/cmdline
> > BOOT_IMAGE=/vmlinuz-linux-memnix cryptdevice=/dev/SSD/root:root:allow-discards
> > root=/dev/mapper/root acpi_osi=Linux security=tomoyo
> > TOMOYO_trigger=/usr/lib/systemd/systemd intel_iommu=on
> > modprobe.blacklist=nouveau rw quiet
> >
> > $ cat /etc/lvm/lvm.conf | grep "issue_discards"
> > issue_discards = 1
> > -------------------->8--------------------
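> >
> > (In case it helps, discard support at each layer of the stack can be
> > inspected with standard tooling -- these commands are just pointers,
> > not output captured from my machine:)
> >
> > -------------------->8--------------------
> > $ lsblk --discard            # non-zero DISC-GRAN/DISC-MAX => discards supported
> > $ cryptsetup status root     # shows "flags: discards" when allow-discards is active
> > $ grep "" /sys/block/dm-*/queue/discard_max_bytes
> > -------------------->8--------------------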
> >
> > If there's anything else I can do to help diagnose the underlying problem,
> > I'm more than willing.
>
> The patchset in question was tested quite heavily, so this is a
> surprising report. I'm noticing you are opting in to dm-crypt discard
> support. Have you tested without discards enabled?

I've disabled discards universally and rebuilt a vanilla kernel. After running
my heavy read-write-sync scripts, everything seems to be working fine now. I
suppose this could be something that used to fail silently before, but now
produces bad behavior? I seem to remember having something in my message log
about "discards not supported on this device" when running with it enabled
before.
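
(Concretely, "universally" meant roughly the following -- a summary rather
than an exact record of the changes:)

-------------------->8--------------------
# kernel command line: drop the allow-discards flag
cryptdevice=/dev/SSD/root:root

# /etc/lvm/lvm.conf
issue_discards = 0

# (and double-checking that no "discard" mount option or periodic fstrim
# job reintroduces discards while testing)
-------------------->8--------------------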