Message-ID: <20160425232552.GD18496@dastard>
Date: Tue, 26 Apr 2016 09:25:52 +1000
From: Dave Chinner <david@...morbit.com>
To: "Verma, Vishal L" <vishal.l.verma@...el.com>
Cc: "hch@...radead.org" <hch@...radead.org>,
"Wilcox, Matthew R" <matthew.r.wilcox@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"xfs@....sgi.com" <xfs@....sgi.com>,
"linux-nvdimm@...1.01.org" <linux-nvdimm@...1.01.org>,
"jmoyer@...hat.com" <jmoyer@...hat.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"axboe@...com" <axboe@...com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-ext4@...r.kernel.org" <linux-ext4@...r.kernel.org>,
"jack@...e.cz" <jack@...e.cz>
Subject: Re: [PATCH v2 5/5] dax: handle media errors in dax_do_io
On Mon, Apr 25, 2016 at 05:14:36PM +0000, Verma, Vishal L wrote:
> On Mon, 2016-04-25 at 01:31 -0700, hch@...radead.org wrote:
> > On Sat, Apr 23, 2016 at 06:08:37PM +0000, Verma, Vishal L wrote:
> > >
> > > direct_IO might fail with -EINVAL due to misalignment, or -ENOMEM
> > > due to some allocation failing, and I thought we should return the
> > > original -EIO in such cases so that the application doesn't lose
> > > the information that the bad block is actually causing the error.
> > EINVAL is a concern here. Not because of which error gets reported,
> > but because it means your current scheme is fundamentally broken - we
> > need to support I/O at any alignment for DAX I/O, and not fail due to
> > alignment concerns for a highly specific degraded case.
> >
> > I think this whole series needs to go back to the drawing board as I
> > don't think it can actually rely on using direct I/O as the EIO
> > fallback.
> >
> Agreed that DAX I/O can happen with any size/alignment, but how else do
> we send an I/O through the driver without hitting alignment
> restrictions? Also, the granularity at which we store badblocks is 512B
> sectors, so it seems natural that clearing such a sector means sending
> a write to the whole sector.
>
> The expected usage flow is:
>
> - Application hits EIO doing dax_IO or load/store I/O
>
> - It checks badblocks and discovers its files have lost data
Lots of hand-waving here. How does the application map a bad
"sector" to a file without scanning the entire filesystem to find
the owner of the bad sector?
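(For concreteness, the "checks badblocks" step above presumably amounts
to something like the sketch below. The /sys/block/pmem0/badblocks path
and the "<start sector> <count>" line format are my assumptions about
how the device exposes its error list - they aren't spelled out in this
thread - and the units are 512B sectors.)

/*
 * Sketch only: read the media error list the way a recovery tool might.
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/sys/block/pmem0/badblocks", "r");
        unsigned long long sector;
        unsigned int count;

        if (!f) {
                perror("badblocks");
                return 1;
        }
        /* One "<start sector> <count>" pair per line, 512B units. */
        while (fscanf(f, "%llu %u", &sector, &count) == 2)
                printf("bad: sector %llu, %u sector(s)\n", sector, count);
        fclose(f);
        return 0;
}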
> - It write()s those sectors (possibly converted to file offsets using
> fiemap)
> * This triggers the fallback path, but if the application is doing
> this level of recovery, it will know the sector is bad, and write the
> entire sector
Where does the application find the data that was lost to be able to
rewrite it?
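(To illustrate the quoted conversion and rewrite steps: FIEMAP can map
a file offset to the physical bytes backing it, and the clearing write
is then just a full, 512B-aligned sector rewrite. This is only a sketch
under my own simplifying assumptions - a single mapped extent, no
partition-offset correction - not something the patch series actually
specifies.)

/*
 * Sketch only: FIEMAP building block plus whole-sector rewrite.
 */
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

#define SECTOR_SIZE 512

/* Map a file offset to the physical byte address that backs it. */
static long long phys_for_offset(int fd, off_t offset)
{
        struct fiemap *fm;
        long long phys = -1;

        fm = calloc(1, sizeof(*fm) + sizeof(struct fiemap_extent));
        if (!fm)
                return -1;

        fm->fm_start = offset;
        fm->fm_length = 1;              /* just the extent covering offset */
        fm->fm_flags = FIEMAP_FLAG_SYNC;
        fm->fm_extent_count = 1;

        if (ioctl(fd, FS_IOC_FIEMAP, fm) == 0 && fm->fm_mapped_extents == 1) {
                struct fiemap_extent *fe = &fm->fm_extents[0];
                phys = fe->fe_physical + (offset - fe->fe_logical);
        }
        free(fm);
        return phys;
}

/* Rewrite the full 512B sector containing @offset with known-good data. */
static int rewrite_sector(int fd, off_t offset, const void *good_data)
{
        off_t start = offset & ~((off_t)SECTOR_SIZE - 1);

        if (pwrite(fd, good_data, SECTOR_SIZE, start) != SECTOR_SIZE)
                return -1;
        return fsync(fd);
}

Note that fe_physical is relative to the block device holding the
filesystem, so comparing it against a badblocks sector list would still
need the partition offset taken into account; going the other way
(sector to file offset) means walking the same extent data in reverse.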
> - Or it replaces the entire file from backup also using write() (not
> mmap+stores)
> * This just frees the fs block, and the next time the block is
> reallocated by the fs, it will likely be zeroed first, and that will be
> done through the driver and will clear errors
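(The "replaces the entire file from backup using write()" step would
look roughly like the sketch below; the temp-file-plus-rename ordering
is my assumption - the quoted flow only says the replacement goes
through write() rather than mmap+stores.)

/*
 * Sketch only: restore a damaged file from a backup copy with plain
 * write(), so the old blocks are freed and later re-zeroed by the fs
 * (through the driver) when they are reallocated.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static int restore_from_backup(const char *backup, const char *target)
{
        char tmp[4096], buf[65536];
        ssize_t n;
        int in, out;

        snprintf(tmp, sizeof(tmp), "%s.tmp", target);

        in = open(backup, O_RDONLY);
        out = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0)
                return -1;

        while ((n = read(in, buf, sizeof(buf))) > 0)
                if (write(out, buf, n) != n)
                        return -1;

        if (n < 0 || fsync(out) < 0)
                return -1;
        close(in);
        close(out);

        /* Swapping in the new copy unlinks the old one; its blocks are
         * freed and zeroed on reuse, which is what clears the errors. */
        return rename(tmp, target);
}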
There's an implicit assumption that applications will keep redundant
copies of their data at the /application layer/ and be able to
automatically repair it? And then there's the implicit assumption
that it will unlink and free the entire file before writing a new
copy, and that then assumes that the filesystem will zero blocks when
they get reused, to clear errors on that LBA sector mapping before
they are accessible again to userspace...
It seems to me that there are a number of assumptions being made
across multiple layers here. Maybe I've missed something - can you
point me to the design/architecture description so I can see how the
"app does data recovery itself" dance is supposed to work?
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com