Date:   Fri, 7 Feb 2020 10:41:58 +1100
From:   Dave Chinner <david@...morbit.com>
To:     Damien Le Moal <damien.lemoal@....com>
Cc:     linux-fsdevel@...r.kernel.org, linux-xfs@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Johannes Thumshirn <jth@...nel.org>,
        Naohiro Aota <naohiro.aota@....com>,
        "Darrick J . Wong" <darrick.wong@...cle.com>,
        Hannes Reinecke <hare@...e.de>
Subject: Re: [PATCH v12 2/2] zonefs: Add documentation

On Thu, Feb 06, 2020 at 02:26:31PM +0900, Damien Le Moal wrote:
> Add the new file Documentation/filesystems/zonefs.txt to document
> zonefs principles and user-space tool usage.
> 
> Signed-off-by: Damien Le Moal <damien.lemoal@....com>
> ---
>  Documentation/filesystems/zonefs.txt | 404 +++++++++++++++++++++++++++
>  MAINTAINERS                          |   1 +
>  2 files changed, 405 insertions(+)
>  create mode 100644 Documentation/filesystems/zonefs.txt

Looks largely OK to me. A few small nits below in the new error handling text,
but otherwise

Reviewed-by: Dave Chinner <dchinner@...hat.com>

> +IO error handling
> +-----------------
> +
> +Zoned block devices may fail I/O requests for reasons similar to regular block
> +devices, e.g. due to bad sectors. However, in addition to such known I/O
> +failure patterns, the standards governing zoned block device behavior define
> +additional conditions that result in I/O errors.
> +
> +* A zone may transition to the read-only condition (BLK_ZONE_COND_READONLY):
> +  While the data already written in the zone is still readable, the zone can
> +  no longer be written. No user action on the zone (zone management command or
> +  read/write access) can change the zone condition back to a normal read/write
> +  state. While the reasons for the device to transition a zone to read-only
> +  state are not defined by the standards, a typical cause for such transition
> +  would be a defective write head on an HDD (all zones under this head are
> +  changed to read-only).
> +
> +* A zone may transition to the offline condition (BLK_ZONE_COND_OFFLINE):
> +  An offline zone cannot be read or written. No user action can transition an
> +  offline zone back to an operational good state. Similarly to zone read-only
> +  transitions, the reasons for a drive to transition a zone to the offline
> +  condition are undefined. A typical cause would be a defective read-write head
> +  on an HDD causing all zones on the platter under the broken head to be
> +  inaccessible.
> +
> +* Unaligned write errors: These errors result from the host issuing write
> +  requests with a start sector that does not correspond to a zone write pointer
> +  position when the write request is executed by the device. Even though zonefs
> +  enforces sequential file write for sequential zones, unaligned write errors
> +  may still happen in the case of a partial failure of a very large direct I/O
> +  operation split into multiple BIOs/requests or asynchronous I/O operations.
> +  If one of the write requests within the set of sequential write requests
> +  issued to the device fails, all write requests queued after it will become
> +  unaligned and fail.
> +
> +* Delayed write errors: similarly to regular block devices, if the device side
> +  write cache is enabled, write errors may occur in ranges of previously
> +  completed writes when the device write cache is flushed, e.g. on fsync().
> +  Similarly to the previous immediate unaligned write error case, delayed write
> +  errors can propagate through a stream of cached sequential data for a zone
> +  causing all data to be dropped after the sector that caused the error.
> +
> +All I/O errors detected by zonefs are always notified to the user with an error

s/always//

> +code returned by the system call that triggered or detected the error. The
> +recovery actions taken by zonefs in response to I/O errors depend on the I/O
> +type (read vs write) and on the reason for the error (bad sector, unaligned
> +writes or zone condition change).
> +
> +* For read I/O errors, zonefs does not execute any particular recovery action,
> +  provided that the file zone is still in a good condition and there is no
> +  inconsistency between the file inode size and its zone write pointer position.
> +  If a problem is detected, I/O error recovery is executed (see the table below).
> +
> +* For write I/O errors, zonefs I/O error recovery is always executed.
> +
> +* A zone condition change to read-only or offline also always triggers zonefs
> +  I/O error recovery.
> +
> +Zonefs minimal I/O error recovery may change a file's size and its access
> +permissions.
> +
> +* File size changes:
> +  Immediate or delayed write errors in a sequential zone file may cause the file
> +  inode size to be inconsistent with the amount of data successfully written in
> +  the file zone. For instance, the partial failure of a multi-BIO large write
> +  operation will cause the zone write pointer to advance partially, eventhough

"even though"

> +  the entire write operation will be reported as failed to the user. In such
> +  case, the file inode size must be advanced to reflect the zone write pointer
> +  change and eventually allow the user to restart writing at the end of the
> +  file.
> +  A file size may also be reduced to reflect a delayed write error detected on
> +  fsync(): in this case, the amount of data effectively written in the zone may
> +  be less than originally indicated by the file inode size. After such I/O
> +  error zonefs always fixes a file inode size to reflect the amount of data

"error, zonefs" ?

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
