Date:   Thu, 15 Dec 2022 14:20:29 +0100
From:   Ilya Dryomov <idryomov@...il.com>
To:     xiubli@...hat.com
Cc:     jlayton@...nel.org, ceph-devel@...r.kernel.org,
        mchangir@...hat.com, lhenriques@...e.de, viro@...iv.linux.org.uk,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        stable@...r.kernel.org
Subject: Re: [PATCH v5 1/2] ceph: switch to vfs_inode_has_locks() to fix file
 lock bug

On Wed, Dec 14, 2022 at 4:35 AM <xiubli@...hat.com> wrote:
>
> From: Xiubo Li <xiubli@...hat.com>
>
> POSIX locks all share the same owner -- the thread id -- and multiple
> POSIX locks may be merged into a single one, so checking whether a
> particular 'file' has locks attached can give the wrong answer.
>
> A file where some openers use locking and others don't is a really
> odd usage pattern, though. Locks are like stoplights -- they only
> work if everyone pays attention to them.
>
> Just switch ceph_get_caps() to check whether any locks are set on
> the inode. If there are POSIX/OFD/FLOCK locks on the file at the
> time, we should set CHECK_FILELOCK, regardless of what fd was used
> to set the lock.
>
> Cc: stable@...r.kernel.org
> Cc: Jeff Layton <jlayton@...nel.org>
> Fixes: ff5d913dfc71 ("ceph: return -EIO if read/write against filp that lost file locks")
> Reviewed-by: Jeff Layton <jlayton@...nel.org>
> Signed-off-by: Xiubo Li <xiubli@...hat.com>
> ---
>  fs/ceph/caps.c  | 2 +-
>  fs/ceph/locks.c | 4 ----
>  fs/ceph/super.h | 1 -
>  3 files changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
> index 065e9311b607..948136f81fc8 100644
> --- a/fs/ceph/caps.c
> +++ b/fs/ceph/caps.c
> @@ -2964,7 +2964,7 @@ int ceph_get_caps(struct file *filp, int need, int want, loff_t endoff, int *got
>
>         while (true) {
>                 flags &= CEPH_FILE_MODE_MASK;
> -               if (atomic_read(&fi->num_locks))
> +               if (vfs_inode_has_locks(inode))
>                         flags |= CHECK_FILELOCK;
>                 _got = 0;
>                 ret = try_get_cap_refs(inode, need, want, endoff,
> diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
> index 3e2843e86e27..b191426bf880 100644
> --- a/fs/ceph/locks.c
> +++ b/fs/ceph/locks.c
> @@ -32,18 +32,14 @@ void __init ceph_flock_init(void)
>
>  static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
>  {
> -       struct ceph_file_info *fi = dst->fl_file->private_data;
>         struct inode *inode = file_inode(dst->fl_file);
>         atomic_inc(&ceph_inode(inode)->i_filelock_ref);
> -       atomic_inc(&fi->num_locks);
>  }
>
>  static void ceph_fl_release_lock(struct file_lock *fl)
>  {
> -       struct ceph_file_info *fi = fl->fl_file->private_data;
>         struct inode *inode = file_inode(fl->fl_file);
>         struct ceph_inode_info *ci = ceph_inode(inode);
> -       atomic_dec(&fi->num_locks);
>         if (atomic_dec_and_test(&ci->i_filelock_ref)) {
>                 /* clear error when all locks are released */
>                 spin_lock(&ci->i_ceph_lock);
> diff --git a/fs/ceph/super.h b/fs/ceph/super.h
> index 14454f464029..e7662ff6f149 100644
> --- a/fs/ceph/super.h
> +++ b/fs/ceph/super.h
> @@ -804,7 +804,6 @@ struct ceph_file_info {
>         struct list_head rw_contexts;
>
>         u32 filp_gen;
> -       atomic_t num_locks;
>  };
>
>  struct ceph_dir_file_info {
> --
> 2.31.1
>
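
For context, the merging behavior the commit message describes is easy to
reproduce from userspace. A minimal, hypothetical sketch (the file name and
byte ranges are made up for illustration): POSIX locks are owned by the
process rather than the fd, so adjacent ranges locked through two different
fds on the same file get coalesced into one kernel record, which is exactly
why a per-fd lock count can disagree with what the inode actually holds.

/* posix_merge_demo.c -- hypothetical illustration, not part of the patch */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int lock_range(int fd, off_t start, off_t len)
{
	struct flock fl = {
		.l_type   = F_WRLCK,
		.l_whence = SEEK_SET,
		.l_start  = start,
		.l_len    = len,
	};
	return fcntl(fd, F_SETLK, &fl);
}

int main(void)
{
	int fd1 = open("/tmp/lockfile", O_RDWR | O_CREAT, 0644);
	int fd2 = open("/tmp/lockfile", O_RDWR);

	if (fd1 < 0 || fd2 < 0) {
		perror("open");
		return 1;
	}
	/* Same owner, adjacent ranges, different fds: the kernel merges
	 * these into a single lock covering bytes [0, 20). */
	if (lock_range(fd1, 0, 10) || lock_range(fd2, 10, 10)) {
		perror("fcntl");
		return 1;
	}
	printf("check /proc/locks: one POSIX entry, not two\n");
	sleep(30);
	return 0;
}

While the program sleeps, /proc/locks shows a single POSIX lock for the
inode even though each fd "took" one lock, so accounting hung off a per-fd
structure like struct ceph_file_info can drift.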

Hi Xiubo,

You marked this for stable but there is an obvious dependency on
vfs_inode_has_locks() that just got merged for 6.2-rc1.  Are you
intending to take it into stable kernels as well?

Thanks,

                Ilya
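
For reference, vfs_inode_has_locks() is a small, self-contained helper in
fs/locks.c. Roughly -- this is an approximate sketch of its shape as merged
for 6.2-rc1, not a verbatim copy -- it loads the inode's file_lock_context
and reports whether the POSIX or flock lists are non-empty:

bool vfs_inode_has_locks(struct inode *inode)
{
	struct file_lock_context *ctx;
	bool ret;

	/* The lock context is attached lazily, on the first lock. */
	ctx = smp_load_acquire(&inode->i_flctx);
	if (!ctx)
		return false;

	spin_lock(&ctx->flc_lock);
	ret = !list_empty(&ctx->flc_posix) || !list_empty(&ctx->flc_flock);
	spin_unlock(&ctx->flc_lock);

	return ret;
}

OFD locks are kept on the flc_posix list as well, so all three lock types
named in the commit message (POSIX/OFD/FLOCK) are covered; leases
(flc_lease) are not counted.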
