Date:   Fri, 22 Mar 2019 10:04:44 +0000
From:   Luis Henriques <lhenriques@...e.com>
To:     "Yan\, Zheng" <ukernel@...il.com>
Cc:     "Yan\, Zheng" <zyan@...hat.com>, Sage Weil <sage@...hat.com>,
        Ilya Dryomov <idryomov@...il.com>,
        ceph-devel <ceph-devel@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        stable@...r.kernel.org
Subject: Re: [PATCH] ceph: Fix a memory leak in ci->i_head_snapc

Luis Henriques <lhenriques@...e.com> writes:

> "Yan, Zheng" <ukernel@...il.com> writes:
>
>> On Tue, Mar 19, 2019 at 12:22 AM Luis Henriques <lhenriques@...e.com> wrote:
>>>
>>> "Yan, Zheng" <ukernel@...il.com> writes:
>>>
>>> > On Mon, Mar 18, 2019 at 6:33 PM Luis Henriques <lhenriques@...e.com> wrote:
>>> >>
>>> >> "Yan, Zheng" <ukernel@...il.com> writes:
>>> >>
>>> >> > On Fri, Mar 15, 2019 at 7:13 PM Luis Henriques <lhenriques@...e.com> wrote:
>>> >> >>
>>> >> >> I'm occasionally seeing a kmemleak warning in xfstest generic/013:
>>> >> >>
>>> >> >> unreferenced object 0xffff8881fccca940 (size 32):
>>> >> >>   comm "kworker/0:1", pid 12, jiffies 4295005883 (age 130.648s)
>>> >> >>   hex dump (first 32 bytes):
>>> >> >>     01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00  ................
>>> >> >>     00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>>> >> >>   backtrace:
>>> >> >>     [<00000000d741a1ea>] build_snap_context+0x5b/0x2a0
>>> >> >>     [<0000000021a00533>] rebuild_snap_realms+0x27/0x90
>>> >> >>     [<00000000ac538600>] rebuild_snap_realms+0x42/0x90
>>> >> >>     [<000000000e955fac>] ceph_update_snap_trace+0x2ee/0x610
>>> >> >>     [<00000000a9550416>] ceph_handle_snap+0x317/0x5f3
>>> >> >>     [<00000000fc287b83>] dispatch+0x362/0x176c
>>> >> >>     [<00000000a312c741>] ceph_con_workfn+0x9ce/0x2cf0
>>> >> >>     [<000000004168e3a9>] process_one_work+0x1d4/0x400
>>> >> >>     [<000000002188e9e7>] worker_thread+0x2d/0x3c0
>>> >> >>     [<00000000b593e4b3>] kthread+0x112/0x130
>>> >> >>     [<00000000a8587dca>] ret_from_fork+0x35/0x40
>>> >> >>     [<00000000ba1c9c1d>] 0xffffffffffffffff
>>> >> >>
>>> >> >> It looks like it is possible that we miss a flush_ack from the MDS when,
>>> >> >> for example, unmounting the filesystem.  In that case, we can simply drop
>>> >> >> the reference to the ceph_snap_context obtained in ceph_queue_cap_snap().
>>> >> >>
>>> >> >> Link: https://tracker.ceph.com/issues/38224
>>> >> >> Cc: stable@...r.kernel.org
>>> >> >> Signed-off-by: Luis Henriques <lhenriques@...e.com>
>>> >> >> ---
>>> >> >>  fs/ceph/caps.c | 7 +++++++
>>> >> >>  1 file changed, 7 insertions(+)
>>> >> >>
>>> >> >> diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
>>> >> >> index 36a8dc699448..208f4dc6f574 100644
>>> >> >> --- a/fs/ceph/caps.c
>>> >> >> +++ b/fs/ceph/caps.c
>>> >> >> @@ -1054,6 +1054,7 @@ int ceph_is_any_caps(struct inode *inode)
>>> >> >>  static void drop_inode_snap_realm(struct ceph_inode_info *ci)
>>> >> >>  {
>>> >> >>         struct ceph_snap_realm *realm = ci->i_snap_realm;
>>> >> >> +
>>> >> >>         spin_lock(&realm->inodes_with_caps_lock);
>>> >> >>         list_del_init(&ci->i_snap_realm_item);
>>> >> >>         ci->i_snap_realm_counter++;
>>> >> >> @@ -1063,6 +1064,12 @@ static void drop_inode_snap_realm(struct ceph_inode_info *ci)
>>> >> >>         spin_unlock(&realm->inodes_with_caps_lock);
>>> >> >>         ceph_put_snap_realm(ceph_sb_to_client(ci->vfs_inode.i_sb)->mdsc,
>>> >> >>                             realm);
>>> >> >> +       /*
>>> >> >> +        * ci->i_head_snapc should be NULL, but we may still be waiting for a
>>> >> >> +        * flush_ack from the MDS.  In that case, we still hold a ref for the
>>> >> >> +        * ceph_snap_context and we need to drop it.
>>> >> >> +        */
>>> >> >> +       ceph_put_snap_context(ci->i_head_snapc);
>>> >> >>  }
>>> >> >>
>>> >> >>  /*
>>> >> >
>>> >> > This does not seem right.  i_head_snapc is cleared when
>>> >> > (ci->i_wrbuffer_ref_head == 0 && ci->i_dirty_caps == 0 &&
>>> >> > ci->i_flushing_caps == 0).  It has nothing to do with dropping ci->i_snap_realm.
>>> >> > Did you see 'reconnect denied' during the test? If you did, the fix
>>> >> > should be in iterate_session_caps()
>>> >> >
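(For reference, the clearing Zheng describes is conceptually the
following; a simplified sketch of the condition, not the literal kernel
code:

	if (ci->i_wrbuffer_ref_head == 0 &&
	    ci->i_dirty_caps == 0 &&
	    ci->i_flushing_caps == 0) {
		/* nothing dirty or flushing references the head snap
		 * context any more, so the ref can be dropped */
		ceph_put_snap_context(ci->i_head_snapc);
		ci->i_head_snapc = NULL;
	}

which is why i_head_snapc should already be NULL by the time
drop_inode_snap_realm() runs.)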
>>> >>
>>> >> No, I didn't see any 'reconnect denied' in the test.  The test actually
>>> >> seems to execute fine, except for the memory leak.
>>> >>
>>> >> It's very difficult to reproduce this issue, but the last time I managed
>>> >> to get this memory leak to trigger, I actually had some debugging code in
>>> >> drop_inode_snap_realm, something like:
>>> >>
>>> >>   if (ci->i_head_snapc)
>>> >>         printk("i_head_snapc: 0x%px\n", ci->i_head_snapc);
>>> >
>>> > Please add code that prints i_wrbuffer_ref_head, i_dirty_caps, and
>>> > i_flushing_caps, and try to reproduce it again.
>>> >
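For the record, the extra debugging ended up looking roughly like this
(a sketch from memory, not the exact patch):

	if (ci->i_head_snapc)
		printk("i_head_snapc: 0x%px wrbuffer_ref_head: %d "
		       "dirty_caps: 0x%x flushing_caps: 0x%x\n",
		       ci->i_head_snapc, ci->i_wrbuffer_ref_head,
		       ci->i_dirty_caps, ci->i_flushing_caps);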
>>>
>>> Ok, it took me a few hours, but I managed to reproduce the bug with
>>> those extra printks.  All those values are set to 0 when the bug
>>> triggers (and i_head_snapc != NULL).
>>>
>>
>> Thanks, which test triggers this bug?
>
> That's generic/013.  It usually triggers after a few hours of running it
> in a loop (I'm using a vstart cluster for that).
>
>>
>> I searched the code and found that we may fail to clean up i_head_snapc
>> in two places.  One is in ceph_queue_cap_snap(), the other in
>> remove_session_caps_cb().
>
> Ah, great!  I spent a lot of time looking but I couldn't really find it.
> My bet was that ceph_queue_cap_snap was doing the ceph_get_snap_context
> and that the corresponding ceph_put_snap_context would be done in
> handle_cap_flush_ack. That's why I mentioned the missing flush_ack from
> the MDS.
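(That is, I expected a pairing conceptually like the sketch below.
This is only the idea, not the actual code paths, and capsnap is
illustrative here:

	/* write path, ceph_queue_cap_snap(): take a ref on the head
	 * snap context for the cap snap being queued */
	capsnap->context = ceph_get_snap_context(ci->i_head_snapc);

	/* ack path, handle_cap_flush_ack(): flush acked by the MDS,
	 * drop the matching ref */
	ceph_put_snap_context(capsnap->context);

so a flush_ack that never arrives would leave the final put missing.)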

Something that I didn't say explicitly is that I *know* that the
unbalanced ceph_get_snap_context() is the one in function
ceph_queue_cap_snap().  I know this for sure because I've managed to
reproduce the issue several times with an instrumented kernel (with
tracepoints) that allowed me to keep track of all ceph_snap_context
operations (create, get, put, delete).
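In case it's useful to anyone trying to reproduce this, the
instrumentation was conceptually similar to the sketch below
(hypothetical event name and fields, not the actual patch, and modulo
the usual trace header boilerplate):

	TRACE_EVENT(ceph_snap_context_op,
		TP_PROTO(const char *op, struct ceph_snap_context *sc),
		TP_ARGS(op, sc),
		TP_STRUCT__entry(
			__string(op, op)
			__field(void *, sc)
			__field(int, nref)
		),
		TP_fast_assign(
			__assign_str(op, op);
			__entry->sc = sc;
			__entry->nref = refcount_read(&sc->nref);
		),
		TP_printk("%s sc=%p nref=%d",
			  __get_str(op), __entry->sc, __entry->nref)
	);

with the event fired from the create/get/put/delete paths.  Matching
the get and put events per context pointer is what singled out the get
in ceph_queue_cap_snap().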

Cheers,
-- 
Luis
