Message-ID: <20140829172216.18aac86a@synchrony.poochiereds.net>
Date: Fri, 29 Aug 2014 17:22:16 -0400
From: Jeff Layton <jeff.layton@...marydata.com>
To: "J. Bruce Fields" <bfields@...hat.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Nikita Yushchenko <nyushchenko@....rtsoft.ru>,
stable@...r.kernel.org, Raphos <raphoszap@...oste.net>,
Stanislav Kinsbursky <skinsbursky@...allels.com>,
"'Alexey Lugovskoy'" <lugovskoy@....rtsoft.ru>,
Konstantin Kholopov <kkholopov@....rtsoft.ru>,
linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org
Subject: Re: 3.10.y regression caused by: lockd: ensure we tear down any
live sockets when socket creation fails during lockd_up

On Fri, 29 Aug 2014 16:25:33 -0400
"J. Bruce Fields" <bfields@...hat.com> wrote:
> On Mon, Jul 07, 2014 at 03:27:21PM -0700, Greg Kroah-Hartman wrote:
> > On Fri, Jun 20, 2014 at 03:14:03PM +0400, Nikita Yushchenko wrote:
> > > With current 3.10.y, if the kernel is booted with init=/bin/sh and an
> > > nfs mount is then attempted (without portmap or rpcbind running) using
> > > busybox mount, the following OOPS happens:
> > >
> > > # mount -t nfs 10.30.130.21:/opt /mnt
> > > svc: failed to register lockdv1 RPC service (errno 111).
> > > lockd_up: makesock failed, error=-111
> > > Unable to handle kernel paging request for data at address 0x00000030
> > > Faulting instruction address: 0xc055e65c
> > > Oops: Kernel access of bad area, sig: 11 [#1]
> > > MPC85xx CDS
> > > Modules linked in:
> > > CPU: 0 PID: 1338 Comm: mount Not tainted 3.10.44.cge #117
> > > task: cf29cea0 ti: cf35c000 task.ti: cf35c000
> > > NIP: c055e65c LR: c0566490 CTR: c055e648
> > > REGS: cf35dad0 TRAP: 0300 Not tainted (3.10.44.cge)
> > > MSR: 00029000 <CE,EE,ME> CR: 22442488 XER: 20000000
> > > DEAR: 00000030, ESR: 00000000
> > >
> > > GPR00: c05606f4 cf35db80 cf29cea0 cf0ded80 cf0dedb8 00000001 1dec3086 00000000
> > > GPR08: 00000000 c07b1640 00000007 1dec3086 22442482 100b9758 00000000 10090ae8
> > > GPR16: 00000000 000186a5 00000000 00000000 100c3018 bfa46edc 100b0000 bfa46ef0
> > > GPR24: cf386ae0 c07834f0 00000000 c0565f88 00000001 cf0dedb8 00000000 cf0ded80
> > > NIP [c055e65c] call_start+0x14/0x34
> > > LR [c0566490] __rpc_execute+0x70/0x250
> > > Call Trace:
> > > [cf35db80] [00000080] 0x80 (unreliable)
> > > [cf35dbb0] [c05606f4] rpc_run_task+0x9c/0xc4
> > > [cf35dbc0] [c0560840] rpc_call_sync+0x50/0xb8
> > > [cf35dbf0] [c056ee90] rpcb_register_call+0x54/0x84
> > > [cf35dc10] [c056f24c] rpcb_register+0xf8/0x10c
> > > [cf35dc70] [c0569e18] svc_unregister.isra.23+0x100/0x108
> > > [cf35dc90] [c0569e38] svc_rpcb_cleanup+0x18/0x30
> > > [cf35dca0] [c0198c5c] lockd_up+0x1dc/0x2e0
> > > [cf35dcd0] [c0195348] nlmclnt_init+0x2c/0xc8
> > > [cf35dcf0] [c015bb5c] nfs_start_lockd+0x98/0xec
> > > [cf35dd20] [c015ce6c] nfs_create_server+0x1e8/0x3f4
> > > [cf35dd90] [c0171590] nfs3_create_server+0x10/0x44
> > > [cf35dda0] [c016528c] nfs_try_mount+0x158/0x1e4
> > > [cf35de20] [c01670d0] nfs_fs_mount+0x434/0x8c8
> > > [cf35de70] [c00cd3bc] mount_fs+0x20/0xbc
> > > [cf35de90] [c00e4f88] vfs_kern_mount+0x50/0x104
> > > [cf35dec0] [c00e6e0c] do_mount+0x1d0/0x8e0
> > > [cf35df10] [c00e75ac] SyS_mount+0x90/0xd0
> > > [cf35df40] [c000ccf4] ret_from_syscall+0x0/0x3c
> > > --- Exception: c01 at 0xff2acc4
> > > LR = 0x10048ab8
> > > Instruction dump:
> > > 3d20c056 3929e648 91230028 38600001 4e800020 38600000 4e800020 81230014
> > > 8103000c 81490014 394a0001 91490014 <81280030> 81490018 394a0001 91490018
> > > ---[ end trace 033b5b4715cb5452 ]---
> > >
> > >
> > > This does not happen if
> > >
> > > commit 72a6e594497032bd911bd187a88fae4b4473abb3
> > > Author: Jeff Layton <jlayton@...hat.com>
> > > Date: Tue Mar 25 11:55:26 2014 -0700
> > >
> > > lockd: ensure we tear down any live sockets when socket creation fails during lockd_up
> > >
> > > commit 679b033df48422191c4cac52b610d9980e019f9b upstream.
> > >
> > > is reverted:
> > >
> > > # mount -t nfs 10.30.130.21:/opt /mnt
> > > svc: failed to register lockdv1 RPC service (errno 111).
> > > lockd_up: makesock failed, error=-111
> > > mount: mounting 10.30.130.21:/opt on /mnt failed: Connection refused
> > > #
> > >
> > >
> > > The underlying cause of the OOPS is that:
> > >
> > > - the addition of the svc_shutdown_net() call to the error path of
> > >   make_socks() causes svc_rpcb_cleanup() to be called twice:
> > >   - the first call comes from within svc_shutdown_net(), because
> > >     serv->sv_shutdown still points to svc_rpcb_cleanup() at that time,
> > >   - immediately followed by a second call from lockd_up_net()'s error path
> > >
> > > - when the second svc_rpcb_cleanup() runs, the
> > >   svc_unregister() -> __svc_unregister() -> rpcb_register() -> rpcb_register_call()
> > >   call path is taken, and rpcb_register_call() ends up being called with clnt=NULL.
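> > >
> > > Schematically, the failing path with the patch applied looks roughly
> > > like this (a simplified sketch of the call sequence, not the literal
> > > code):
> > >
> > >   lockd_up_net()
> > >     make_socks()                  /* socket creation fails, -111 */
> > >       svc_shutdown_net()          /* added by the patch */
> > >         serv->sv_shutdown()       /* == svc_rpcb_cleanup(), clnt still valid */
> > >     goto err_socks
> > >       svc_rpcb_cleanup()          /* second call */
> > >         svc_unregister()
> > >           rpcb_register_call()    /* clnt == NULL -> OOPS */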
> >
> > So, Jeff, what should I do here? Drop this patch from 3.10? Add
> > something else to fix it up? Something else entirely?
>
> Sorry this got ignored. Adding more useful addresses...
>
> So it looks like the new svc_shutdown_net() call made lockd_up_net's
> cleanup redundant, and just removing it might do the job?
>
> --b.
>
> diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
> index 673668a9eec1..685e953c5103 100644
> --- a/fs/lockd/svc.c
> +++ b/fs/lockd/svc.c
> @@ -253,13 +253,11 @@ static int lockd_up_net(struct svc_serv *serv, struct net *net)
>
> error = make_socks(serv, net);
> if (error < 0)
> - goto err_socks;
> + goto err_bind;
> set_grace_period(net);
> dprintk("lockd_up_net: per-net data created; net=%p\n", net);
> return 0;
>
> -err_socks:
> - svc_rpcb_cleanup(serv, net);
> err_bind:
> ln->nlmsvc_users--;
> return error;
Oof -- sorry I missed this. Must have gotten lost in the shuffle with my
email address change...

Yeah, that patch looks correct to me. I do wish the whole svc
setup/shutdown codepath weren't so godawful complicated, but that's not
a trivial thing to untangle at this point (particularly not in the
context of -stable).
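
FWIW, the failure mode is the classic "run the same teardown twice" bug.
Here's a tiny userspace model of it, just to illustrate the shape of the
problem (the struct and function names below are simplified stand-ins,
not the real kernel definitions):

#include <stdio.h>
#include <stdlib.h>

struct rpc_clnt { int prog; };

struct svc_serv {
    struct rpc_clnt *rpcb_clnt;            /* stand-in for the per-net rpcbind client */
    void (*sv_shutdown)(struct svc_serv *);
};

/* Models svc_rpcb_cleanup(): unregister and release the rpcbind client. */
static void rpcb_cleanup(struct svc_serv *serv)
{
    struct rpc_clnt *clnt = serv->rpcb_clnt;

    if (!clnt) {
        /* The real rpcb_register_call() has no guard like this: on the
         * second invocation it dereferences the NULL client and oopses
         * (the 0x00000030 access in the trace above). */
        fprintf(stderr, "cleanup called again with clnt == NULL\n");
        return;
    }
    printf("unregistering program %d\n", clnt->prog);
    free(clnt);
    serv->rpcb_clnt = NULL;
}

int main(void)
{
    struct svc_serv serv = {
        .rpcb_clnt   = malloc(sizeof(struct rpc_clnt)),
        .sv_shutdown = rpcb_cleanup,
    };

    serv.rpcb_clnt->prog = 100021;         /* NLM program number */

    /* make_socks() fails; the backported patch calls svc_shutdown_net(),
     * which invokes serv->sv_shutdown()... */
    serv.sv_shutdown(&serv);

    /* ...and then the old err_socks label in lockd_up_net() runs the same
     * cleanup a second time, now with rpcb_clnt == NULL. */
    rpcb_cleanup(&serv);
    return 0;
}

The first cleanup releases and clears the client; the second one then
walks back into the unregister path with nothing left to work with, which
is what the clnt=NULL dereference in the trace above corresponds to.
Dropping the redundant cleanup, as in the patch above, avoids that.
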
Acked-by: Jeff Layton <jlayton@...marydata.com>