Message-ID: <20111221000231.GX13529@google.com>
Date: Tue, 20 Dec 2011 16:02:31 -0800
From: Mandeep Singh Baines <msb@...omium.org>
To: Tejun Heo <tj@...nel.org>
Cc: Mandeep Singh Baines <msb@...omium.org>,
Li Zefan <lizf@...fujitsu.com>,
LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
containers@...ts.linux-foundation.org, cgroups@...r.kernel.org,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Oleg Nesterov <oleg@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Paul Menage <paul@...lmenage.org>
Subject: Re: [PATCH 5/5] cgroup: separate out cgroup_attach_proc error
handling code
Tejun Heo (tj@...nel.org) wrote:
> Hello,
>
> On Tue, Dec 20, 2011 at 03:14:33PM -0800, Mandeep Singh Baines wrote:
> > @@ -2067,9 +2067,10 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> > read_unlock(&tasklist_lock);
> >
> > /* methods shouldn't be called if no task is actually migrating */
> > - retval = 0;
> > - if (!group_size)
> > + if (!group_size) {
> > + retval = 0;
> > goto out_free_group_list;
> > + }
>
> Eh... I don't think this is an improvement. It's just different.
>
The main benefit is that the comment is directly above the code it's
describing, but I can drop this part of the change.
> > @@ -2126,20 +2127,20 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> > */
> > synchronize_rcu();
> > cgroup_wakeup_rmdir_waiter(cgrp);
> > - retval = 0;
> > + flex_array_free(group);
> > + return 0;
>
> Hmm... maybe goto out_free_group_list? Duplicating cleanup on the success
> and failure paths can lead future updaters to forget one of them. The
> exit path in this function isn't pretty, but I don't think the proposed
> patch improves it either.
>
Should I drop the patch or add the goto? It's 5/5, so it's easy enough to drop
since nothing else depends on it.
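
For reference, here is a minimal, self-contained sketch (not the actual
kernel code) of the single-exit-path shape you're suggesting: both the
success and the failure paths fall through one cleanup label, so a future
change to the cleanup only has to be made in one place. The names
attach_group(), group_alloc() and group_free() are hypothetical stand-ins
for the flex_array handling in cgroup_attach_proc():

#include <errno.h>
#include <stdlib.h>

struct group { int size; };

/* hypothetical stand-ins for flex_array_alloc()/flex_array_free() */
struct group *group_alloc(void) { return calloc(1, sizeof(struct group)); }
void group_free(struct group *g) { free(g); }

int attach_group(int group_size)
{
	struct group *group;
	int retval;

	group = group_alloc();
	if (!group)
		return -ENOMEM;

	/* methods shouldn't be called if no task is actually migrating */
	if (!group_size) {
		retval = 0;
		goto out_free_group_list;
	}

	/* ... migrate the tasks; on error set retval and jump to the label ... */

	retval = 0;		/* success exits through the same cleanup label */
out_free_group_list:
	group_free(group);
	return retval;
}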
> Thanks.
>
> --
> tejun
--