Date:	Wed, 12 Sep 2007 18:25:37 +0200
From:	"Dmitry Adamushko" <dmitry.adamushko@...il.com>
To:	vatsa@...ux.vnet.ibm.com
Cc:	"Andrew Morton" <akpm@...ux-foundation.org>,
	ckrm-tech@...ts.sourceforge.net, linux-kernel@...r.kernel.org,
	containers@...ts.osdl.org,
	"Jan Engelhardt" <jengelh@...putergmbh.de>,
	"Ingo Molnar" <mingo@...e.hu>, dhaval@...ux.vnet.ibm.com,
	menage@...gle.com
Subject: Re: [PATCH] Hookup group-scheduler with task container infrastructure

Hi Srivatsa,

please find a few more minor comments below.

> [ ... ]
> +
> +/* destroy runqueue etc associated with a task group */
> +static void sched_destroy_group(struct container_subsys *ss,
> +                                       struct container *cont)
> +{
> +       struct task_grp *tg = container_tg(cont);
> +       struct cfs_rq *cfs_rq;
> +       struct sched_entity *se;
> +       int i;
> +
> +       for_each_possible_cpu(i) {
> +               cfs_rq = tg->cfs_rq[i];
> +               list_del_rcu(&cfs_rq->leaf_cfs_rq_list);
> +       }
> +
> +       /* wait for possible concurrent references to cfs_rqs to complete */
> +       synchronize_sched();
> +
> +       /* now it should be safe to free those cfs_rqs */
> +       for_each_possible_cpu(i) {
> +               cfs_rq = tg->cfs_rq[i];
> +               kfree(cfs_rq);
> +
> +               se = tg->se[i];
> +               kfree(se);
> +       }
> +
> +       kfree(tg);
> +}

kfree(tg->cfs_rq) && kfree(tg->se) ?

i.e. only the per-cpu elements are freed above; the pointer arrays
themselves (allocated in sched_create_group(), presumably) still leak.
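
Something along these lines at the tail of sched_destroy_group(), I
suppose (just a sketch, assuming tg->cfs_rq and tg->se are the
kmalloc'ed per-cpu pointer arrays):

	        /* the per-cpu entries were freed above; now drop the arrays too */
	        kfree(tg->cfs_rq);
	        kfree(tg->se);

	        kfree(tg);
	}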


> +
> +/* change task's runqueue when it moves between groups */
> +static void sched_move_task(struct container_subsys *ss, struct container *cont,
> +                       struct container *old_cont, struct task_struct *tsk)
> +{
> +       int on_rq, running;
> +       unsigned long flags;
> +       struct rq *rq;
> +
> +       rq = task_rq_lock(tsk, &flags);
> +
> +       if (tsk->sched_class != &fair_sched_class)
> +               goto done;

This check should be redundant now that sched_can_attach() is in place.
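
For reference, the attach-time check I have in mind looks roughly like
this (a sketch only; I am going from memory of the container_subsys
callback convention, where returning 0 allows the move and a negative
errno vetoes it):

	static int sched_can_attach(struct container_subsys *ss,
	                            struct container *cont,
	                            struct task_struct *tsk)
	{
	        /* group scheduling only handles fair-class tasks for now */
	        if (tsk->sched_class != &fair_sched_class)
	                return -EINVAL;         /* veto the move */

	        return 0;                       /* ok to attach */
	}

With that veto in place, sched_move_task() can simply assume the task
is in the fair class.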


> +static void set_se_shares(struct sched_entity *se, unsigned long shares)
> +{
> +       struct cfs_rq *cfs_rq = se->cfs_rq;
> +       struct rq *rq = cfs_rq->rq;
> +       int on_rq;
> +
> +       spin_lock_irq(&rq->lock);
> +
> +       on_rq = se->on_rq;
> +       if (on_rq)
> +               __dequeue_entity(cfs_rq, se);
> +
> +       se->load.weight = shares;
> +       se->load.inv_weight = div64_64((1ULL<<32), shares);

A bit of nit-picking... are you sure there is no need for the
non-'__' versions of dequeue/enqueue_entity() here (at least for the
sake of update_curr())? Although I don't have -mm at hand at this very
moment, and the original -rc4 that I do have doesn't contain
'se->load' yet... so I'll look at it later.
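
To illustrate, with the non-'__' helpers the function would look
roughly like this (a sketch against the -rc4-era sched_fair.c, where
dequeue_entity()/enqueue_entity() take an extra sleep/wakeup flag and
do the update_curr()/accounting work around the actual rb-tree
manipulation):

	static void set_se_shares(struct sched_entity *se, unsigned long shares)
	{
	        struct cfs_rq *cfs_rq = se->cfs_rq;
	        struct rq *rq = cfs_rq->rq;
	        int on_rq;

	        spin_lock_irq(&rq->lock);

	        on_rq = se->on_rq;
	        if (on_rq)
	                /* also updates clock/stats, unlike __dequeue_entity() */
	                dequeue_entity(cfs_rq, se, 0);

	        se->load.weight = shares;
	        se->load.inv_weight = div64_64((1ULL << 32), shares);

	        if (on_rq)
	                enqueue_entity(cfs_rq, se, 0);

	        spin_unlock_irq(&rq->lock);
	}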

>
> --
> Regards,
> vatsa
>


-- 
Best regards,
Dmitry Adamushko