Date:	Mon, 05 Dec 2011 09:08:04 -0600
From:	Mike wolf <mjw@...ux.vnet.ibm.com>
To:	linux-kernel@...r.kernel.org
Subject: [PATCH] Do not include throttled time as steal time

When the Linux kernel is running as a guest OS and is configured
for bandwidth control and steal time reporting, it can be confusing
for users to see throttled time show up in the steal time stats.
Users may think they are not getting the cpu resources for which
they have been configured.

Signed-off-by: Mike Wolf <mjw@...ux.vnet.ibm.com>
---
  kernel/sched_fair.c  |    4 ++--
  kernel/sched_stats.h |    7 ++++++-
  2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 5c9e679..a837e4e 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -707,7 +707,7 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)

  #ifdef CONFIG_FAIR_GROUP_SCHED
  /* we need this in update_cfs_load and load-balance functions below */
-static inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
+inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
  # ifdef CONFIG_SMP
  static void update_cfs_rq_load_contribution(struct cfs_rq *cfs_rq,
                          int global_update)
@@ -1420,7 +1420,7 @@ static inline int cfs_rq_throttled(struct cfs_rq *cfs_rq)
  }

  /* check whether cfs_rq, or any parent, is throttled */
-static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
+inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
  {
      return cfs_rq->throttle_count;
  }
diff --git a/kernel/sched_stats.h b/kernel/sched_stats.h
index 87f9e36..e30ff26 100644
--- a/kernel/sched_stats.h
+++ b/kernel/sched_stats.h
@@ -213,14 +213,19 @@ static inline void sched_info_queued(struct task_struct *t)
   * sched_info_queued() to mark that it has now again started waiting on
   * the runqueue.
   */
+extern inline int throttled_hierarchy(struct cfs_rq *cfs_rq);
  static inline void sched_info_depart(struct task_struct *t)
  {
+    struct task_group *tg = task_group(t);
+    struct cfs_rq *cfs_rq;
      unsigned long long delta = task_rq(t)->clock -
                      t->sched_info.last_arrival;

+    cfs_rq = tg->cfs_rq[smp_processor_id()];
      rq_sched_info_depart(task_rq(t), delta);

-    if (t->state == TASK_RUNNING)
+
+    if (t->state == TASK_RUNNING && !throttled_hierarchy(cfs_rq))
          sched_info_queued(t);
  }
