Date:	Tue, 02 Jul 2013 12:56:04 +0900
From:	Fernando Luis Vazquez Cao <fernando_b1@....ntt.co.jp>
To:	Frederic Weisbecker <fweisbec@...il.com>
CC:	Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
	tglx@...utronix.de, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Arjan van de Ven <arjan@...ux.intel.com>
Subject: Re: [PATCH] proc: Add workaround for idle/iowait decreasing problem.

Hi Frederic,

I'm sorry it's taken me so long to respond; I got sidetracked for
a while. Comments follow below.

On 2013/04/28 09:49, Frederic Weisbecker wrote:
> On Tue, Apr 23, 2013 at 09:45:23PM +0900, Tetsuo Handa wrote:
>> CONFIG_NO_HZ=y can cause idle/iowait values to decrease.
[...]
> It's not clear in the changelog why you see non-monotonic idle/iowait values.
>
> Looking at the previous patch from Fernando, it seems that's because we can
> race with concurrent updates from the target CPU when it wakes up from idle?
> (could be updated by drivers/cpufreq/cpufreq_governor.c as well).
>
> If so the bug has another symptom: we may also report a wrong iowait/idle time
> by accounting the last idle time twice.
>
> In this case we should fix the bug from the source, for example we can force
> the given ordering:
>
> = Write side =                          = Read side =
>
> // tick_nohz_start_idle()
> write_seqcount_begin(ts->seq)
> ts->idle_entrytime = now
> ts->idle_active = 1
> write_seqcount_end(ts->seq)
>
> // tick_nohz_stop_idle()
> write_seqcount_begin(ts->seq)
> ts->iowait_sleeptime += now - ts->idle_entrytime
> ts->idle_active = 0
> write_seqcount_end(ts->seq)
>
>                                          // get_cpu_iowait_time_us()
>                                          do {
>                                              seq = read_seqcount_begin(ts->seq)
>                                              if (ts->idle_active) {
>                                                  time = now - ts->idle_entrytime
>                                                  time += ts->iowait_sleeptime
>                                              } else {
>                                                  time = ts->iowait_sleeptime
>                                              }
>                                          } while (read_seqcount_retry(ts->seq, seq));
>
> Right? seqcount should be enough to make sure we are getting a consistent result.
> I doubt we need harder locking.
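
For concreteness, that ordering maps onto the seqcount API roughly as
below. This is only a sketch: struct tick_sched has no seqcount field
today, so "seq" here is hypothetical, and the real stop path would still
have to pick iowait vs. idle based on nr_iowait_cpu().

    /* Sketch only: assumes a new "seqcount_t seq" member in struct tick_sched. */

    static void tick_nohz_start_idle(struct tick_sched *ts, ktime_t now)
    {
            write_seqcount_begin(&ts->seq);
            ts->idle_entrytime = now;
            ts->idle_active = 1;
            write_seqcount_end(&ts->seq);
    }

    static void tick_nohz_stop_idle(struct tick_sched *ts, ktime_t now)
    {
            write_seqcount_begin(&ts->seq);
            /* the real code would add to idle_sleeptime instead when
               nr_iowait_cpu() is 0 */
            ts->iowait_sleeptime = ktime_add(ts->iowait_sleeptime,
                                             ktime_sub(now, ts->idle_entrytime));
            ts->idle_active = 0;
            write_seqcount_end(&ts->seq);
    }

    u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time)
    {
            struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
            ktime_t now = ktime_get(), iowait;
            unsigned int seq;

            do {
                    seq = read_seqcount_begin(&ts->seq);
                    if (ts->idle_active)
                            iowait = ktime_add(ts->iowait_sleeptime,
                                               ktime_sub(now, ts->idle_entrytime));
                    else
                            iowait = ts->iowait_sleeptime;
            } while (read_seqcount_retry(&ts->seq, seq));

            return ktime_to_us(iowait);
    }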

I tried that and it doesn't suffice. The problem that causes the most
serious skews is related to the CPU scheduler: the per-runqueue
counter nr_iowait can be updated not only by the CPU the runqueue
belongs to but also by any other CPU if a task is migrated away
while waiting on I/O.
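
To illustrate, the io_schedule() path looks roughly like this (trimmed):

    void __sched io_schedule(void)
    {
            /* rq of the CPU the task blocks on */
            struct rq *rq = raw_rq();

            delayacct_blkio_start();
            atomic_inc(&rq->nr_iowait);
            current->in_iowait = 1;
            schedule();
            current->in_iowait = 0;
            /* by now the task may have been migrated, so this decrement of
               the original runqueue's counter can run on another CPU */
            atomic_dec(&rq->nr_iowait);
            delayacct_blkio_end();
    }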

The race looks like this:

CPU0                            CPU1
                                 [ CPU1_rq->nr_iowait == 0 ]
                                 Task foo: io_schedule()
                                             schedule()
                                 [ CPU1_rq->nr_iowait == 1 ]
                                 Task foo migrated to CPU0
                                 Goes to sleep

// get_cpu_iowait_time_us(1, NULL)
[ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 1         ]
[ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
now = 5
delta = 5 - 3 = 2
iowait = 4 + 2 = 6

Task foo wakes up
[ CPU1_rq->nr_iowait == 0 ]

                                 CPU1 comes out of sleep state
                                 tick_nohz_stop_idle()
                                   update_ts_time_stats()
                                     [ CPU1_ts->idle_active == 1, CPU1_rq->nr_iowait == 0         ]
                                     [ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 3 ]
                                     now = 6
                                     delta = 6 - 3 = 3
                                     (CPU1_ts->iowait_sleeptime is not updated
                                      because CPU1_rq->nr_iowait is already 0;
                                      the delta goes to idle_sleeptime instead)
                                     CPU1_ts->idle_entrytime = now = 6
                                   CPU1_ts->idle_active = 0

// get_cpu_iowait_time_us(1, NULL)
[ CPU1_ts->idle_active == 0, CPU1_rq->nr_iowait == 0         ]
[ CPU1_ts->iowait_sleeptime = 4, CPU1_ts->idle_entrytime = 6 ]
iowait = CPU1_ts->iowait_sleeptime = 4
(iowait decreased from 6 to 4)
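
The "not updated" step above comes from the nr_iowait check in
update_ts_time_stats() (kernel/time/tick-sched.c), which looks roughly
like this:

    static void
    update_ts_time_stats(int cpu, struct tick_sched *ts, ktime_t now,
                         u64 *last_update_time)
    {
            ktime_t delta;

            if (ts->idle_active) {
                    delta = ktime_sub(now, ts->idle_entrytime);
                    if (nr_iowait_cpu(cpu) > 0)
                            ts->iowait_sleeptime =
                                    ktime_add(ts->iowait_sleeptime, delta);
                    else
                            /* nr_iowait already dropped back to 0, so the
                               whole interval is charged to idle time, even
                               though a reader may have reported part of it
                               as iowait in the meantime */
                            ts->idle_sleeptime =
                                    ktime_add(ts->idle_sleeptime, delta);
                    ts->idle_entrytime = now;
            }

            if (last_update_time)
                    *last_update_time = ktime_to_us(now);
    }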


> Another thing while at it. It seems that an update done from drivers/cpufreq/cpufreq_governor.c
> (calling get_cpu_iowait_time_us() -> update_ts_time_stats()) can randomly race with a CPU
> entering/exiting idle. I have no idea why drivers/cpufreq/cpufreq_governor.c does the update
> itself. It can just compute the delta like any reader. Maybe we could remove that and only
> ever call update_ts_time_stats() from the CPU that exits idle.
>
> What do you think?

I am all for it. We just need to make sure that the cpufreq governors
can cope with non-monotonic idle and iowait times. I'll take
a closer look at the code, but I wouldn't mind if Arjan (CCed)
beat me to it.
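
If we go that route, coping would mean something along these lines in
the governor's delta computation (a sketch only; the bookkeeping field
name is made up):

    /*
     * Sketch: compute an iowait delta that tolerates the reported counter
     * going backwards. "prev_cpu_iowait" stands in for whatever per-CPU
     * bookkeeping the governor keeps around.
     */
    static u64 safe_iowait_delta(int cpu, u64 *prev_cpu_iowait)
    {
            u64 cur = get_cpu_iowait_time_us(cpu, NULL);
            u64 delta = (cur < *prev_cpu_iowait) ? 0 : cur - *prev_cpu_iowait;

            *prev_cpu_iowait = cur;
            return delta;
    }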

Thanks,
Fernando
