Message-ID: <20170121223116.Horde.gg1Xwf3UdJ4baVcyUARx7g1@webmail.rice.edu>
Date: Sat, 21 Jan 2017 22:31:16 -0600
From: wy11 <wy11@...e.edu>
To: linux-kernel@...r.kernel.org
Subject: Questions about process statistics
Hello,
Recently I noticed that in early kernel versions, the scheduler
suffered from the following timing attack:
http://static.usenix.org/event/sec07/tech/full_papers/tsafrir/tsafrir_html/
This has since been fixed by the introduction of CFS and
nanosecond-granularity accounting.
However, the statistics the kernel exports through /proc/stat still
appear to be updated on every tick by update_process_times, with a
granularity of jiffies.
In my view, userspace applications that rely on these statistics
would still be vulnerable to a time-accounting attack, since a
process can run entirely between two ticks and evade being charged
(please correct me if I'm wrong). Is there a particular reason that
/proc/stat offers only jiffy granularity? Would it be possible to
update the statistics every time the CPU switches to another process
instead of on every tick, and to read the TSC for a more accurate
time value?
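To make the concern concrete, here is a toy simulation (my own sketch, not kernel code) of per-tick sampling as update_process_times effectively does it: a task that always sleeps across the tick instant is charged zero jiffies despite consuming most of the CPU. The constants (HZ=100, one-second window) are illustrative assumptions.

```python
# Toy model of tick-based CPU accounting; times are in microseconds.
TICK_US = 10_000          # 10 ms per jiffy (assuming HZ = 100)
TOTAL_US = 1_000_000      # simulate a 1-second window

def simulate(run_slices):
    """run_slices: list of (start_us, end_us) intervals the task runs.
    Returns (jiffies_charged, true_cpu_us)."""
    jiffies = 0
    for t in range(0, TOTAL_US, TICK_US):
        # A tick fires at time t; the task is charged one jiffy iff it
        # happens to be running at that exact instant.
        if any(s <= t < e for s, e in run_slices):
            jiffies += 1
    true_us = sum(e - s for s, e in run_slices)
    return jiffies, true_us

# Evasive task: runs 9 ms out of every 10 ms, but always sleeps
# across the tick boundary, so it is never sampled.
evasive = [(t + 500, t + 9_500) for t in range(0, TOTAL_US, TICK_US)]
print(simulate(evasive))                # 0 jiffies for ~900 ms of CPU
print(simulate([(0, TOTAL_US)]))        # honest task: 100 jiffies
```

Accounting on every context switch (with a TSC read) would instead measure each run slice exactly, which is essentially what CFS's nanosecond accounting does for scheduling decisions.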
Also, I noticed that the acct_rss_mem1/acct_vm_mem1 fields in
task_struct are updated on every tick, so a malicious process can
occupy a large amount of memory between two ticks without it being
accounted. Would it be possible to update the accumulated memory
counters every time the memory size changes (for example, in
insert_page), by adding the previous memory size multiplied by the
elapsed time? I'd like to know whether this would help avoid
time-accounting attacks and yield more accurate statistics.
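The difference between the two schemes can be sketched with another toy simulation (again my own model; the field names only echo acct_rss_mem1, and the event trace is invented): a large allocation that appears and disappears between two ticks is invisible to tick sampling, but an integral updated on every size change captures it.

```python
# Toy comparison of tick-sampled vs update-on-change memory-time
# accounting; times in microseconds, sizes in kB.
TICK_US = 10_000
TOTAL_US = 100_000

# (time_us, new_rss_kb): the task's RSS changes at these instants.
# The 500 MB burst lives entirely between the ticks at t=0 and t=10000.
events = [(0, 1_000), (2_000, 500_000), (8_000, 1_000)]

def rss_at(t):
    """RSS in effect at instant t, given the event trace above."""
    cur = 0
    for when, size in events:
        if when <= t:
            cur = size
    return cur

# Tick sampling: charge the current RSS once per tick (kB * us).
sampled = sum(rss_at(t) for t in range(0, TOTAL_US, TICK_US)) * TICK_US

# Update-on-change: at every size change, add previous_size * elapsed,
# i.e. the exact integral of RSS over time.
boundaries = [t for t, _ in events] + [TOTAL_US]
exact = sum(size * (t1 - t0)
            for (t0, size), t1 in zip(events, boundaries[1:]))

print(sampled, exact)   # the burst is invisible to the sampled total
```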
I'd appreciate it if you could answer my questions. Thanks a lot.
Best Regards,
Wenqiu