Date:   Sat, 21 Jan 2017 22:31:16 -0600
From:   wy11 <>
Subject: Questions about process statistics


Recently I noticed that in early kernel versions, the scheduler suffered from a tick-based time accounting attack, and that this has since been fixed by the introduction of CFS and nanosecond-granularity accounting.

However, as for the statistics exported from the kernel via /proc/stat, the data seems to be updated on every tick by update_process_times(), and the granularity is jiffies.

In my view, applications that use these statistics from userspace would still be vulnerable to a time accounting attack, since a process can arrange to run only between two ticks and thus evade being accounted (please correct me if I'm wrong). Is there any special reason that /proc/stat only provides jiffy granularity? Would it be possible to update the statistics every time the CPU switches to another process, instead of on every tick, and to read the TSC for a more accurate time value?
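To make the concern concrete, here is a toy userspace simulation (not kernel code; the tick period and the run intervals are made up for illustration) comparing tick-sampled accounting with exact switch-time accounting for a process that always sleeps across tick boundaries:

```python
# Toy simulation: tick-sampled CPU accounting vs. exact accounting
# for a process that deliberately yields just before every tick.

TICK = 10  # hypothetical tick period in ms (e.g. HZ = 100)

def account(run_intervals, tick=TICK):
    """Return (tick_charged_ms, exact_ms) for a list of (start, end) runs."""
    exact = sum(end - start for start, end in run_intervals)
    # Tick sampling charges a full tick to whichever task happens to be
    # running at each tick boundary.
    ticks = 0
    horizon = max(end for _, end in run_intervals)
    t = tick
    while t <= horizon:
        if any(start <= t < end for start, end in run_intervals):
            ticks += 1
        t += tick
    return ticks * tick, exact

# An "evasive" process runs 8 ms inside each 10 ms tick window,
# always sleeping across the tick boundary itself.
evasive = [(i * TICK + 1, i * TICK + 9) for i in range(100)]
charged, used = account(evasive)
print(charged, used)  # tick sampling charges 0 ms; it really used 800 ms
```

Under this model the evasive process consumes 80% of the CPU yet is never charged a single tick, which is exactly the gap that per-switch accounting with a fine-grained clock would close.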

Also, I noticed that the acct_rss_mem1/acct_vm_mem1 fields in task_struct are updated on every tick, so a malicious process can occupy a large amount of memory between two ticks without it being recorded. Would it be possible to update the accumulated memory statistics every time the memory size changes (for example, in insert_page), by adding the previous memory size multiplied by the elapsed time interval? I'd like to know whether this would help to avoid the time accounting attack and achieve more accurate statistics.
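As a sketch of what I mean, here is a toy model of the proposed event-driven scheme (the class and field names are purely illustrative, not actual kernel symbols): the integral of memory over time is updated at each size change rather than at each tick, so a short-lived spike between ticks is still charged.

```python
# Toy sketch of event-driven memory-time accounting: accumulate
# previous_size * elapsed_time whenever the size changes, instead of
# sampling the current size at each tick.

class MemAccount:
    def __init__(self, now):
        self.size = 0       # current memory size, arbitrary units
        self.last = now     # time of the last size change, in ms
        self.integral = 0   # accumulated size * time (unit-ms)

    def resize(self, new_size, now):
        # Charge the old size for the elapsed interval, then switch.
        self.integral += self.size * (now - self.last)
        self.size = new_size
        self.last = now

acct = MemAccount(now=0)
acct.resize(100, now=0)  # map 100 units at t=0
acct.resize(0, now=5)    # unmap at t=5, before any 10 ms tick fires
print(acct.integral)     # 500: the spike is fully accounted
```

With tick sampling the same spike would contribute nothing, since no tick falls inside [0, 5).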

I would appreciate it if you could answer my questions. Thanks a lot.

Best Regards,

