Date:   Thu, 23 Feb 2017 14:04:39 +0800
From:   Tan Xiaojun <tanxiaojun@...wei.com>
To:     <peterz@...radead.org>, <mingo@...hat.com>, <acme@...nel.org>,
        <alexander.shishkin@...ux.intel.com>
CC:     <linux-kernel@...r.kernel.org>
Subject: [PATCH] perf/core: Fix to check perf_cpu_time_max_percent

Use "proc_dointvec_minmax" instead of "proc_dointvec" to check the input
value from userspace.

Without this check, userspace can write an arbitrarily large value, and
variables derived from it overflow: sysctl_perf_event_sample_rate, for
example, wraps to a negative number, which causes a lot of unexpected
problems.
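
To illustrate the arithmetic, here is a minimal userspace sketch of the
sample-rate recalculation (the sysctl_* variable names follow
kernel/events/core.c, but tick_nsec, hz and the averaged interrupt
length are illustrative stand-in constants, not the kernel's values):

#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* value written through the unchecked proc_dointvec() handler */
	int sysctl_perf_cpu_time_max_percent = INT_MAX;
	int sysctl_perf_event_sample_rate;

	/* illustrative stand-ins for TICK_NSEC, HZ and the averaged
	 * interrupt length seen when throttling kicks in */
	unsigned long long tick_nsec = 10000000ULL, hz = 100, avg_len = 10;

	unsigned long long max = (tick_nsec / 100) * sysctl_perf_cpu_time_max_percent;
	max /= avg_len;

	/* the 64-bit product is truncated into a signed int; with these
	 * numbers it wraps to -1000000, matching the negative values in
	 * the log below */
	sysctl_perf_event_sample_rate = max * hz;
	printf("%d\n", sysctl_perf_event_sample_rate);
	return 0;
}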

I found this problem while running the perf_fuzzer test on a Hisilicon
D03 board. The console logs below show the wrapped (negative)
kernel.perf_event_max_sample_rate values, followed by an RCU stall and
a soft lockup:

**********************************************************
[   90.880860] perf: Dynamic interrupt throttling disabled, can hang your system!
[   90.896873] perf: Dynamic interrupt throttling disabled, can hang your system!
[   91.884088] perf: Dynamic interrupt throttling disabled, can hang your system!
[   96.466762] perf: interrupt took too long (46 > 1), lowering kernel.perf_event_max_sample_rate to 175250
[   96.476228] perf: interrupt took too long (68 > 57), lowering kernel.perf_event_max_sample_rate to 117500
[   96.485774] perf: interrupt took too long (97 > 85), lowering kernel.perf_event_max_sample_rate to 82500
[   96.495249] perf: interrupt took too long (145 > 121), lowering kernel.perf_event_max_sample_rate to 55000
[   96.737083] perf: interrupt took too long (194 > 181), lowering kernel.perf_event_max_sample_rate to 41250
[   96.762287] perf: interrupt took too long (281 > 242), lowering kernel.perf_event_max_sample_rate to 28250
[   96.784693] perf: interrupt took too long (369 > 351), lowering kernel.perf_event_max_sample_rate to 21500
[   98.094492] perf: Dynamic interrupt throttling disabled, can hang your system!
[   99.550597] perf: interrupt took too long (10 > 1), lowering kernel.perf_event_max_sample_rate to -383141598
[   99.576690] perf: interrupt took too long (15 > 12), lowering kernel.perf_event_max_sample_rate to -1687083664
[   99.602955] perf: interrupt took too long (21 > 18), lowering kernel.perf_event_max_sample_rate to -176834776
[   99.629749] perf: interrupt took too long (32 > 26), lowering kernel.perf_event_max_sample_rate to -544439184
[   99.656491] perf: interrupt took too long (48 > 40), lowering kernel.perf_event_max_sample_rate to -1794615388
[   99.689272] perf: interrupt took too long (66 > 60), lowering kernel.perf_event_max_sample_rate to -475090842
[  101.823705] perf: interrupt took too long (1932 > 1), lowering kernel.perf_event_max_sample_rate to 8250
[  102.368030] perf: Dynamic interrupt throttling disabled, can hang your system!
[  105.872880] perf: Dynamic interrupt throttling disabled, can hang your system!
[  138.222411] perf: interrupt took too long (449 > 1), lowering kernel.perf_event_max_sample_rate to 1858059000
[  159.219465] INFO: rcu_preempt self-detected stall on CPU
[  159.224774]  8-...: (4 GPs behind) idle=dab/140000000000002/0 softirq=18839/18840 fqs=2625
[  159.227465] INFO: rcu_preempt detected stalls on CPUs/tasks:
[  159.227470]  8-...: (4 GPs behind) idle=dab/140000000000002/0 softirq=18839/18840 fqs=2625
[  159.227471]  (detected by 5, t=5252 jiffies, g=7249, c=7248, q=314)
[  159.227476] Task dump for CPU 8:
[  159.227477] perf_fuzzer     R  running task        0 11327   4975 0x00000203
[  159.227481] Call trace:
[  159.227487] [<ffff000008082eb0>] ret_from_fork+0x0/0x50
[  159.271265]   (t=5262 jiffies g=7249 c=7248 q=314)
[  159.276055] Task dump for CPU 8:
[  159.279283] perf_fuzzer     R  running task        0 11327   4975 0x00000203
[  159.286317] Call trace:
[  159.288773] [<ffff0000080883d4>] dump_backtrace+0x0/0x240
[  159.294167] [<ffff000008088628>] show_stack+0x14/0x1c
[  159.299215] [<ffff0000080e5f84>] sched_show_task+0x130/0x174
[  159.304866] [<ffff0000080e8948>] dump_cpu_task+0x40/0x4c
[  159.310171] [<ffff00000816f944>] rcu_dump_cpu_stacks+0xa4/0xec
[  159.315996] [<ffff00000811a160>] rcu_check_callbacks+0x9d8/0xc70
[  159.321992] [<ffff00000811e0e8>] update_process_times+0x2c/0x58
[  159.327893] [<ffff00000812d1cc>] tick_sched_handle.isra.17+0x20/0x60
[  159.334234] [<ffff00000812d250>] tick_sched_timer+0x44/0x88
[  159.339799] [<ffff00000811e980>] __hrtimer_run_queues+0xe8/0x14c
[  159.345795] [<ffff00000811ef04>] hrtimer_interrupt+0x9c/0x1e0
[  159.351536] [<ffff000008793a94>] arch_timer_handler_virt+0x2c/0x38
[  159.357708] [<ffff00000810f554>] handle_percpu_devid_irq+0x78/0x12c
[  159.363967] [<ffff00000810a5ec>] generic_handle_irq+0x24/0x38
[  159.369705] [<ffff00000810ac84>] __handle_domain_irq+0x60/0xac
[  159.375526] [<ffff0000080816e4>] gic_handle_irq+0x74/0x174
[  159.381003] Exception stack(0xffff803ffff34df0 to 0xffff803ffff34f20)
[  159.387429] 4de0:                                   ffff803ffff34e20 0001000000000000
[  159.395241] 4e00: ffff803ffff34f50 ffff0000080c29b0 0000000040000145 000000000000001b
[  159.403052] 4e20: 0000000000000400 ffff000008ed3300 ffff000008ed3300 0000803ff7196000
[  159.410864] 4e40: 000000000ccccccd 0000000000000020 000dc56c20000000 0000000000000032
[  159.418676] 4e60: 000000000000002e 000000003b9aca00 000000000000000d 656b20676e697265
[  159.426487] 4e80: 7265702e6c656e72 5f746e6576655f66 ffff000008ba6000 0000803ff7196000
[  159.434299] 4ea0: 000000000042c778 0000000000001c51 0000fffff3b48c50 ffff000008d9b000
[  159.442111] 4ec0: 0000000000000000 ffff000008d9fb08 ffff000008d9b000 ffff000008bc5728
[  159.449923] 4ee0: 000000000000001b ffff000008ba1000 ffff803ffff35090 0000000000000202
[  159.457734] 4f00: ffff803f6efbe400 ffff803ffff34f50 ffff0000080c2e40 ffff803ffff34f50
[  159.465550] [<ffff0000080827f4>] el1_irq+0xb4/0x128
[  159.470422] [<ffff0000080c2e40>] irq_exit+0xd0/0x118
[  159.475382] [<ffff00000810ac88>] __handle_domain_irq+0x64/0xac
[  159.481205] [<ffff0000080816e4>] gic_handle_irq+0x74/0x174
[  159.486684] Exception stack(0xffff803f70bfbec0 to 0xffff803f70bfbff0)
[  159.493115] bec0: 000000000000136f 0000000000000000 0000000000014f4b 0000ffff8233c5f0
[  159.500927] bee0: 0000ffff8233b270 0000000000000000 0000ffff823636f0 0000000000000000
[  159.508740] bf00: 00000000000000ad 00000000ffffff80 0000000000000001 0000fffff3b48e10
[  159.516552] bf20: 0000000000000000 0000000100000000 0000000000000000 00004a51a0000000
[  159.524365] bf40: 000000000042c778 0000ffff82266c70 0000fffff3b48c50 0000000000414bd0
[  159.532176] bf60: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  159.539988] bf80: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[  159.547799] bfa0: 0000000000000000 0000fffff3b48e60 0000000000409264 0000fffff3b48e60
[  159.555614] bfc0: 0000000000410a14 0000000020000000 0000000000000000 ffffffffffffffff
[  159.563425] bfe0: 0000000000000000 0000000000000000
[  159.568296] [<ffff000008082d18>] el0_irq_naked+0x4c/0x54
[  185.429581] NMI watchdog: BUG: soft lockup - CPU#8 stuck for 22s! [perf_fuzzer:11327]
[  185.437397] Modules linked in:
[  185.440454]
[  185.441956] CPU: 8 PID: 11327 Comm: perf_fuzzer Not tainted 4.10.0-rc1-ga1d2732-dirty #5
[  185.450028] Hardware name: Huawei Taishan 2180 /BC11SPCC, BIOS 1.31 06/23/2016
[  185.457235] task: ffff803f6efbe400 task.stack: ffff803f70bf8000
[  185.463149] PC is at __do_softirq+0xc0/0x240
[  185.467416] LR is at irq_exit+0xd0/0x118
[  185.471340] pc : [<ffff0000080c29b0>] lr : [<ffff0000080c2e40>] pstate: 40000145
[  185.478719] sp : ffff803ffff34f50
[  185.482033] x29: ffff803ffff34f50 x28: ffff803f6efbe400
[  185.487339] x27: 0000000000000202 x26: ffff803ffff35090
[  185.492646] x25: ffff000008ba1000 x24: 000000000000001b
[  185.497952] x23: ffff000008bc5728 x22: ffff000008d9b000
[  185.503259] x21: ffff000008d9fb08 x20: 0000000000000000
[  185.508566] x19: ffff000008d9b000 x18: 0000fffff3b48c50
[  185.513872] x17: 0000000000001c51 x16: 000000000042c778
[  185.519180] x15: 0000803ff7196000 x14: ffff000008ba6000
[  185.524483] x13: 5f746e6576655f66 x12: 7265702e6c656e72
[  185.529790] x11: 656b20676e697265 x10: 000000000000000d
[  185.535093] x9 : 000000003b9aca00 x8 : 000000000000002e
[  185.540400] x7 : 0000000000000032 x6 : 000dc56c20000000
[  185.545707] x5 : 0000000000000020 x4 : 000000000ccccccd
[  185.551010] x3 : 0000803ff7196000 x2 : ffff000008ed3300
[  185.556313] x1 : ffff000008ed3300 x0 : 0000000000000400
[  185.561619]

**********************************************************

Signed-off-by: Tan Xiaojun <tanxiaojun@...wei.com>
---
 kernel/events/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 77a932b..53b1747 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -455,7 +455,7 @@ int perf_cpu_time_max_percent_handler(struct ctl_table *table, int write,
 				void __user *buffer, size_t *lenp,
 				loff_t *ppos)
 {
-	int ret = proc_dointvec(table, write, buffer, lenp, ppos);
+	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 
 	if (ret || !write)
 		return ret;
-- 
1.9.1
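
Note that proc_dointvec_minmax() only enforces bounds when the
ctl_table entry supplies them via .extra1/.extra2, so the one-line
change above relies on the perf_cpu_time_max_percent entry in
kernel/sysctl.c already carrying a 0..100 range. For reference, a
sketch of such an entry (field values as I understand the mainline
table of this era; zero/one_hundred are the static ints used as limits
there):

{
	.procname	= "perf_cpu_time_max_percent",
	.data		= &sysctl_perf_cpu_time_max_percent,
	.maxlen		= sizeof(sysctl_perf_cpu_time_max_percent),
	.mode		= 0644,
	.proc_handler	= perf_cpu_time_max_percent_handler,
	.extra1		= &zero,	/* minimum: 0 */
	.extra2		= &one_hundred,	/* maximum: 100 */
},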
