Message-ID: <521C68FC.9040705@hitachi.com>
Date:	Tue, 27 Aug 2013 17:53:16 +0900
From:	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
To:	Namhyung Kim <namhyung@...nel.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>,
	"zhangwei(Jovi)" <jovi.zhangwei@...wei.com>,
	Namhyung Kim <namhyung.kim@....com>,
	Hyeoncheol Lee <cheol.lee@....com>,
	LKML <linux-kernel@...r.kernel.org>,
	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: Re: [PATCH 10/13] tracing/uprobes: Fetch args before reserving
 a ring buffer

(2013/08/27 17:07), Namhyung Kim wrote:
> Hi Steven,
> 
> On Thu, 22 Aug 2013 21:08:30 -0400, Steven Rostedt wrote:
>> On Fri, 23 Aug 2013 07:57:15 +0800
>> "zhangwei(Jovi)" <jovi.zhangwei@...wei.com> wrote:
>>
>>
>>>>
>>>> What about creating a per cpu buffer when uprobes are registered, and
>>>> delete them when they are finished? Basically what trace_printk() does
>>>> if it detects that there are users of trace_printk() in the kernel.
>>>> Note, it does not deallocate them when finished, as it is never
>>>> finished until reboot ;-)
>>>>
>>>> -- Steve
>>>>
>>> I also thought about this approach, but the issue is that we cannot fetch
>>> user memory into a per-cpu buffer: using a per-cpu buffer requires
>>> preemption to be disabled, and fetching user memory could sleep.
>>
>> Actually, we could create a per_cpu mutex to match the per_cpu buffers.
>> This is not unlike what we do in -rt.
>>
>> 	int cpu;
>> 	struct mutex *mutex;
>> 	void *buf;
>>
>>
>> 	/*
>> 	 * Use per cpu buffers for fastest access, but we might migrate,
>> 	 * so the mutex makes sure we have sole access to it.
>> 	 */
>>
>> 	cpu = raw_smp_processor_id();
>> 	mutex = per_cpu(uprobe_cpu_mutex, cpu);
>> 	buf = per_cpu(uprobe_cpu_buffer, cpu);
>>
>> 	mutex_lock(mutex);
>> 	store_trace_args(..., buf,...);
>> 	mutex_unlock(mutex);
>>
> 
> Great!  I'll go with this approach.  Is it OK with you, Masami?

Yeah, it also seems to work. Please feel free to try it :)

Thank you,

-- 
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@...achi.com
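
For reference, below is a rough sketch of how the approach discussed above
(per-cpu buffers allocated when uprobes are registered, each paired with a
per-cpu mutex) might be wired together. It is illustrative only: every
identifier here (uprobe_cpu_mutex, uprobe_cpu_buffer, uprobe_buffer_init,
uprobe_trace_func) is an assumption made for this example, not taken from
the actual patch.

#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(struct mutex, uprobe_cpu_mutex);
static DEFINE_PER_CPU(void *, uprobe_cpu_buffer);

/* Set up the per-cpu buffers when the first uprobe is registered. */
static int uprobe_buffer_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);

		if (!buf)
			return -ENOMEM;	/* unwinding of earlier cpus omitted in this sketch */

		per_cpu(uprobe_cpu_buffer, cpu) = buf;
		mutex_init(&per_cpu(uprobe_cpu_mutex, cpu));
	}
	return 0;
}

/* Probe handler path: fetch args first, then reserve the ring buffer. */
static void uprobe_trace_func(void)
{
	int cpu = raw_smp_processor_id();
	struct mutex *mutex = &per_cpu(uprobe_cpu_mutex, cpu);
	void *buf = per_cpu(uprobe_cpu_buffer, cpu);

	/*
	 * Fetching the arguments may touch user memory and sleep, so the
	 * buffer is guarded by a mutex instead of preempt_disable(): even
	 * if we migrate while sleeping, we still have sole access to the
	 * buffer we picked.
	 */
	mutex_lock(mutex);
	/*
	 * 1. Copy the probe arguments into buf, as in the
	 *    store_trace_args(..., buf, ...) call quoted above.
	 * 2. Reserve the ring buffer entry and copy buf into it.
	 */
	mutex_unlock(mutex);
}

The mutex should be uncontended in the common case; it only matters when a
task sleeps mid-fetch, migrates, and another task then hits the same cpu's
buffer.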


