Date:   Mon, 28 Sep 2020 20:41:59 +0800
From:   Quanyang Wang <quanyang.wang@...driver.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        Leo Yan <leo.yan@...aro.org>, Will Deacon <will@...nel.org>,
        a.darwish@...utronix.de,
        Daniel Lezcano <daniel.lezcano@...aro.org>,
        Paul Cercueil <paul@...pouillou.net>,
        Randy Dunlap <rdunlap@...radead.org>,
        ben.dooks@...ethink.co.uk, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH] time/sched_clock: mark sched_clock_read_begin as notrace

Hi Peter,

On 9/28/20 6:58 PM, Peter Zijlstra wrote:
> On Mon, Sep 28, 2020 at 06:49:52PM +0800, quanyang.wang@...driver.com wrote:
>> From: Quanyang Wang <quanyang.wang@...driver.com>
>>
>> Since sched_clock_read_begin is called by notrace function sched_clock,
>> it shouldn't be traceable either, or else __ftrace_graph_caller will
>> run into a dead loop on the path (arm for instance):
>>
>>    ftrace_graph_caller
>>      prepare_ftrace_return
>>        function_graph_enter
>>          ftrace_push_return_trace
>>            trace_clock_local
>>              sched_clock
>>                sched_clock_read_begin
>>
>> Fixes: 1b86abc1c645 ("sched_clock: Expose struct clock_read_data")
>> Signed-off-by: Quanyang Wang <quanyang.wang@...driver.com>
>> ---
>>   kernel/time/sched_clock.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
>> index 1c03eec6ca9b..58459e1359d7 100644
>> --- a/kernel/time/sched_clock.c
>> +++ b/kernel/time/sched_clock.c
>> @@ -68,7 +68,7 @@ static inline u64 notrace cyc_to_ns(u64 cyc, u32 mult, u32 shift)
>>   	return (cyc * mult) >> shift;
>>   }
>>   
>> -struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
>> +notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
>>   {
>>   	*seq = raw_read_seqcount_latch(&cd.seq);
>>   	return cd.read_data + (*seq & 1);
> At the very least sched_clock_read_retry() should also be marked such.

In fact, sched_clock_read_retry is treated as an inline function, so it
doesn't trigger the dead loop. But to be safe, it is better to mark it
notrace as well.
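For reference, the V2 change would presumably annotate both latch helpers in kernel/time/sched_clock.c along these lines (a sketch only, not the actual V2 patch; the function bodies follow the source as of the 1b86abc1c645 commit):

	/*
	 * Sketch of marking both helpers notrace, so that neither can
	 * recurse back into the function-graph tracer via sched_clock().
	 */
	notrace struct clock_read_data *sched_clock_read_begin(unsigned int *seq)
	{
		*seq = raw_read_seqcount_latch(&cd.seq);
		return cd.read_data + (*seq & 1);
	}

	notrace int sched_clock_read_retry(unsigned int seq)
	{
		return read_seqcount_retry(&cd.seq, seq);
	}

Even though the compiler currently inlines sched_clock_read_retry (so no mcount call site is emitted), the explicit notrace keeps the behavior correct if a future compiler or config decides not to inline it.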

I will send a V2 patch.

Thanks,

Quanyang


>
> But Steve, how come x86 works? Our sched_clock() doesn't have notrace on
> at all.
