Message-ID: <AM8PR05MB73321BFCEC3FF0CECC2DE390E2200@AM8PR05MB7332.eurprd05.prod.outlook.com>
Date: Tue, 15 Sep 2020 10:54:35 +0000
From: Tuong Tong Lien <tuong.t.lien@...tech.com.au>
To: Eric Dumazet <eric.dumazet@...il.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"jmaloy@...hat.com" <jmaloy@...hat.com>,
"maloy@...jonn.com" <maloy@...jonn.com>,
"ying.xue@...driver.com" <ying.xue@...driver.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
CC: "tipc-discussion@...ts.sourceforge.net"
<tipc-discussion@...ts.sourceforge.net>
Subject: RE: [net] tipc: fix using smp_processor_id() in preemptible
> -----Original Message-----
> From: Eric Dumazet <eric.dumazet@...il.com>
> Sent: Wednesday, September 2, 2020 2:11 PM
> To: Tuong Tong Lien <tuong.t.lien@...tech.com.au>; Eric Dumazet <eric.dumazet@...il.com>; davem@...emloft.net;
> jmaloy@...hat.com; maloy@...jonn.com; ying.xue@...driver.com; netdev@...r.kernel.org
> Cc: tipc-discussion@...ts.sourceforge.net
> Subject: Re: [net] tipc: fix using smp_processor_id() in preemptible
>
>
>
> On 9/1/20 10:52 AM, Tuong Tong Lien wrote:
>
> > Ok, I've got your concern now. Actually, when writing this code I had the same thought as you, but decided to relax it for the
> > following reasons:
> > 1. I don't want to use any locking method here that could lead to contention (and so hurt overall performance...);
> > 2. The list is not a usual list but a fixed "ring" of persistent elements (no one will insert/remove any element after it is created);
> > 3. It does _not_ matter at all if the function calls return the same element, or if one call points to the 1st element while another
> > at the same time points to the 3rd one, etc., as long as an element in the ring is returned. Also, the per-cpu pointer is _not_
> > required to point exactly to the next element; it just needs to be advanced this time or the next..., so just relaxing!
> > 4. Isn't a "write" to the per-cpu variable atomic?
> >
>
> I think I will give up; this code is clearly racy, and I will consider TIPC as broken.
>
> Your patch only silenced the syzbot report, but the bug is still there.
Hi Eric,
Sorry, but could you please tell me why you think it is "racy" and why the bug is still there? Thanks!
I agree we could do this in a cleaner way, but for now isn't disabling preemption prior to the per-cpu variable access good enough? Also, even if the code is interrupted by BH or a hardware interrupt..., we should have no issue (see the sketch below).
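To make the idea concrete, here is a minimal sketch of what I mean. This is not the actual net/tipc code; the names demo_ring, demo_elem, demo_next and DEMO_RING_SIZE are made up purely for illustration. The per-CPU "next" index is only read and advanced with preemption disabled, so the task cannot migrate to another CPU in the middle of the access:

#include <linux/percpu.h>
#include <linux/preempt.h>

#define DEMO_RING_SIZE	16

struct demo_elem;

struct demo_ring {
	/* fixed ring of persistent elements, created once and never
	 * inserted into / removed from afterwards
	 */
	struct demo_elem *elem[DEMO_RING_SIZE];
};

static DEFINE_PER_CPU(unsigned int, demo_next);

static struct demo_elem *demo_ring_next(struct demo_ring *r)
{
	struct demo_elem *e;
	unsigned int *next;

	next = get_cpu_ptr(&demo_next);		/* disables preemption */
	e = r->elem[*next % DEMO_RING_SIZE];
	(*next)++;				/* advance this CPU's index */
	put_cpu_ptr(&demo_next);		/* re-enables preemption */

	return e;
}

Even if a BH or hard interrupt fires on the same CPU between the read and the increment, the worst case is that two callers get the same element or one advance is lost, which per point 3 above is harmless since any element of the ring is acceptable.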
BR/Tuong
>