Date:   Fri, 25 Oct 2019 11:37:59 +0200
From:   Joerg Vehlow <lkml@...coder.de>
To:     Steffen Klassert <steffen.klassert@...unet.com>,
        Tom Rix <trix@...hat.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Steven Rostedt <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     herbert@...dor.apana.org.au, davem@...emloft.net,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/1] xfrm : lock input tasklet skb queue

Hi,

I always expected this to be applied to the RT patches. That's why
I originally sent my patch to Sebastian, Thomas and Steven (I added
them again now). The website of the rt patches says patches for the
CONFIG_PREEMPT_RT patchset should be sent to lkml.

I hope one of the rt patch maintainers will reply here.

Jörg

On 24.10.2019 at 12:31, Steffen Klassert wrote:
> On Tue, Oct 22, 2019 at 05:22:04PM -0700, Tom Rix wrote:
>> On PREEMPT_RT_FULL while running netperf, a corruption
>> of the skb queue causes an oops.
>>
>> This appears to be caused by a race condition here:
>>          __skb_queue_tail(&trans->queue, skb);
>>          tasklet_schedule(&trans->tasklet);
>> where the queue is changed before the tasklet is locked by
>> tasklet_schedule.
>>
>> The fix is to use the skb queue lock.
>>
>> This is the original work of Joerg Vehlow <joerg.vehlow@...-tech.de>
>> https://lkml.org/lkml/2019/9/9/111
>>    xfrm_input: Protect queue with lock
>>
>>    During the skb_queue_splice_init the tasklet could have been preempted
>>    and __skb_queue_tail called, which led to an inconsistent queue.
>>
>> ifdefs for CONFIG_PREEMPT_RT_FULL added to reduce runtime effects
>> on the normal kernel.
> As Herbert commented on your initial patch, please
> fix PREEMPT_RT_FULL instead. There are certainly many
> more codepaths that make such assumptions. You cannot
> fix this by scattering spin_lock_irqsave() here
> and there.
>
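
For reference, a minimal sketch of the locking Tom's patch describes: the
sk_buff_head's built-in lock is taken around both the enqueue and the splice,
so the two can no longer interleave when the tasklet runs preemptibly under
PREEMPT_RT_FULL. The struct and function names follow the mainline
net/xfrm/xfrm_input.c code quoted above, but the signatures are simplified
here and the explicit locking is only an illustration of the approach, not
the merged patch:

    #include <linux/interrupt.h>
    #include <linux/skbuff.h>
    #include <linux/spinlock.h>

    /* Per-CPU deferral context, as in mainline xfrm_input.c. */
    struct xfrm_trans_tasklet {
            struct tasklet_struct tasklet;
            struct sk_buff_head queue;
    };

    /* Producer side (simplified signature): enqueue and kick the tasklet. */
    static int xfrm_trans_queue(struct sk_buff *skb,
                                struct xfrm_trans_tasklet *trans)
    {
            unsigned long flags;

            /*
             * Take the queue's own lock around the enqueue so it cannot
             * interleave with the splice in the tasklet below; mainline
             * uses the unlocked __skb_queue_tail() here.
             */
            spin_lock_irqsave(&trans->queue.lock, flags);
            __skb_queue_tail(&trans->queue, skb);
            spin_unlock_irqrestore(&trans->queue.lock, flags);

            tasklet_schedule(&trans->tasklet);
            return 0;
    }

    /* Consumer side: the tasklet drains the shared queue onto a private list. */
    static void xfrm_trans_reinject(unsigned long data)
    {
            struct xfrm_trans_tasklet *trans = (struct xfrm_trans_tasklet *)data;
            struct sk_buff_head queue;
            unsigned long flags;

            __skb_queue_head_init(&queue);

            /* The same lock protects the splice that empties trans->queue. */
            spin_lock_irqsave(&trans->queue.lock, flags);
            skb_queue_splice_init(&trans->queue, &queue);
            spin_unlock_irqrestore(&trans->queue.lock, flags);

            /* Reinjection of each skb on the private 'queue' list goes here. */
    }

Steffen's point above still stands, though: the same assumption (softirq
context serializing the queue) exists in other codepaths, so the RT patchset
is arguably the more natural place for a general fix than per-site locking.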
