Date:   Tue, 2 Feb 2021 13:59:46 +0100
From:   Christian König <ckoenig.leichtzumerken@...il.com>
To:     NeilBrown <neilb@...e.de>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Cc:     mojha@...eaurora.org, jkosina@...e.cz, cezary.rojewski@...el.com,
        neilb@...e.com, b00073877@....edu, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] list: add more extensive double add check

On 01.02.21 23:16, NeilBrown wrote:
> On Mon, Feb 01 2021, Andy Shevchenko wrote:
>
>> On Mon, Feb 01, 2021 at 02:52:51PM +0100, Christian König wrote:
>>> Adding the same element to a linked list multiple times
>>> seems to be a rather common programming mistake. To debug
>>> those I've more than once written some code to check a
>>> linked list for duplicates.
>>>
>>> Since re-inventing the wheel over and over again is a bad
>>> idea, this patch adds some common code which allows
>>> checking linked lists for duplicates while adding new
>>> elements.
>>>
>>> When list debugging is enabled we currently already check
>>> whether the previous and next element are identical to the
>>> new one. This patch now adds a configuration option to
>>> check N elements before and after the desired position.
>>>
>>> By default we still only test one item since testing more
>>> means quite a large CPU overhead. This can be overridden
>>> on a per-C-file basis by defining DEBUG_LIST_DOUBLE_ADD
>>> before including list.h.
>> I'm not sure it is a good idea. Currently the implementation is *generic*.
>> You are customizing it without letting the caller know.
>>
>> Create a derivative implementation, name it exlist (exclusive list) and use
>> it whenever it makes sense.
>>
>> And I think if you still push to modify the generic one, the default must be
>> 0 in order not to alter the current behaviour.
> I don't understand your complaint.
> The extra checks are also completely *generic*.  It can never make sense
> to add something to a list if it is already on the list.  All lists are
> exclusive lists.
> The code ALREADY tests whether the inserted object is already present on
> either side of the insertion point.  This patch just extends it somewhat.

Correct, we are just checking for obvious bugs. The bigger problem is
the usability and the potential performance impact.

In other words, when you set this value too high the list_add() function
takes so much time that the kernel thinks the CPU is stuck. I was
already able to trigger this.
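
Roughly, the extended check boils down to something like the sketch
below (only an illustration; the helper name, macro default and WARN
message are mine, not necessarily the patch code). Every list_add()
then walks up to N entries on each side of the insertion point, which
is where the overhead comes from:

#ifndef DEBUG_LIST_DOUBLE_ADD
#define DEBUG_LIST_DOUBLE_ADD 1		/* default: check one entry per side */
#endif

static inline void __list_check_double_add(struct list_head *new,
					   struct list_head *prev,
					   struct list_head *next)
{
	struct list_head *pos;
	int i;

	/* Walk backwards from the insertion point. */
	for (i = 0, pos = prev; i < DEBUG_LIST_DOUBLE_ADD && pos != next;
	     i++, pos = pos->prev)
		WARN(pos == new, "list_add double add: new=%px\n", new);

	/* And forwards. */
	for (i = 0, pos = next; i < DEBUG_LIST_DOUBLE_ADD && pos != prev;
	     i++, pos = pos->next)
		WARN(pos == new, "list_add double add: new=%px\n", new);
}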

Would it be more acceptable if I drop the config option and only allow
overriding the check on a per-C-file basis?
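
A file that wants deeper checking would then do something like this
(macro name as described above, the value 16 is just an example):

#define DEBUG_LIST_DOUBLE_ADD 16	/* check 16 entries on each side */
#include <linux/list.h>

Everything else keeps the cheap one-entry default, so the cost is only
paid where somebody explicitly asked for it.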

> I myself have never had, or heard of, a bug due to double insertion so
> I'm not strongly in favour of this patch for that reason.
> But I *am* in favour of making the platform more resilient in general,
> and if others have experienced this sort of bug, then I'm in favour of
> making that easier to detect in future.

I have seen plenty of those. Especially when you implement state
machines where a certain object needs to move from state to state,
triggered by external events.

For example it seems to be a common mistake to do a list_del_init, drop
a lock and then assume a plain list_add will do once the lock has been
re-acquired. In reality there is a very small window in which a device
interrupt could already have added the item to the list again between
the locks.
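
A rough sketch of that pattern (the structures, the pending list and
the function are made up purely for illustration):

#include <linux/list.h>
#include <linux/spinlock.h>

struct my_dev {
	spinlock_t lock;
	struct list_head pending;
};

struct my_obj {
	struct list_head node;
};

static void resubmit(struct my_dev *dev, struct my_obj *obj)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	list_del_init(&obj->node);
	spin_unlock_irqrestore(&dev->lock, flags);

	/* Window: the device interrupt can re-add obj to dev->pending here. */

	spin_lock_irqsave(&dev->lock, flags);
	list_add(&obj->node, &dev->pending);	/* double add if the IRQ won the race */
	spin_unlock_irqrestore(&dev->lock, flags);
}

The usual fix is to check list_empty(&obj->node) under the lock before
the second list_add(); the extended debug check makes this kind of bug
show up immediately instead of as a corrupted list much later.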

Thanks,
Christian.

>
> NeilBrown
>
>
>>> A new kunit test, which intentionally triggers the debug
>>> functionality, is also added to the existing list tests.
>> -- 
>> With Best Regards,
>> Andy Shevchenko
