Date:   Fri, 23 Dec 2022 12:18:12 +0000
From:   Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
To:     Mirsad Goran Todorovac <mirsad.todorovac@....unizg.hr>,
        srinivas pandruvada <srinivas.pandruvada@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>, jani.nikula@...ux.intel.com,
        joonas.lahtinen@...ux.intel.com,
        Rodrigo Vivi <rodrigo.vivi@...el.com>
Cc:     Thorsten Leemhuis <regressions@...mhuis.info>,
        intel-gfx@...ts.freedesktop.org
Subject: Re: LOOKS GOOD: Possible regression in drm/i915 driver: memleak


On 22/12/2022 15:21, Mirsad Goran Todorovac wrote:
> On 12/22/2022 09:04, Tvrtko Ursulin wrote:
>> On 22/12/2022 00:12, Mirsad Goran Todorovac wrote:
>>> On 20. 12. 2022. 20:34, Mirsad Todorovac wrote:
>>>
>>> I have heard no reply from Tvrtko, and there is already 1d5h of uptime 
>>> with no leaks. (The kworker/memstick_check nag I couldn't bisect on the 
>>> only box that reproduced it, because something in its hardware was not 
>>> supported by pre-4.16 kernels on the Lenovo V530S-07ICB. Or I am doing 
>>> something wrong.)
>>>
>>> However, I can now find the memstick maintainers thanks to Tvrtko's 
>>> hint.
>>>
>>> If you no longer require my assistance, I will close this on my end.
>>>
>>> I hope I did not cause too much trouble. The knowledgeable knew that 
>>> this was not a security risk, but only a bug. (30 leaks of 64 bytes 
>>> each would hardly exhaust memory in any realistic time.)
>>>
>>> However, having some experience with software development, I have 
>>> always preferred bugs reported and fixed rather than concealed and 
>>> lying in wait (or worse, found first by a motivated adversary). 
>>> Forgive me this rant; I do not make a living writing kernel drivers, 
>>> this is just a pet project for the time being ...
> Hi,
>> It is not forgotten - I was trying to reach out to the original author 
>> of the fixlet which worked for you. If that fails I will take it upon 
>> myself, but I need to set aside some time to get into the exact problem 
>> space before I can vouch for the fix and send it on my own.
> That's good news. Possibly with some assistance I could bisect on 
> pre-4.16 kernels with the additional drivers.
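
For reference, a bisection like that is normally driven with git bisect; a 
generic outline follows, with v4.16 used only as a placeholder for whatever 
last-good kernel applies here:

    git bisect start
    git bisect bad HEAD        # the currently booted, leaking kernel
    git bisect good v4.16      # placeholder for the last known-good kernel
    # build, boot and test each commit git checks out, then mark it:
    #   git bisect good        (or: git bisect bad)
    git bisect reset           # restore the original checkout when done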

Sorry, maybe I am confused, but where does 4.16 come from?

>> In the meantime, definitely thanks a lot for testing this quickly and 
>> reporting back!
> Not at all; I considered it a privilege to assist your team.
>> What will happen next is that when either the original author or I am 
>> ready to send out the fix as a proper patch, you will be copied on it 
>> via the "Reported-by" and possibly "Tested-by" tags. The latter applies 
>> only if the patch remains identical; if it changes, we might kindly ask 
>> you to re-test if possible.
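
For reference, these tags land as a trailer block at the bottom of the 
patch's commit message; a generic example, using the addresses from this 
thread's headers (elided here exactly as the archive shows them):

    Reported-by: Mirsad Goran Todorovac <mirsad.todorovac@....unizg.hr>
    Tested-by: Mirsad Goran Todorovac <mirsad.todorovac@....unizg.hr>
    Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>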
> 
> I've seen the published patch and it seems to be the same two-line 
> change (-1/+1). In case of a change, I will attempt to test with the 
> same config, setup, and running programs.

Yes, it is the same diff, so there is really no need to re-test.

> I may need to correct myself with regard to the security aspect of this 
> patch, as addressed in commit 786555987207.
> 
> QUOTE:
> 
>     Currently we create a new mmap_offset for every call to
>     mmap_offset_ioctl. This exposes ourselves to an abusive client that
>     may simply create new mmap_offsets ad infinitum, which will exhaust
>     physical memory and the virtual address space. In addition to the
>     exhaustion, a very long linear list of mmap_offsets causes other
>     clients using the object to incur long list walks -- these long
>     lists can also be generated by simply having many clients generate
>     their own mmap_offset.
> 
> It is not obvious whether the bug that caused Chrome to trigger 30 
> memleaks could be exploited by an abusive script to exhaust larger 
> parts of kernel memory and possibly crash the kernel.

Indeed. Attackers' imagination can be pretty impressive, so I'd rather 
assume it is exploitable than that it isn't. Luckily it is "just" a 
memory leak rather than an information leak or worse. Hopefully we can 
merge the fix soon, as soon as a willing reviewer is found.
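
To make the quoted failure mode concrete, below is a minimal, 
self-contained C sketch of the bounded-offset pattern the commit message 
implies: cache at most one mmap offset per (object, mapping type) pair 
and hand the cached one back on repeated calls, so an abusive client 
cannot grow the bookkeeping without bound. All struct and function names 
are invented for illustration; this is not the actual i915 code.

#include <stdlib.h>

enum mmap_type { MMAP_WB, MMAP_WC, MMAP_TYPE_NR };

struct mmap_offset {
	unsigned long offset;	/* stand-in for the drm "fake offset" */
	enum mmap_type type;
};

struct gem_object {
	struct mmap_offset *mmo[MMAP_TYPE_NR];	/* at most one per type */
	unsigned long next_offset;	/* stand-in for a real allocator */
};

/* Return the cached offset node for this type, or create it once. */
static struct mmap_offset *
mmap_offset_attach(struct gem_object *obj, enum mmap_type type)
{
	struct mmap_offset *mmo = obj->mmo[type];

	if (mmo)		/* repeated ioctls reuse the same node, */
		return mmo;	/* so memory use is bounded per object  */

	mmo = malloc(sizeof(*mmo));
	if (!mmo)
		return NULL;

	mmo->type = type;
	mmo->offset = ++obj->next_offset;
	obj->mmo[type] = mmo;
	return mmo;
}

With per-type reuse, a client looping on the ioctl gets the same offset 
back instead of a new allocation each time, which removes both the memory 
exhaustion and the long list walks described in the quoted commit message.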

Regards,

Tvrtko
