Message-ID: <ZesqVChUizIBmkvK@intel.com>
Date: Fri, 8 Mar 2024 10:10:12 -0500
From: Rodrigo Vivi <rodrigo.vivi@...el.com>
To: Lucas De Marchi <lucas.demarchi@...el.com>
CC: "Souza, Jose" <jose.souza@...el.com>, "intel-xe@...ts.freedesktop.org"
	<intel-xe@...ts.freedesktop.org>, "quic_mojha@...cinc.com"
	<quic_mojha@...cinc.com>, "johannes@...solutions.net"
	<johannes@...solutions.net>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, "Cavitt, Jonathan"
	<jonathan.cavitt@...el.com>
Subject: Re: [PATCH v2 2/4] devcoredump: Add dev_coredumpm_timeout()

On Fri, Mar 08, 2024 at 08:53:24AM -0600, Lucas De Marchi wrote:
> On Tue, Mar 05, 2024 at 03:38:58PM +0000, Jose Souza wrote:
> > On Tue, 2024-03-05 at 09:22 -0600, Lucas De Marchi wrote:
> > > On Tue, Mar 05, 2024 at 02:21:45PM +0000, Jose Souza wrote:
> > > > On Mon, 2024-03-04 at 17:55 -0600, Lucas De Marchi wrote:
> > > > > On Mon, Mar 04, 2024 at 02:29:03PM +0000, Jose Souza wrote:
> > > > > > On Fri, 2024-03-01 at 09:38 +0100, Johannes Berg wrote:
> > > > > > > > On Wed, 2024-02-28 at 17:56 +0000, Souza, Jose wrote:
> > > > > > > > > >
> > > > > > > > > > In my opinion, the timeout should depend on the type of device driver.
> > > > > > > > > >
> > > > > > > > > > In the case of server-class Ethernet cards, where corporate users automate most tasks, five minutes might even be considered excessive.
> > > > > > > > > >
> > > > > > > > > > For our case, GPUs, users might experience minor glitches and only search for what happened after finishing their current task (writing an email,
> > > > > > > > > > ending a gaming match, watching a YouTube video, etc.).
> > > > > > > > > > If they land on https://drm.pages.freedesktop.org/intel-docs/how-to-file-i915-bugs.html or the future Xe version of that page, following the
> > > > > > > > > > instructions alone may take inexperienced Linux users more than five minutes.
> > > > > > > >
> > > > > > > > That's all not wrong, but I don't see why you wouldn't automate this
> > > > > > > > even on end user machines? I feel you're boxing the problem in by
> > > > > > > > wanting to solve it entirely in the kernel?
> > > > > >
> > > > > > The other part of the stack that we provide is the libraries implementing the
> > > > > > Vulkan and OpenGL APIs; I don't think we could ship scripts that need
> > > > > > elevated privileges to read and store the coredump.
> > > > >
> > > > > it's still a very valid point though. Why are we doing this only on the
> > > > > kernel side or the mesa side rather than doing it in the proper place?  As
> > > > > Johannes said, this could very well be automated via udev rules.
> > > > > Distros already automate collecting coredumps with systemd-coredump and
> > > > > the like.  Why wouldn't we do it similarly for the GPU?  Handling this in
> > > > > the proper place leaves the policy for "how long to retain the
> > > > > log", "maximum size", "rotation", etc. outside of the kernel.
> > > >
> > > > Where and how would these udev rules be distributed?

Perhaps we could have the igt tool distribute the udev rule?
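Something along these lines, maybe (completely untested sketch; the rule
file name, the helper path and the output directory are placeholders I'm
making up here, but the devcoredump class device does expose the dump via
its per-device "data" node, and writing anything back to "data" frees it):

  # 99-devcoredump.rules (hypothetical file name)
  SUBSYSTEM=="devcoredump", ACTION=="add", RUN+="/usr/local/bin/devcd-collect.sh %S%p"

  # /usr/local/bin/devcd-collect.sh (hypothetical helper)
  #!/bin/sh
  # $1 is the sysfs directory of the devcd device (%S%p from the rule)
  dev="$1"
  out=/var/lib/devcoredump/$(basename "$dev")-$(date +%s).dump
  mkdir -p /var/lib/devcoredump
  # copy the dump out to disk ...
  cat "$dev/data" > "$out"
  # ... and poke "data" so devcoredump releases the memory right away
  echo 1 > "$dev/data"

For multi-GB dumps RUN is probably the wrong place to do the copy (udev
expects short-lived programs there), so a real implementation would more
likely hand the path off to a service, but the idea is the same.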

> > > 
> > > it depends on where you implement such logic to collect the GPU coredump.
> > > It might be a new project, it might be a daemon from mesa itself, it
> > > might be extending systemd-coredump.  Your decision on where to
> > > implement it will influence the reach it will have.
> > 
> > It doesn't make sense for it to be in Mesa; the compute and media stacks also need it.
> 
> so what? A project can't have something that is useful to other
> projects? Anyway, having it in mesa is just one of the possibilities I
> mentioned.

There's one big case that I don't see covered in this discussion, but some
folks might be wondering about it.

It might not be a matter of where and how we distribute the rule or how
we capture the log.

We might have cases where the dump is extremely large: 2GB+ if we are
set up to capture all the textures and everything else.
And only the very first hang is useful for debugging.

So, in this case we perhaps want to avoid extra huge captures.
Okay, most of that could be scripted out with udev: if the dump is
huge, don't write back to data, so you don't keep capturing more.
But then after 5 minutes the timeout frees it anyway, and we will get
more huge captures that we could likely have avoided in the first
place if we had the option to do so.
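To make it concrete, the helper sketched earlier could decide whether to
re-arm the capture based on the size it just wrote out (again untested,
and the 256M threshold is an arbitrary number just for illustration):

  size=$(stat -c %s "$out")
  if [ "$size" -lt $((256 * 1024 * 1024)) ]; then
          # small dump: free it so the driver can capture the next hang
          echo 1 > "$dev/data"
  fi
  # for a huge dump, leave it in place so we don't immediately arm
  # another multi-GB capture -- except that the 5 minute timeout then
  # frees it behind our back, which is exactly the point above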

This would possibly be one argument for a single capture, or for a longer
timeout before coredump removal.

But well, I believe this case only happens on debug machines used by devs,
so maybe it's not such a strong case at the end of the day.

> 
> > 
> > > 
> > > > Is there a portable way to do that for distros that don't ship with systemd?
> > > 
> > > If you do it in one place, people who care can probably replicate it to
> > > other environments.
> > 
> > But then the 5 min timeout is still problematic.
> > 
> > In my opinion we can have this automation, make it store the coredump on disk, do the dump rotation... but also have a 1 hour timeout.
> > The automation can write "0" to devcoredump/data and free the dump from memory on the distros that support this automation.
> 
> IMO it should not be treated as advanced automation, but rather as the
> normal way to collect a dev coredump. It's much more useful to the end user
> than documenting "oh, if you see a glitch on the screen, hurry up, you
> have X min to look at file /path/to/log to get it from the kernel. And
> btw the glitch could be something else that does not generate a
> coredump, so if you don't have it, that's normal".
> 
> It's not up to me to decide though. If maintainers think it's ok since it's
> a small change with no dire consequences, then fine.
> 
> Lucas De Marchi
> 
> > 
> > > 
> > > Lucas De Marchi
> > > 
> > > >
> > > > >
> > > > > For the purposes of reporting a bug, wouldn't it be better to instruct
> > > > > users to get the log that was saved to disk so they don't risk losing
> > > > > it? I view the timeout more as a "protection" from the kernel side to
> > > > > not waste memory if the complete stack is not in place. It shouldn't
> > > > > be viewed as a timeout for how long the *user* will take to get the log
> > > > > and create bug reports.
> > > > >
> > > > > Lucas De Marchi
> > > > >
> > > > > >
> > > > > > > >
> > > > > > > > > > I have set the timeout to one hour in the Xe driver, but this could increase if we start receiving user complaints.
> > > > > > > >
> > > > > > > > At an hour now, people will probably start arguing that "indefinitely"
> > > > > > > > is about right? But at that point you're probably back to persisting
> > > > > > > > them on disk anyway? Or maybe glitches happen during logout/shutdown ...
> > > > > >
> > > > > > The i915 driver doesn't use devcoredump; it persists the error dump in memory until the user frees it or reboots, and we got no complaints.
> > > > > >
> > > > > > > >
> > > > > > > > Anyway, I don't want to block this because I just don't care enough
> > > > > > > > about how you do things, but I think the kernel is the wrong place to
> > > > > > > > solve this problem... The intent here was to give some userspace time to
> > > > > > > > grab it (and yes for that 5 minutes is already way too long), not the
> > > > > > > > users. That's also part of the reason we only hold on to a single
> > > > > > > > instance, since I didn't want it to keep consuming more and more memory
> > > > > > > > for it if it happens repeatedly.
> > > > > > > >
> > > > > >
> > > > > > okay, so I will move forward with another version applying your suggestion to make dev_coredumpm() static inline and move it to the header.
> > > > > >
> > > > > > thank you for the feedback
> > > > > >
> > > > > > > > johannes
> > > > > >
> > > >
> > 
