Open Source and information security mailing list archives
Date: Mon, 14 Mar 2011 21:13:31 -0700
From: David Sharp <dhsharp@...gle.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Slava Pestov <slavapestov@...gle.com>, linux-kernel@...r.kernel.org, mrubin@...gle.com
Subject: Re: [PATCH] ftrace: add a new 'tail drops' counter for overflow events

On Mon, Mar 14, 2011 at 7:39 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
> On Mon, 2011-03-14 at 15:53 -0700, Slava Pestov wrote:
>> The existing 'overrun' counter is incremented when the ring
>> buffer wraps around, with overflow on (the default). We wanted
>> a way to count requests lost from the buffer filling up with
>> overflow off, too. I decided to add a new counter instead
>> of retro-fitting the existing one because it seems like a
>> different statistic to count conceptually, and also because
>> of how the code was structured.
>
> So this is when we are in producer/consumer mode and the ring buffer
> fills up and events are dropped.
>
> For this we could just add a new ring buffer type. We could reuse the
> RINGBUF_TYPE_TIME_STAMP type and call it RINGBUF_TYPE_LOST_EVENTS instead.
> I never implemented TIME_STAMP as I never found a need to ;)
>
> We currently have a TIME_EXTEND that is relative to the last
> event but has a total of 59 bits for time. That being nanoseconds, we can
> handle events that are 18 years apart. That far apart and never being
> read.
>
> The LOST_EVENTS could store the number of events lost when it starts
> reading again. This way raw readers will know that events were lost and
> how many.

s/reading/writing ? i.e., the next time enough space is available, it would
first write a LOST_EVENTS event before returning space for the next event?

Regardless of whether or not you want to put this info in the trace itself
(I agree it would be very useful to know where events were dropped), I think
it's useful for a user to be able to quickly see the total number of events
that were lost without grepping the trace.
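[Editorial sketch, not part of the original thread: the scheme being discussed can be illustrated with a toy userspace model. This is a hypothetical simplification, not the actual ftrace ring-buffer code: a fixed-size buffer in producer/consumer mode (overwrite off) drops writes when full, counts them in a lifetime 'tail drops' counter, and, once space frees up again, first emits a LOST_EVENTS marker carrying the pending drop count before the next data event, so raw readers can see both where and how many events were lost.]

```c
#include <stddef.h>

/* Toy model of an overwrite-off ring buffer with a LOST_EVENTS marker.
 * All names here are illustrative, not real ftrace identifiers. */

enum ev_type { EV_DATA, EV_LOST_EVENTS };

struct event {
    enum ev_type type;
    unsigned long payload;   /* data value, or number of lost events */
};

#define BUF_SLOTS 4

struct ring {
    struct event ev[BUF_SLOTS];
    size_t head, tail;           /* head = next write, tail = next read */
    size_t count;                /* entries currently in the buffer     */
    unsigned long pending_drops; /* drops not yet reported in-stream    */
    unsigned long total_drops;   /* lifetime 'tail drops' statistic     */
};

static int ring_push(struct ring *r, enum ev_type t, unsigned long v)
{
    if (r->count == BUF_SLOTS)
        return -1;
    r->ev[r->head].type = t;
    r->ev[r->head].payload = v;
    r->head = (r->head + 1) % BUF_SLOTS;
    r->count++;
    return 0;
}

/* Producer entry point: when full, drop and count; otherwise flush any
 * pending LOST_EVENTS marker before writing the new data event. */
int ring_write(struct ring *r, unsigned long v)
{
    if (r->count == BUF_SLOTS ||
        (r->pending_drops && r->count == BUF_SLOTS - 1)) {
        /* No room for the event (or for marker + event): drop it. */
        r->pending_drops++;
        r->total_drops++;
        return -1;
    }
    if (r->pending_drops) {
        ring_push(r, EV_LOST_EVENTS, r->pending_drops);
        r->pending_drops = 0;
    }
    return ring_push(r, EV_DATA, v);
}

int ring_read(struct ring *r, struct event *out)
{
    if (r->count == 0)
        return -1;
    *out = r->ev[r->tail];
    r->tail = (r->tail + 1) % BUF_SLOTS;
    r->count--;
    return 0;
}
```

With this model, writing 1..4 fills the buffer, writes of 5 and 6 are dropped (total_drops becomes 2), and after the reader drains two events the next successful write first inserts a LOST_EVENTS record with payload 2; total_drops remains visible without grepping the stream, which is the point David is making.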
> -- Steve
>
>> Signed-Off-By: Slava Pestov <slavapestov@...gle.com>