Message-ID: <20110803102858.GB2790@hpt.nay.redhat.com>
Date: Wed, 3 Aug 2011 18:28:59 +0800
From: Han Pingtian <phan@...hat.com>
To: linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: Arnaldo Carvalho de Melo <acme@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, jolsa@...hat.com
Subject: perf complains about losing events
Hi,
I found the following comment about losing events in the perf source:
/*
* The kernel collects the number of events it couldn't send in a stretch and
* when possible sends this number in a PERF_RECORD_LOST event. The number of
* such "chunks" of lost events is stored in .nr_events[PERF_EVENT_LOST] while
* total_lost tells exactly how many events the kernel in fact lost, i.e. it is
* the sum of all struct lost_event.lost fields reported.
*
* The total_period is needed because by default auto-freq is used, so
* it isn't possible to get the total number of low level events just by
* multiplying nr_events[PERF_EVENT_SAMPLE] by a frequency; it is necessary to
* sum all struct sample_event.period fields and stash the result in total_period.
*/
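
If I read util/event.h and hist.h correctly, that bookkeeping boils down to
something like the standalone sketch below. The struct layouts are from my
reading of the perf sources, but I simplified events_stats to just the two
counters the comment talks about (nr_lost_chunks here stands in for
nr_events[PERF_EVENT_LOST]), the process_lost() helper is my own, and I used
stdint types instead of u32/u64, so please treat it as an illustration rather
than the real perf code:

#include <stdint.h>
#include <stdio.h>

#define PERF_RECORD_LOST 2	/* record type number in the perf ABI */

struct perf_event_header {
	uint32_t type;
	uint16_t misc;
	uint16_t size;
};

/* one PERF_RECORD_LOST describes one "chunk" of dropped events */
struct lost_event {
	struct perf_event_header header;
	uint64_t id;
	uint64_t lost;			/* events dropped in this chunk */
};

/* simplified stand-in for perf's events_stats */
struct events_stats {
	uint64_t total_lost;		/* sum of all lost_event.lost fields */
	uint32_t nr_lost_chunks;	/* i.e. nr_events[PERF_EVENT_LOST] */
};

static void process_lost(struct events_stats *stats,
			 const struct lost_event *ev)
{
	stats->nr_lost_chunks++;	/* one more chunk of drops seen */
	stats->total_lost += ev->lost;	/* running total of dropped events */
}

int main(void)
{
	/* pretend the kernel reported two chunks of drops */
	struct lost_event chunks[] = {
		{ { PERF_RECORD_LOST, 0, sizeof(struct lost_event) }, 1, 50000 },
		{ { PERF_RECORD_LOST, 0, sizeof(struct lost_event) }, 1, 26148 },
	};
	struct events_stats stats = { 0, 0 };
	unsigned int i;

	for (i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++)
		process_lost(&stats, &chunks[i]);

	printf("lost %llu events in %u chunks\n",
	       (unsigned long long)stats.total_lost, stats.nr_lost_chunks);
	return 0;
}

In other words, nr_events[PERF_EVENT_LOST] only counts how many
PERF_RECORD_LOST records were seen, while total_lost is the real number of
dropped samples.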
So my question is: is this loss of events a problem?
I have seen it many times:
[root@...dl580g7-01 perf]# ./perf kmem record sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 21.789 MB perf.data (~951977 samples) ]
Processed 0 events and LOST 76148!
Check IO/CPU overload!
[root@...dl580g7-01 perf]# ./perf kmem stat
Processed 0 events and LOST 76148!
Check IO/CPU overload!
SUMMARY
=======
Total bytes requested: 5725028
Total bytes allocated: 6291512
Total bytes wasted on internal fragmentation: 566484
Internal fragmentation: 9.003941%
Cross CPU allocations: 28/84295
--
Han Pingtian
Quality Engineer
hpt @ #kernel-qe
Red Hat, Inc
Freedom ... courage ... Commitment ... ACCOUNTABILITY