Adrian Hunter
d436373a75
perf tests: Make x86 new instructions test optional at build time
...
The "x86 instruction decoder - new instructions" test takes up space but
is only really useful to developers. Make it optional at build time.
Add variable EXTRA_TESTS which must be defined in order to build perf
with the test.
Example:
Before:
$ make -C tools/perf clean >/dev/null
$ make -C tools/perf >/dev/null
Makefile.config:650: No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR
Makefile.config:1149: libpfm4 not found, disables libpfm4 support. Please install libpfm4-dev
PERF_VERSION = 6.4.rc3.gd15b8c76c964
$ readelf -SW tools/perf/perf | grep '\.rela.dyn\|.rodata\|\.data.rel.ro'
[10] .rela.dyn RELA 000000000002fcb0 02fcb0 0748b0 18 A 6 0 8
[18] .rodata PROGBITS 00000000002eb000 2eb000 6bac00 00 A 0 0 32
[25] .data.rel.ro PROGBITS 00000000009ea180 9e9180 04b540 00 WA 0 0 32
After:
$ make -C tools/perf clean >/dev/null
$ make -C tools/perf >/dev/null
Makefile.config:650: No libunwind found. Please install libunwind-dev[el] >= 1.1 and/or set LIBUNWIND_DIR
Makefile.config:1154: libpfm4 not found, disables libpfm4 support. Please install libpfm4-dev
PERF_VERSION = 6.4.rc3.g4ea9c1569ea4
$ readelf -SW tools/perf/perf | grep '\.rela.dyn\|.rodata\|\.data.rel.ro'
[10] .rela.dyn RELA 000000000002f3c8 02f3c8 036d68 18 A 6 0 8
[18] .rodata PROGBITS 00000000002ac000 2ac000 68da80 00 A 0 0 32
[25] .data.rel.ro PROGBITS 000000000097d440 97c440 022280 00 WA 0 0 32
Committer notes:
Built with 'make EXTRA_TESTS=1 -C tools/perf O=/tmp/build/perf' and
reproduced the ELF section size differences.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com >
Acked-by: Ian Rogers <irogers@google.com >
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: http://lore.kernel.org/lkml/683fea7c-f5e9-fa20-f96b-f6233ed5d2a7@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-06-13 23:40:32 -03:00
Ian Rogers
ee84a3032b
perf thread: Add accessor functions for thread
...
Using accessors will make it easier to add reference count checking in
later patches.
Committer notes:
thread->nsinfo wasn't wrapped as it is used together with
nsinfo__zput(), which does a trick to set the field to NULL as its
refcount is dropped, and that doesn't work well with using
thread__nsinfo(thread), which loses the &thread->nsinfo pointer.
When refcount checking is added to 'struct thread', later in this
series, nsinfo__zput(RC_CHK_ACCESS(thread)->nsinfo) will be used to
check the thread pointer.
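For illustration, a minimal sketch of the accessor pattern (the struct and field names here are illustrative, not the exact perf definitions): callers read fields through a function, so a later patch can add the reference count check in one place.
```c
#include <sys/types.h>	/* pid_t */

/* Illustrative struct; mirrors typical 'struct thread' members only. */
struct thread_example {
	pid_t pid_;
	int   cpu;
};

/* Callers use the accessor instead of touching the field directly. */
static inline pid_t thread__pid(const struct thread_example *thread)
{
	return thread->pid_;
}
```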
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Brian Robbins <brianrob@linux.microsoft.com >
Cc: Changbin Du <changbin.du@huawei.com >
Cc: Dmitrii Dolgov <9erthalion6@gmail.com >
Cc: Fangrui Song <maskray@google.com >
Cc: German Gomez <german.gomez@arm.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Ivan Babrou <ivan@cloudflare.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: K Prateek Nayak <kprateek.nayak@amd.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Liam Howlett <liam.howlett@oracle.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Miguel Ojeda <ojeda@kernel.org >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Steinar H. Gunderson <sesse@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Wenyu Liu <liuwenyu7@huawei.com >
Cc: Will Deacon <will@kernel.org >
Cc: Yang Jihong <yangjihong1@huawei.com >
Cc: Ye Xingchen <ye.xingchen@zte.com.cn >
Cc: Yuan Can <yuancan@huawei.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230608232823.4027869-4-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-06-12 15:57:53 -03:00
Namhyung Kim
b541a91793
perf annotate: Remove x86 instructions with suffix
...
Now the suffix is handled in the general code. Let's get rid of them.
Reviewed-by: Adrian Hunter <adrian.hunter@intel.com >
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org >
Signed-off-by: Namhyung Kim <namhyung@kernel.org >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@kernel.org >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: https://lore.kernel.org/r/20230524205054.3087004-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-06-09 10:56:05 -03:00
Tiezhu Yang
49f3806d89
perf tools: Declare syscalltbl_*[] as const for all archs
...
syscalltbl_*[] should never change, so let us declare it as const.
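A minimal sketch with made-up entries: the double const marks both the array of pointers and the strings they point to as read-only, letting the toolchain place the table in read-only sections.
```c
static const char *const syscalltbl_example[] = {
	[0] = "read",
	[1] = "write",
	[2] = "open",
};
```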
Suggested-by: Ian Rogers <irogers@google.com >
Reviewed-by: Huacai Chen <chenhuacai@loongson.cn >
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn >
Acked-by: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: loongarch@lists.linux.dev
Link: https://lore.kernel.org/r/1685441401-8709-2-git-send-email-yangtiezhu@loongson.cn
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-06-05 11:36:17 -03:00
Ian Rogers
7c1d862eda
perf test x86: intel-pt-test data is immutable so mark it const
...
This allows the movement of 5,808 bytes from .data to .rodata.
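A sketch of the idea with illustrative types and values (not the actual intel-pt-test tables): adding const to file-scope test data moves it out of the writable .data section.
```c
struct pt_packet_case {
	const char *name;
	unsigned char bytes[8];
	int len;
};

/* before: static struct pt_packet_case cases[] = ...;  placed in .data      */
/* after:  adding const lets the toolchain use a read-only section instead   */
static const struct pt_packet_case cases[] = {
	{ .name = "TNT", .bytes = { 0x02, 0xa3 }, .len = 2 },
};
```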
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: K Prateek Nayak <kprateek.nayak@amd.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Masami Hiramatsu <mhiramat@kernel.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Paolo Bonzini <pbonzini@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Ross Zwisler <zwisler@chromium.org >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org >
Cc: Tiezhu Yang <yangtiezhu@loongson.cn >
Cc: Yang Jihong <yangjihong1@huawei.com >
Link: https://lore.kernel.org/r/20230526183401.2326121-4-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-28 10:21:41 -03:00
Ian Rogers
b1d870a8bb
perf test x86: insn-x86 test data is immutable so mark it const
...
This allows the movement of some sizeable data arrays (168,624 bytes) to
.data.rel.ro. Without PIE or the strings it could be moved to .rodata.
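To make the section distinction concrete (illustrative types only): under PIE, string pointers inside a const array still need load-time relocations, so such arrays land in .data.rel.ro (read-only after relocation), while const data with no pointers can go straight to .rodata.
```c
static const struct { const char *asm_text; int len; } insn_case = {
	"jmp 0x12345678", 5		/* pointer member -> .data.rel.ro under PIE */
};

static const unsigned char insn_bytes[] = {
	0xe9, 0x78, 0x56, 0x34, 0x12	/* no relocations -> .rodata */
};
```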
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: K Prateek Nayak <kprateek.nayak@amd.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Masami Hiramatsu <mhiramat@kernel.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Paolo Bonzini <pbonzini@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Ross Zwisler <zwisler@chromium.org >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org >
Cc: Tiezhu Yang <yangtiezhu@loongson.cn >
Cc: Yang Jihong <yangjihong1@huawei.com >
Link: https://lore.kernel.org/r/20230526183401.2326121-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-28 10:21:13 -03:00
Ian Rogers
94f9eb95d9
perf pmus: Remove perf_pmus__has_hybrid
...
perf_pmus__has_hybrid was used to detect when there was more than one
core PMU; this can be achieved with perf_pmus__num_core_pmus, which
doesn't depend upon is_pmu_hybrid and PMU name comparisons. When
modifying the function calls, take the opportunity to improve comments,
to enable/simplify tests that previously failed for hybrid but now
pass, and to simplify generic code.
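A hedged sketch of the substitution described above; the prototype shape is assumed from the pmus API and exact call sites in the tree may differ.
```c
#include <stdbool.h>

int perf_pmus__num_core_pmus(void);	/* assumed prototype, tools/perf/util/pmus.h */

static bool is_hybrid_example(void)
{
	/* before: return perf_pmus__has_hybrid();      */
	/* after:  more than one core PMU means hybrid  */
	return perf_pmus__num_core_pmus() > 1;
}
```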
Reviewed-by: Kan Liang <kan.liang@linux.intel.com >
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Dmitrii Dolgov <9erthalion6@gmail.com >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Ming Wang <wangming01@loongson.cn >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-34-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-27 09:42:38 -03:00
Ian Rogers
9d6a1df9b2
perf pmus: Allow just core PMU scanning
...
Scanning all PMUs is expensive as every PMU's sysfs entries are loaded;
benchmarking shows more than 4x the cost:
```
$ perf bench internals pmu-scan -i 1000
Computing performance of sysfs PMU event scan for 1000 times
Average core PMU scanning took: 989.231 usec (+- 1.535 usec)
Average PMU scanning took: 4309.425 usec (+- 74.322 usec)
```
Add new perf_pmus__scan_core routine that scans just core
PMUs. Replace perf_pmus__scan calls with perf_pmus__scan_core when
non-core PMUs are being ignored.
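A sketch of the iteration pattern, assuming the usual scan-style API where passing NULL starts the walk and a NULL return ends it:
```c
struct perf_pmu;
struct perf_pmu *perf_pmus__scan_core(struct perf_pmu *pmu);	/* assumed prototype */

static void for_each_core_pmu_example(void (*cb)(struct perf_pmu *pmu))
{
	struct perf_pmu *pmu = NULL;

	while ((pmu = perf_pmus__scan_core(pmu)) != NULL)
		cb(pmu);	/* visits only core PMUs, skipping uncore/aux */
}
```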
Reviewed-by: Kan Liang <kan.liang@linux.intel.com >
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Dmitrii Dolgov <9erthalion6@gmail.com >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Ming Wang <wangming01@loongson.cn >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-30-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-27 09:42:00 -03:00
Ian Rogers
1eaf496ed3
perf pmu: Separate pmu and pmus
...
Separate and hide the pmus list in pmus.[ch]. Move pmus functionality
out of pmu.[ch] into pmus.[ch] renaming pmus functions which were
prefixed perf_pmu__ to perf_pmus__.
Reviewed-by: Kan Liang <kan.liang@linux.intel.com >
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Dmitrii Dolgov <9erthalion6@gmail.com >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Ming Wang <wangming01@loongson.cn >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-28-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-27 09:41:39 -03:00
Ian Rogers
875375ea91
perf x86 mem: minor refactor to is_mem_loads_aux_event
...
Find the PMU and then the event off of it.
Reviewed-by: Kan Liang <kan.liang@linux.intel.com >
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Dmitrii Dolgov <9erthalion6@gmail.com >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Ming Wang <wangming01@loongson.cn >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-27-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-27 09:41:29 -03:00
Ian Rogers
dd64647ecb
perf x86: Iterate hybrid PMUs as core PMUs
...
Rather than iterating over a separate hybrid list, iterate all PMUs
with the hybrid ones having is_core as true.
Reviewed-by: Kan Liang <kan.liang@linux.intel.com >
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Dmitrii Dolgov <9erthalion6@gmail.com >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Ming Wang <wangming01@loongson.cn >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-18-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-27 09:40:21 -03:00
Ian Rogers
7b100989b4
perf evlist: Remove __evlist__add_default
...
__evlist__add_default adds a cycles event to a typically empty evlist
and was extended for hybrid with evlist__add_default_hybrid, as more
than 1 PMU was necessary. Rather than have dedicated logic for the
cycles event, this change switches to parsing 'cycles:P' which will
handle wildcarding the PMUs appropriately for hybrid.
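A hedged sketch of the idea: instead of hand-building one cycles evsel per core PMU, let the event parser expand "cycles:P". The prototype follows the usual parse-events API shape; the helper the tree actually uses may differ.
```c
struct evlist;
struct parse_events_error;

/* assumed prototype shape */
int parse_events(struct evlist *evlist, const char *str,
		 struct parse_events_error *err);

static int add_default_cycles_example(struct evlist *evlist,
				      struct parse_events_error *err)
{
	/* before: dedicated logic built a cycles evsel per core PMU   */
	/* after:  the parser wildcards "cycles:P" across core PMUs    */
	return parse_events(evlist, "cycles:P", err);
}
```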
Reviewed-by: Kan Liang <kan.liang@linux.intel.com >
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Dmitrii Dolgov <9erthalion6@gmail.com >
Cc: Huacai Chen <chenhuacai@kernel.org >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Ming Wang <wangming01@loongson.cn >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Will Deacon <will@kernel.org >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230527072210.2900565-14-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-27 09:39:37 -03:00
Namhyung Kim
983034cd0d
perf annotate: Handle "decq", "incq", "testq", "tzcnt" instructions on x86
...
I found that the "decq", "incq", "testq", "tzcnt" instructions didn't
parse the operands properly. Add them to the "x86__instructions" table
to fix the issue.
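An illustrative fragment of what such table entries look like, assuming the perf annotate convention of { name, ops } pairs; the ops choices here are placeholders, not the ones the patch actually uses.
```c
struct ins_ops;
struct ins { const char *name; struct ins_ops *ops; };	/* shape assumed */

extern struct ins_ops dec_ops, mov_ops;	/* placeholder ops */

static struct ins x86_instructions_example[] = {
	{ .name = "decq",  .ops = &dec_ops },
	{ .name = "incq",  .ops = &dec_ops },
	{ .name = "testq", .ops = &mov_ops },
	{ .name = "tzcnt", .ops = &mov_ops },
};
```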
Signed-off-by: Namhyung Kim <namhyung@kernel.org >
Acked-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ingo Molnar <mingo@kernel.org >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: https://lore.kernel.org/r/20230511062725.514752-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-15 17:50:01 -03:00
Ian Rogers
5136e43c61
perf parse-events: Don't reorder atom cpu events
...
On hybrid systems the topdown events don't share a fixed counter on
the atom core, so they don't require the sorting that PMUs supporting
the perf metric do.
Signed-off-by: Ian Rogers <irogers@google.com >
Tested-by: Kan Liang <kan.liang@linux.intel.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ahmad Yasin <ahmad.yasin@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Caleb Biggers <caleb.biggers@intel.com >
Cc: Edward Baker <edward.baker@intel.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Perry Taylor <perry.taylor@intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Samantha Alt <samantha.alt@intel.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Tiezhu Yang <yangtiezhu@loongson.cn >
Cc: Weilin Wang <weilin.wang@intel.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: Yang Jihong <yangjihong1@huawei.com >
Link: https://lore.kernel.org/r/20230502223851.2234828-38-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-15 09:12:14 -03:00
Ian Rogers
68911aef3d
perf test x86 hybrid: Add hybrid extended type checks
...
Assert hybrid extended types are as expected.
Signed-off-by: Ian Rogers <irogers@google.com >
Tested-by: Kan Liang <kan.liang@linux.intel.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ahmad Yasin <ahmad.yasin@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Caleb Biggers <caleb.biggers@intel.com >
Cc: Edward Baker <edward.baker@intel.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Perry Taylor <perry.taylor@intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Samantha Alt <samantha.alt@intel.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Tiezhu Yang <yangtiezhu@loongson.cn >
Cc: Weilin Wang <weilin.wang@intel.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: Yang Jihong <yangjihong1@huawei.com >
Link: https://lore.kernel.org/r/20230502223851.2234828-23-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-15 09:12:13 -03:00
Ian Rogers
8d8632887d
perf test x86 hybrid: Update test expectations
...
Don't assume evlist order. Switch to a loop rather than depend on
evlist order for raw events test.
Update hybrid event expectations. Previous values were based on
parsing legacy hardware events from sysfs, update to the correct PMU
specific legacy values.
Signed-off-by: Ian Rogers <irogers@google.com >
Tested-by: Kan Liang <kan.liang@linux.intel.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ahmad Yasin <ahmad.yasin@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Caleb Biggers <caleb.biggers@intel.com >
Cc: Edward Baker <edward.baker@intel.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Perry Taylor <perry.taylor@intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Samantha Alt <samantha.alt@intel.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Tiezhu Yang <yangtiezhu@loongson.cn >
Cc: Weilin Wang <weilin.wang@intel.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: Yang Jihong <yangjihong1@huawei.com >
Link: https://lore.kernel.org/r/20230502223851.2234828-22-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-15 09:12:13 -03:00
Ian Rogers
ae4aa00a1a
perf test: Move x86 hybrid tests to arch/x86
...
The tests use x86 hybrid specific PMUs.
Signed-off-by: Ian Rogers <irogers@google.com >
Tested-by: Kan Liang <kan.liang@linux.intel.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ahmad Yasin <ahmad.yasin@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Caleb Biggers <caleb.biggers@intel.com >
Cc: Edward Baker <edward.baker@intel.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kang Minchul <tegongkang@gmail.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Perry Taylor <perry.taylor@intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Rob Herring <robh@kernel.org >
Cc: Samantha Alt <samantha.alt@intel.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Sumanth Korikkar <sumanthk@linux.ibm.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Tiezhu Yang <yangtiezhu@loongson.cn >
Cc: Weilin Wang <weilin.wang@intel.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: Yang Jihong <yangjihong1@huawei.com >
Link: https://lore.kernel.org/r/20230502223851.2234828-21-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-05-15 09:12:13 -03:00
Ravi Bangoria
78075d9475
perf test: Add selftest to test IBS invocation via core pmu events
...
The IBS PMU can be invoked via a fixed set of core PMU events with
'precise_ip' set to 1. Add a simple event open test for all these events.
Without kernel fix:
$ sudo ./perf test -vv 76
76: AMD IBS via core pmu :
--- start ---
test child forked, pid 6553
Using CPUID AuthenticAMD-25-1-1
type: 0x0, config: 0x0, fd: 3 - Pass
type: 0x0, config: 0x1, fd: -1 - Pass
type: 0x4, config: 0x76, fd: -1 - Fail
type: 0x4, config: 0xc1, fd: -1 - Fail
type: 0x4, config: 0x12, fd: -1 - Pass
test child finished with -1
---- end ----
AMD IBS via core pmu: FAILED!
With kernel fix:
$ sudo ./perf test -vv 76
76: AMD IBS via core pmu :
--- start ---
test child forked, pid 7526
Using CPUID AuthenticAMD-25-1-1
type: 0x0, config: 0x0, fd: 3 - Pass
type: 0x0, config: 0x1, fd: -1 - Pass
type: 0x4, config: 0x76, fd: 3 - Pass
type: 0x4, config: 0xc1, fd: 3 - Pass
type: 0x4, config: 0x12, fd: -1 - Pass
test child finished with 0
---- end ----
AMD IBS via core pmu: Ok
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com >
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org >
Link: https://lkml.kernel.org/r/20230504110003.2548-5-ravi.bangoria@amd.com
2023-05-08 10:58:31 +02:00
James Clark
6593f019c2
perf tools: Add util function for overriding user set config values
...
There is some duplicated code to only override config values if they
haven't already been set by the user, so make a util function for this.
Signed-off-by: James Clark <james.clark@arm.com >
Acked-by: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Denis Nikitin <denik@google.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mathieu Poirier <mathieu.poirier@linaro.org >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Will Deacon <will@kernel.org >
Cc: Yang Shi <shy828301@gmail.com >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230424134748.228137-3-james.clark@arm.com
[ Moved evsel__set_config_if_unset() to util/pmu.c to avoid dragging stuff into the python binding ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-24 14:41:51 -03:00
Arnaldo Carvalho de Melo
ce1d3bc273
perf evsel: Introduce evsel__name_is() method to check if the evsel name is equal to a given string
...
This makes the logic a bit clearer by avoiding the !strcmp() pattern,
and also provides a way to intercept the pointer if we need to do extra
validation on it or to do lazy setting of evsel->name via evsel__name(evsel).
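A minimal sketch of what such a helper looks like; the in-tree version may add extra validation around evsel__name().
```c
#include <stdbool.h>
#include <string.h>

struct evsel;
const char *evsel__name(struct evsel *evsel);	/* lazily sets evsel->name */

static inline bool evsel__name_is(struct evsel *evsel, const char *name)
{
	return strcmp(evsel__name(evsel), name) == 0;
}
```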
Reviewed-by: "Liang, Kan" <kan.liang@linux.intel.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Link: https://lore.kernel.org/lkml/ZEGLM8VehJbS0gP2@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-24 14:28:11 -03:00
Arnaldo Carvalho de Melo
313b4c1ccd
perf x86 iostat: Use zfree() to reduce chances of use after free
...
Do defensive programming by using zfree() to initialize freed pointers
to NULL, so that an eventual use after free results in a NULL pointer
deref instead of more subtle behaviour.
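A sketch of the idea behind zfree(): free the buffer and also clear the pointer that referred to it, so a stale use faults cleanly. The macro form is illustrative; perf has its own zfree() helper.
```c
#include <stdlib.h>

#define ZFREE_EXAMPLE(pp) do { free(*(pp)); *(pp) = NULL; } while (0)

static void drop_name_example(char **name)
{
	/* before: free(*name);   -- leaves *name dangling                 */
	ZFREE_EXAMPLE(name);	  /* *name is now NULL, so reuse faults    */
}
```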
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-12 09:59:19 -03:00
Ian Rogers
2a6e5e8a2a
perf map: Add accessors for ->pgoff and ->reloc
...
Later changes will add reference count checking for 'struct map'. Add
accessors so that the reference count check is only necessary in one
place.
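An illustrative accessor shape (not the exact perf 'struct map'): while the struct is still accessed directly the accessor is a trivial read, and once reference count checking lands only this one function needs the checked access.
```c
#include <linux/types.h>	/* __u64, via tools/include */

struct map_example {
	__u64 pgoff;
	__u64 reloc;
};

static inline __u64 map__pgoff_example(const struct map_example *map)
{
	return map->pgoff;
}
```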
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Alexey Bayduraev <alexey.v.bayduraev@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com >
Cc: Darren Hart <dvhart@infradead.org >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: Dmitriy Vyukov <dvyukov@google.com >
Cc: Eric Dumazet <edumazet@google.com >
Cc: German Gomez <german.gomez@arm.com >
Cc: Hao Luo <haoluo@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Masami Hiramatsu <mhiramat@kernel.org >
Cc: Miaoqian Lin <linmq006@gmail.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Riccardo Mancini <rickyman7@gmail.com >
Cc: Shunsuke Nakamura <nakamura.shun@fujitsu.com >
Cc: Song Liu <song@kernel.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: Stephen Brennan <stephen.s.brennan@oracle.com >
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Yury Norov <yury.norov@gmail.com >
Link: https://lore.kernel.org/r/20230404205954.2245628-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-06 22:12:40 -03:00
Ian Rogers
e5116f46d4
perf map: Add accessor for start and end
...
Later changes will add reference count checking for struct map; start
and end are frequently accessed variables. Add an accessor so that the
reference count check is only necessary in one place.
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Alexey Bayduraev <alexey.v.bayduraev@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com >
Cc: Darren Hart <dvhart@infradead.org >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: Dmitriy Vyukov <dvyukov@google.com >
Cc: Eric Dumazet <edumazet@google.com >
Cc: German Gomez <german.gomez@arm.com >
Cc: Hao Luo <haoluo@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Masami Hiramatsu <mhiramat@kernel.org >
Cc: Miaoqian Lin <linmq006@gmail.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Riccardo Mancini <rickyman7@gmail.com >
Cc: Shunsuke Nakamura <nakamura.shun@fujitsu.com >
Cc: Song Liu <song@kernel.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: Stephen Brennan <stephen.s.brennan@oracle.com >
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Yury Norov <yury.norov@gmail.com >
Link: https://lore.kernel.org/r/20230320212248.1175731-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-04 16:54:11 -03:00
Ian Rogers
ff583dc43d
perf maps: Remove rb_node from struct map
...
struct map is reference counted, but having it also be a node in a
red-black tree complicates the reference counting. Switch to having a
map_rb_node which is a red-black tree node but points at the reference
counted struct map. This reference is responsible for a single reference
count.
Committer notes:
Fixed up tools/perf/util/unwind-libunwind-local.c to use map_rb_node as
well.
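A sketch of the split described above: the rb-tree node is a separate, non-refcounted wrapper that owns exactly one reference on the map it points at.
```c
#include <linux/rbtree.h>	/* rb_node, via tools/include */

struct map;	/* reference counted elsewhere */

struct map_rb_node_example {
	struct rb_node rb_node;	/* lives in the maps red-black tree          */
	struct map *map;	/* holds one reference on the pointed-at map */
};
```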
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Alexey Bayduraev <alexey.v.bayduraev@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Andrew Morton <akpm@linux-foundation.org >
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com >
Cc: Darren Hart <dvhart@infradead.org >
Cc: Davidlohr Bueso <dave@stgolabs.net >
Cc: Dmitriy Vyukov <dvyukov@google.com >
Cc: Eric Dumazet <edumazet@google.com >
Cc: German Gomez <german.gomez@arm.com >
Cc: Hao Luo <haoluo@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Madhavan Srinivasan <maddy@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Masami Hiramatsu <mhiramat@kernel.org >
Cc: Miaoqian Lin <linmq006@gmail.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Riccardo Mancini <rickyman7@gmail.com >
Cc: Shunsuke Nakamura <nakamura.shun@fujitsu.com >
Cc: Song Liu <song@kernel.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: Stephen Brennan <stephen.s.brennan@oracle.com >
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: Thomas Richter <tmricht@linux.ibm.com >
Cc: Yury Norov <yury.norov@gmail.com >
Link: https://lore.kernel.org/r/20230320212248.1175731-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-04 14:06:27 -03:00
Namhyung Kim
98b7ce0ed8
perf intel-pt: Use perf_pmu__scan_file_at() if possible
...
Intel PT calls perf_pmu__scan_file() a lot; let's use relative addressing
when it accesses multiple files in one place.
Signed-off-by: Namhyung Kim <namhyung@kernel.org >
Acked-by: Adrian Hunter <adrian.hunter@intel.com >
Acked-by: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@kernel.org >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: https://lore.kernel.org/r/20230331202949.810326-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-04 13:23:59 -03:00
Namhyung Kim
463786658d
perf pmu: Use relative path in setup_pmu_alias_list()
...
Likewise, x86 needs to traverse the PMU list to build the alias list.
Let's use the new helpers to use relative paths.
Signed-off-by: Namhyung Kim <namhyung@kernel.org >
Acked-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ingo Molnar <mingo@kernel.org >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: https://lore.kernel.org/r/20230331202949.810326-2-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-04-04 13:23:59 -03:00
Adrian Hunter
052072f69f
perf intel-pt: Add support for new branch instructions ERETS and ERETU
...
Intel Flexible Return and Event Delivery (FRED) adds the instructions
ERETS (return to supervisor) and ERETU (return to user). The Intel PT
instruction decoder needs to know about these instructions because they
are branch instructions. As with IRET instructions, when the decoder
encounters one of them it will match it to a TIP (target instruction
pointer) packet that indicates what the branch destination is.
The existing "x86 instruction decoder - new instructions" test can be
used to test the result e.g.
$ perf test -v ins |& grep eret
Decoded ok: f2 0f 01 ca erets
Decoded ok: f3 0f 01 ca eretu
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com >
Acked-by: Ian Rogers <irogers@google.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Link: https://lore.kernel.org/r/20230320183517.15099-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-03-20 19:25:40 -03:00
Leo Yan
2d31e0bff2
perf kvm: Use macro to replace variable 'decode_str_len'
...
The variable 'decode_str_len' defines the string length for KVM event
names and every arch defines its own value.
This introduces complexity in that the variable definitions are spread
across multiple source files under the arch folder. This patch refactors
the code to use a macro, KVM_EVENT_NAME_LEN, to define the event name
length and thus removes the definitions in the arch files.
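A sketch of the refactor; the length value here is an assumption, the point is one shared macro instead of per-arch 'decode_str_len' variables.
```c
#define KVM_EVENT_NAME_LEN	32	/* illustrative value */

struct kvm_event_name_example {
	char name[KVM_EVENT_NAME_LEN];	/* previously sized by decode_str_len */
};
```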
Signed-off-by: Leo Yan <leo.yan@linaro.org >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230315145112.186603-2-leo.yan@linaro.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-03-15 16:43:34 -03:00
Ian Rogers
347c2f0a09
perf parse-events: Sort and group parsed events
...
This change is intended to be a no-op for most current cases, the
default sort order is the order the events were parsed. Where it
varies is in how groups are handled. Previously an uncore and core
event that are grouped would most often cause the group to be removed:
```
$ perf stat -e '{instructions,uncore_imc_free_running_0/data_total/}' -a sleep 1
WARNING: grouped events cpus do not match, disabling group:
anon group { instructions, uncore_imc_free_running_0/data_total/ }
...
```
However, when wildcards are used the events should be re-sorted and
re-grouped in parse_events__set_leader, but this currently fails for
simple examples:
```
$ perf stat -e '{uncore_imc_free_running/data_read/,uncore_imc_free_running/data_write/}' -a sleep 1
Performance counter stats for 'system wide':
<not counted> MiB uncore_imc_free_running/data_read/
<not counted> MiB uncore_imc_free_running/data_write/
1.000996992 seconds time elapsed
```
A further failure mode, fixed in this patch, is to force topdown events
into a group.
This change moves sorting the evsels in the evlist after parsing. It
requires parsing to set up groups. First the evsels are sorted
respecting the existing groupings and parse order, but also reordering
to ensure evsels of the same PMU and group appear together. So that
software and aux events respect groups, their pmu_name is taken from
the group leader. The sorting is done with list_sort removing a memory
allocation.
After sorting a pass is done to correct the group leaders and for
topdown events ensuring they have a group leader.
This fixes the problems seen before:
```
$ perf stat -e '{uncore_imc_free_running/data_read/,uncore_imc_free_running/data_write/}' -a sleep 1
Performance counter stats for 'system wide':
727.42 MiB uncore_imc_free_running/data_read/
81.84 MiB uncore_imc_free_running/data_write/
1.000948615 seconds time elapsed
```
As well as making groups not fail for cases like:
```
$ perf stat -e '{imc_free_running_0/data_total/,imc_free_running_1/data_total/}' -a sleep 1
Performance counter stats for 'system wide':
256.47 MiB imc_free_running_0/data_total/
256.48 MiB imc_free_running_1/data_total/
1.001165442 seconds time elapsed
```
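A hedged sketch of the list_sort step: the comparator keeps the existing group/parse order but brings events of the same group and PMU together. Field and helper names are illustrative, not the exact perf code.
```c
#include <linux/kernel.h>	/* container_of */
#include <linux/list.h>
#include <linux/list_sort.h>
#include <string.h>

struct evsel_example {
	struct list_head node;
	const char *pmu_name;	/* software/aux events take the leader's name */
	int group_idx;		/* group the event was parsed into */
};

static int evsel_cmp_example(void *priv, const struct list_head *a,
			     const struct list_head *b)
{
	const struct evsel_example *ea = container_of(a, struct evsel_example, node);
	const struct evsel_example *eb = container_of(b, struct evsel_example, node);

	if (ea->group_idx != eb->group_idx)
		return ea->group_idx - eb->group_idx;
	return strcmp(ea->pmu_name, eb->pmu_name);
}

/* usage: list_sort(NULL, &evlist_entries, evsel_cmp_example); */
```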
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Kim Phillips <kim.phillips@amd.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Steinar H. Gunderson <sesse@google.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Link: https://lore.kernel.org/r/20230312021543.3060328-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-03-13 17:42:26 -03:00
Ian Rogers
3c7b84d419
perf pmu: Earlier PMU auxtrace initialization
...
This allows event parsing to use the evsel__is_aux_event function,
which is important when determining event grouping.
Suggested-by: Adrian Hunter <adrian.hunter@intel.com >
Signed-off-by: Ian Rogers <irogers@google.com >
Acked-by: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Kim Phillips <kim.phillips@amd.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Steinar H. Gunderson <sesse@google.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Link: https://lore.kernel.org/r/20230312021543.3060328-2-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-03-13 15:12:19 -03:00
Ian Rogers
1647cd5b88
perf stat: Implement --topdown using json metrics
...
Request the topdown metric group of a level with the metrics in the
group 'TopdownL<level>' rather than through specific events. As more
topdown levels are supported this way, such as 6 on Intel Ice Lake,
default to just showing the level 1 metrics. This can be overridden
using '--td-level'. Rather than determine the maximum topdown level
from sysfs, use the metric group names. Remove some now unused topdown
code.
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com >
Cc: Andrii Nakryiko <andrii@kernel.org >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Caleb Biggers <caleb.biggers@intel.com >
Cc: Eduard Zingerman <eddyz87@gmail.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Perry Taylor <perry.taylor@intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-stm32@st-md-mailman.stormreply.com
Link: https://lore.kernel.org/r/20230219092848.639226-41-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-02-19 08:07:24 -03:00
Ian Rogers
94b1a603fc
perf stat: Add TopdownL1 metric as a default if present
...
When there are no events and on Intel, the topdown events will be
added by default if present. Displaying the metrics associated with
these requires special handling in stat-shadow.c. To more easily update
these metrics, use the json metric version via the TopdownL1 group.
This makes the handling less platform specific.
Modify the metricgroup__has_metric code to also cover metric groups.
Signed-off-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com >
Cc: Andrii Nakryiko <andrii@kernel.org >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Caleb Biggers <caleb.biggers@intel.com >
Cc: Eduard Zingerman <eddyz87@gmail.com >
Cc: Florian Fischer <florian.fischer@muhq.space >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: James Clark <james.clark@arm.com >
Cc: Jing Zhang <renyu.zj@linux.alibaba.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Perry Taylor <perry.taylor@intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Ravi Bangoria <ravi.bangoria@amd.com >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Sean Christopherson <seanjc@google.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Suzuki Poulouse <suzuki.poulose@arm.com >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-stm32@st-md-mailman.stormreply.com
Link: https://lore.kernel.org/r/20230219092848.639226-40-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-02-19 08:07:19 -03:00
Kan Liang
957ed139d7
perf event x86: Add retire_lat when synthesizing PERF_SAMPLE_WEIGHT_STRUCT
...
In arch_perf_synthesize_sample_weight(), the retire_lat was mistakenly
missed; add it.
perf test -v "x86 sample parsing"
74: x86 Sample parsing :
--- start ---
test child forked, pid 72526
Samples differ at 'retire_lat'
parsing failed for sample_type 0x1000000
test child finished with -1
---- end ----
x86 Sample parsing: FAILED!
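For context, a sketch of the PERF_SAMPLE_WEIGHT_STRUCT layout (little-endian form, as in the perf UAPI; shown for illustration only): the 64-bit weight splits into three fields, and on x86 var3_w carries the Retire Latency that was not being synthesized.
```c
#include <linux/types.h>

union perf_sample_weight_example {
	__u64 full;
	struct {
		__u32 var1_dw;
		__u16 var2_w;
		__u16 var3_w;	/* Retire Latency on x86 */
	};
};
```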
Reported-by: Arnaldo Carvalho de Melo <acme@kernel.org >
Signed-off-by: Kan Liang <kan.liang@linux.intel.com >
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Link: https://lore.kernel.org/r/20230206162100.3329395-1-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-02-06 14:56:22 -03:00
Kan Liang
e65f91b20c
perf test x86: Support the retire_lat (Retire Latency) sample_type check
...
Add test for the new field for Retire Latency in the X86 specific test.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com >
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Link: https://lore.kernel.org/r/20230202192209.1795329-3-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-02-06 11:53:07 -03:00
Kan Liang
d7d213e04c
perf report: Support Retire Latency
...
The Retire Latency field is added in the var3_w of
PERF_SAMPLE_WEIGHT_STRUCT. Retire Latency reports the pipeline stall of
this instruction, in cycles, compared to the previous instruction. That's
quite useful information to display with perf mem report.
The p_stage_cyc for Power also comes from var3_w. Union the p_stage_cyc
and retire_lat fields to share the code.
Implement x86 specific code to display the x86 specific header.
Add a new sort key retire_lat for the Retire Latency.
Reviewed-by: Andi Kleen <ak@linux.intel.com >
Signed-off-by: Kan Liang <kan.liang@linux.intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Link: http://lore.kernel.org/lkml/20230104201349.1451191-8-kan.liang@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-02-03 17:24:02 -03:00
James Clark
f8ad6018ce
perf pmu: Remove duplication around EVENT_SOURCE_DEVICE_PATH
...
The pattern for accessing EVENT_SOURCE_DEVICE_PATH is duplicated in a
few places, so add two utility functions to cover it. Also just use
perf_pmu__scan_file() instead of pmu_type() which already does the same
thing.
No functional changes.
Reviewed-by: Leo Yan <leo.yan@linaro.org >
Signed-off-by: James Clark <james.clark@arm.com >
Acked-by: Suzuki Poulouse <suzuki.poulose@arm.com >
Tested-by: Tanmay Jagdale <tanmay@marvell.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Bharat Bhushan <bbhushan2@marvell.com >
Cc: George Cherian <gcherian@marvell.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: John Garry <john.g.garry@oracle.com >
Cc: Linu Cherian <lcherian@marvell.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Mathieu Poirier <mathieu.poirier@linaro.org >
Cc: Mike Leach <mike.leach@linaro.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Sunil Kovvuri Goutham <sgoutham@marvell.com >
Cc: Will Deacon <will@kernel.org >
Cc: coresight@lists.linaro.org
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/r/20230120143702.4035046-2-james.clark@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2023-01-22 18:17:27 -03:00
Ian Rogers
378ef0f5d9
perf build: Use libtraceevent from the system
...
Remove the LIBTRACEEVENT_DYNAMIC and LIBTRACEFS_DYNAMIC make command
line variables.
If libtraceevent isn't installed or NO_LIBTRACEEVENT=1 is passed to the
build, don't compile in libtraceevent and libtracefs support.
This also disables CONFIG_TRACE that controls "perf trace".
CONFIG_LIBTRACEEVENT is used to control enablement in Build/Makefiles,
HAVE_LIBTRACEEVENT is used in C code.
Without HAVE_LIBTRACEEVENT tracepoints are disabled and as such the
commands kmem, kwork, lock, sched and timechart are removed. The
majority of commands continue to work including "perf test".
Committer notes:
Fixed up a tools/perf/util/Build reject and added:
#include <traceevent/event-parse.h>
to tools/perf/util/scripting-engines/trace-event-perl.c.
Committer testing:
$ rpm -qi libtraceevent-devel
Name : libtraceevent-devel
Version : 1.5.3
Release : 2.fc36
Architecture: x86_64
Install Date: Mon 25 Jul 2022 03:20:19 PM -03
Group : Unspecified
Size : 27728
License : LGPLv2+ and GPLv2+
Signature : RSA/SHA256, Fri 15 Apr 2022 02:11:58 PM -03, Key ID 999f7cbf38ab71f4
Source RPM : libtraceevent-1.5.3-2.fc36.src.rpm
Build Date : Fri 15 Apr 2022 10:57:01 AM -03
Build Host : buildvm-x86-05.iad2.fedoraproject.org
Packager : Fedora Project
Vendor : Fedora Project
URL : https://git.kernel.org/pub/scm/libs/libtrace/libtraceevent.git/
Bug URL : https://bugz.fedoraproject.org/libtraceevent
Summary : Development headers of libtraceevent
Description :
Development headers of libtraceevent-libs
$
Default build:
$ ldd ~/bin/perf | grep tracee
libtraceevent.so.1 => /lib64/libtraceevent.so.1 (0x00007f1dcaf8f000)
$
# perf trace -e sched:* --max-events 10
0.000 migration/0/17 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, dest_cpu: 1)
0.005 migration/0/17 sched:sched_wake_idle_without_ipi(cpu: 1)
0.011 migration/0/17 sched:sched_switch(prev_comm: "", prev_pid: 17 (migration/0), prev_state: 1, next_comm: "", next_prio: 120)
1.173 :0/0 sched:sched_wakeup(comm: "", pid: 3138 (gnome-terminal-), prio: 120)
1.180 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 3138 (gnome-terminal-), next_prio: 120)
0.156 migration/1/21 sched:sched_migrate_task(comm: "", pid: 1603763 (perf), prio: 120, orig_cpu: 1, dest_cpu: 2)
0.160 migration/1/21 sched:sched_wake_idle_without_ipi(cpu: 2)
0.166 migration/1/21 sched:sched_switch(prev_comm: "", prev_pid: 21 (migration/1), prev_state: 1, next_comm: "", next_prio: 120)
1.183 :0/0 sched:sched_wakeup(comm: "", pid: 1602985 (kworker/u16:0-f), prio: 120, target_cpu: 1)
1.186 :0/0 sched:sched_switch(prev_comm: "", prev_prio: 120, next_comm: "", next_pid: 1602985 (kworker/u16:0-f), next_prio: 120)
#
Had to tweak tools/perf/util/setup.py to make sure the python binding
shared object links with libtraceevent if -DHAVE_LIBTRACEEVENT is
present in CFLAGS.
Building with NO_LIBTRACEEVENT=1 uncovered some more build failures:
- Make building of data-convert-bt.c conditional on CONFIG_LIBTRACEEVENT=y
- perf-$(CONFIG_LIBTRACEEVENT) += scripts/
- bpf_kwork.o also needs to be dependent on CONFIG_LIBTRACEEVENT=y
- The python binding needed some fixups and util/trace-event.c can't be
built and linked with the python binding shared object, so remove it
in tools/perf/util/setup.py and exclude it from the list of
dependencies in the python/perf.so Makefile.perf target.
Building without libtraceevent-devel installed uncovered more build
failures:
- The python binding tools/perf/util/python.c was assuming that
traceevent/parse-events.h was always available, which was the case
when we defaulted to using the in-kernel tools/lib/traceevent/ files,
now we need to enclose it under ifdef HAVE_LIBTRACEEVENT, just like
the other parts of it that deal with tracepoints.
- We have to ifdef the rules in the Build files with
CONFIG_LIBTRACEEVENT=y to build builtin-trace.c and
tools/perf/trace/beauty/ as we only ifdef setting CONFIG_TRACE=y when
setting NO_LIBTRACEEVENT=1 in the make command line, not when we don't
detect libtraceevent-devel installed in the system. Simplifying here
to avoid these two ways of disabling builtin-trace.c, and not having
CONFIG_TRACE=y when libtraceevent-devel isn't installed, is the clean
way.
From Athira:
<quote>
tools/perf/arch/powerpc/util/Build
-perf-y += kvm-stat.o
+perf-$(CONFIG_LIBTRACEEVENT) += kvm-stat.o
</quote>
Then, ditto for arm64 and s390, detected by container cross build tests.
- s/390 uses test__checkevent_tracepoint(), which is now only available if
HAVE_LIBTRACEEVENT is defined, so enclose the callsite with ifdef
HAVE_LIBTRACEEVENT, as sketched below.
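For illustration, a minimal self-contained sketch of that guard pattern
(check_tracepoint() here is a hypothetical stand-in for the real
test__checkevent_tracepoint() callsite):
#include <stdio.h>

#ifdef HAVE_LIBTRACEEVENT
/* Hypothetical stand-in for the tracepoint-dependent test. */
static int check_tracepoint(void)
{
        return 0;       /* pretend the tracepoint parse test passed */
}
#endif

int main(void)
{
#ifdef HAVE_LIBTRACEEVENT
        return check_tracepoint();
#else
        printf("tracepoint test skipped: built without libtraceevent\n");
        return 0;
#endif
}
Building with -DHAVE_LIBTRACEEVENT compiles the first path; without it,
the callsite simply isn't compiled in.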
Also from Athira:
<quote>
With this change, I could successfully compile in these environment:
- Without libtraceevent-devel installed
- With libtraceevent-devel installed
- With “make NO_LIBTRACEEVENT=1”
</quote>
Then, finally rename CONFIG_TRACEEVENT to CONFIG_LIBTRACEEVENT for
consistency with other libraries detected in tools/perf/.
Signed-off-by: Ian Rogers <irogers@google.com >
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com >
Tested-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Nick Desaulniers <ndesaulniers@google.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Cc: bpf@vger.kernel.org
Link: http://lore.kernel.org/lkml/20221205225940.3079667-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-12-14 11:16:12 -03:00
Namhyung Kim
5f334d88c2
perf stat: Pass through 'struct outstate'
...
Now most of the print functions take a pointer to the struct outstate.
There is one in evlist__print_counters(), and it is passed through to the
child functions.
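As a rough self-contained sketch of the pattern (the struct fields and
function names below are illustrative stand-ins, not the actual perf
internals):
#include <stdio.h>

/* Illustrative stand-in for perf's 'struct outstate': shared printing
 * state that is now passed down explicitly. */
struct outstate {
        FILE *fh;
        int nfields;
        const char *prefix;
};

/* Child printers receive the state instead of recreating it. */
static void print_counter(struct outstate *os, const char *name, long count)
{
        fprintf(os->fh, "%s%20ld  %s\n", os->prefix, count, name);
        os->nfields++;
}

/* The top-level printer owns one outstate and threads it through. */
static void print_counters(FILE *fh)
{
        struct outstate os = { .fh = fh, .prefix = "" };

        print_counter(&os, "cycles", 2788555);
        print_counter(&os, "instructions", 2974030);
}

int main(void)
{
        print_counters(stdout);
        return 0;
}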
Signed-off-by: Namhyung Kim <namhyung@kernel.org >
Acked-by: Ian Rogers <irogers@google.com >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Athira Rajeev <atrajeev@linux.vnet.ibm.com >
Cc: Ingo Molnar <mingo@kernel.org >
Cc: James Clark <james.clark@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Link: https://lore.kernel.org/r/20221123180208.2068936-13-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-11-24 09:40:37 -03:00
Adrian Hunter
44a037f54b
perf intel-pt: Add hybrid CPU compatibility test
...
The kernel driver assumes hybrid CPUs will have Intel PT capabilities
that are compatible with the boot CPU. Add a test to check that this is
the case.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com >
Acked-by: Namhyung Kim <namhyung@kernel.org >
Cc: Ian Rogers <irogers@google.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Link: https://lore.kernel.org/r/20221104121805.5264-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-11-09 15:23:12 -03:00
Adrian Hunter
828143f8da
perf intel-pt: Redefine test_suite to allow for adding more subtests
...
In preparation for adding more Intel PT testing, redefine the test_suite
to allow for adding more subtests.
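A self-contained sketch of the suite-of-subtests shape (simplified
stand-ins; the real test_suite/test_case definitions in
tools/perf/tests/tests.h differ in detail):
#include <stdio.h>

/* Simplified stand-ins for perf's test_case/test_suite structures. */
struct test_case {
        const char *name;
        int (*run)(void);
};

struct test_suite {
        const char *desc;
        const struct test_case *cases;  /* terminated by a NULL name */
};

static int test_pkt_decoder(void)   { return 0; }
static int test_hybrid_compat(void) { return 0; }

static const struct test_case intel_pt_cases[] = {
        { "Intel PT packet decoder",           test_pkt_decoder },
        { "Intel PT hybrid CPU compatibility", test_hybrid_compat },
        { NULL, NULL },
};

static const struct test_suite intel_pt_suite = {
        .desc  = "Intel PT",
        .cases = intel_pt_cases,
};

int main(void)
{
        for (const struct test_case *c = intel_pt_suite.cases; c->name; c++)
                printf("%s: %-40s : %s\n", intel_pt_suite.desc, c->name,
                       c->run() ? "FAILED" : "Ok");
        return 0;
}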
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com >
Acked-by: Namhyung Kim <namhyung@kernel.org >
Cc: Ian Rogers <irogers@google.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Link: https://lore.kernel.org/r/20221104121805.5264-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-11-09 15:23:12 -03:00
Adrian Hunter
5d0557c75b
perf intel-pt: Start turning intel-pt-pkt-decoder-test.c into a suite of intel-pt subtests
...
In preparation for adding more Intel PT testing, rename
intel-pt-pkt-decoder-test.c to intel-pt-test.c.
Subtests will later be added to intel-pt-test.c.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com >
Acked-by: Namhyung Kim <namhyung@kernel.org >
Cc: Ian Rogers <irogers@google.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Link: https://lore.kernel.org/r/20221104121805.5264-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-11-09 15:23:12 -03:00
Arnaldo Carvalho de Melo
9823147da6
perf tools: Move 'struct perf_sample' to a separate header file to disentangle headers
...
Some places were including event.h just to get 'struct perf_sample';
move it to a separate header so that we speed up the build a bit.
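The general technique, sketched with simplified, hypothetical types:
code that only holds a pointer can rely on a forward declaration, while
code that touches the fields includes the (now much smaller) header that
defines the struct:
#include <inttypes.h>
#include <stdio.h>

/* A consumer that only stores a pointer gets by with a forward
 * declaration and no longer needs to pull in a big header. */
struct perf_sample;

struct consumer {
        struct perf_sample *last;
};

/* Code that actually uses the fields includes the small header that
 * defines the struct -- sketched inline here with made-up fields. */
struct perf_sample {
        uint64_t ip;
        uint32_t pid, tid;
        uint64_t time;
};

static void show(const struct perf_sample *s)
{
        printf("ip=%#" PRIx64 " pid=%" PRIu32 " time=%" PRIu64 "\n",
               s->ip, s->pid, s->time);
}

int main(void)
{
        struct perf_sample s = { .ip = 0xffffffff81000000ULL, .pid = 1, .time = 42 };
        struct consumer c = { .last = &s };

        show(c.last);
        return 0;
}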
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-10-31 11:06:41 -03:00
Arnaldo Carvalho de Melo
6bc13cab57
perf arch x86: Add missing stdlib.h to get free() prototype
...
It was getting it indirectly, by luck; add it explicitly.
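The rule being applied, as a trivial sketch: include the header that
declares what you call instead of relying on another header dragging it
in:
#include <stdlib.h>     /* declares malloc() and free(); don't rely on
                         * another header pulling it in indirectly */

int main(void)
{
        char *buf = malloc(64);

        free(buf);
        return 0;
}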
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-10-27 16:37:26 -03:00
Adrian Hunter
6cef7dab3e
perf intel-pt: Fix system_wide dummy event for hybrid
...
User space tasks can migrate between CPUs, so when tracing selected CPUs,
system-wide sideband is still needed. However, evlist->core.has_user_cpus
is not set in the hybrid case, so check the target cpu_list instead.
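Conceptually (a self-contained sketch with simplified stand-in types,
not the actual patch):
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the relevant evlist/target state. */
struct target {
        const char *cpu_list;
};

struct evlist {
        bool has_user_cpus;
};

/* When tracing selected CPUs, system-wide sideband is still needed even
 * when has_user_cpus wasn't set (as on hybrid), so also consult the
 * target's cpu_list. */
static bool need_system_wide_sideband(const struct evlist *evlist,
                                      const struct target *target)
{
        return evlist->has_user_cpus || (target->cpu_list && *target->cpu_list);
}

int main(void)
{
        struct evlist evlist = { .has_user_cpus = false };  /* hybrid case */
        struct target target = { .cpu_list = "0-3" };       /* e.g. perf record -C 0-3 */

        printf("system-wide sideband needed: %s\n",
               need_system_wide_sideband(&evlist, &target) ? "yes" : "no");
        return 0;
}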
Fixes: 7d189cadbe ("perf intel-pt: Track sideband system-wide when needed")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com >
Acked-by: Namhyung Kim <namhyung@kernel.org >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20221012082259.22394-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-10-15 10:13:16 -03:00
Ravi Bangoria
f7b58cbdb3
perf mem/c2c: Add load store event mappings for AMD
...
The 'perf mem' and 'perf c2c' tools are wrappers around 'perf record'
with mem load/store events. IBS-tagged load/store samples provide most
of the information needed for these tools, so wire in the "ibs_op//"
event as the mem-ldst event for AMD.
There are some limitations though: only load/store micro-ops provide
mem/c2c information, and IBS does not have a way to choose a particular
type of micro-op to tag. This results in many non-LS micro-ops being
tagged, which appear as N/A in the perf report. IBS, being an uncore PMU
from the kernel's point of view[1], does not support per-process
monitoring, so perf mem/c2c on AMD is currently supported in per-cpu
mode only.
Example:
$ sudo perf mem record -- -c 10000
^C[ perf record: Woken up 227 times to write data ]
[ perf record: Captured and wrote 58.760 MB perf.data (836978 samples) ]
$ sudo perf mem report -F mem,sample,snoop
Samples: 836K of event 'ibs_op//', Event count (approx.): 8418762
Memory access Samples Snoop
N/A 700620 N/A
L1 hit 126675 N/A
L2 hit 424 N/A
L3 hit 664 HitM
L3 hit 10 N/A
Local RAM hit 2 N/A
Remote RAM (1 hop) hit 8558 N/A
Remote Cache (1 hop) hit 3 N/A
Remote Cache (1 hop) hit 2 HitM
Remote Cache (2 hops) hit 10 HitM
Remote Cache (2 hops) hit 6 N/A
Uncached hit 4 N/A
$
[1]: https://lore.kernel.org/lkml/20220829113347.295-1-ravi.bangoria@amd.com
Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com >
Acked-by: Jiri Olsa <jolsa@kernel.org >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Ali Saidi <alisaidi@amazon.com >
Cc: Ananth Narayan <ananth.narayan@amd.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Borislav Petkov <bp@alien8.de >
Cc: Dave Hansen <dave.hansen@linux.intel.com >
Cc: H. Peter Anvin <hpa@zytor.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Joe Mario <jmario@redhat.com >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Kim Phillips <kim.phillips@amd.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Sandipan Das <sandipan.das@amd.com >
Cc: Santosh Shukla <santosh.shukla@amd.com >
Cc: Stephane Eranian <eranian@google.com >
Cc: Thomas Gleixner <tglx@linutronix.de >
Cc: x86@kernel.org
Link: https://lore.kernel.org/r/20221006153946.7816-6-ravi.bangoria@amd.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-10-06 16:30:06 -03:00
Namhyung Kim
182bb594e0
perf tools: Add evlist__add_sched_switch()
...
Add a helper to create a system-wide sched_switch event. One merit is
that it sets the system-wide bit before adding it to the evlist so that
libperf can handle the cpu and thread maps correctly.
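A self-contained sketch of the idea with simplified stand-in types (the
real helper operates on perf's evsel/evlist and their cpu/thread maps):
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for perf's evsel/evlist. */
struct evsel {
        char name[64];
        bool system_wide;
        struct evsel *next;
};

struct evlist {
        struct evsel *head;
};

static void evlist_add(struct evlist *evlist, struct evsel *evsel)
{
        evsel->next = evlist->head;
        evlist->head = evsel;
}

/* The helper: build the sched_switch tracepoint event and set the
 * system-wide bit *before* it goes into the evlist, so the cpu/thread
 * maps can be propagated correctly when the evlist is set up. */
static struct evsel *evlist_add_sched_switch(struct evlist *evlist, bool system_wide)
{
        struct evsel *evsel = calloc(1, sizeof(*evsel));

        if (!evsel)
                return NULL;
        snprintf(evsel->name, sizeof(evsel->name), "sched:sched_switch");
        evsel->system_wide = system_wide;
        evlist_add(evlist, evsel);
        return evsel;
}

int main(void)
{
        struct evlist evlist = { 0 };
        struct evsel *evsel = evlist_add_sched_switch(&evlist, true);

        if (evsel)
                printf("%s system_wide=%d\n", evsel->name, evsel->system_wide);
        free(evsel);
        return 0;
}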
Reviewed-by: Adrian Hunter <adrian.hunter@intel.com >
Signed-off-by: Namhyung Kim <namhyung@kernel.org >
Cc: Ian Rogers <irogers@google.com >
Cc: Ingo Molnar <mingo@kernel.org >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Kan Liang <kan.liang@linux.intel.com >
Cc: Leo Yan <leo.yan@linaro.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: https://lore.kernel.org/r/20221003204647.1481128-5-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-10-06 08:03:53 -03:00
Adrian Hunter
806731a946
perf tools: Do not pass NULL to parse_events()
...
Many cases do not use the extra error information provided by
parse_events and instead pass NULL as the struct parse_events_error
pointer. Add a wrapper for those cases so that the pointer is never
NULL.
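A sketch of the wrapper pattern with stand-in names (the real code uses
struct parse_events_error and the perf parse_events() API):
#include <stdio.h>

/* Stand-ins: an incomplete evlist type and a minimal error object. */
struct evlist;

struct parse_events_error {
        const char *str;
        int idx;
};

/* Hypothetical stand-in for the real parse_events(), which expects a
 * non-NULL error pointer it can fill in. */
static int parse_events_stub(struct evlist *evlist, const char *str,
                             struct parse_events_error *err)
{
        (void)evlist;
        if (!str || !*str) {
                err->str = "empty event string";
                err->idx = 0;
                return -1;
        }
        return 0;
}

/* The wrapper used by callers that don't care about error details:
 * it always supplies a valid error object, so the callee never sees NULL. */
static int parse_event(struct evlist *evlist, const char *str)
{
        struct parse_events_error err = { 0 };

        return parse_events_stub(evlist, str, &err);
}

int main(void)
{
        printf("parse 'cycles': %d\n", parse_event(NULL, "cycles"));
        printf("parse '':       %d\n", parse_event(NULL, ""));
        return 0;
}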
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com >
Cc: Ian Rogers <irogers@google.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Namhyung Kim <namhyung@kernel.org >
Link: https://lore.kernel.org/r/20220809080702.6921-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-08-10 14:30:09 -03:00
Ian Rogers
481fadfb10
perf test: Remove x86 rdpmc test
...
This test has been superseded by test_stat_user_read in:
tools/lib/perf/tests/test-evsel.c
The updated test doesn't divide by zero when the running time of a
counter is 0. It also supports ARM64.
Signed-off-by: Ian Rogers <irogers@google.com >
Acked-by: Rob Herring <robh@kernel.org >
Cc: Adrian Hunter <adrian.hunter@intel.com >
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Anshuman Khandual <anshuman.khandual@arm.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Kajol Jain <kjain@linux.ibm.com >
Cc: Mark Rutland <mark.rutland@arm.com >
Cc: Namhyung Kim <namhyung@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Cc: Stephane Eranian <eranian@google.com >
Link: http://lore.kernel.org/lkml/20220719223946.176299-3-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-08-01 09:18:12 -03:00
Zhengjun Xing
9a0b36266f
perf stat: Add topdown metrics in the default perf stat on the hybrid machine
...
Topdown metrics are missing from the default perf stat output on hybrid
machines, so add them to the default perf stat for hybrid systems.
Currently the perf metrics Topdown is supported for the p-core PMU in
the perf stat defaults; Topdown support for the e-core PMU will be
implemented separately later. The refactoring adds two x86-specific
functions and widens the event name column by 7 chars, so that all
metrics after the "#" become aligned again.
The perf metrics topdown feature is supported on the cpu_core of ADL. The
dedicated perf metrics counter and the fixed counter 3 are used for the
topdown events. Adding the topdown metrics doesn't trigger multiplexing.
Before:
# ./perf stat -a true
Performance counter stats for 'system wide':
53.70 msec cpu-clock # 25.736 CPUs utilized
80 context-switches # 1.490 K/sec
24 cpu-migrations # 446.951 /sec
52 page-faults # 968.394 /sec
2,788,555 cpu_core/cycles/ # 51.931 M/sec
851,129 cpu_atom/cycles/ # 15.851 M/sec
2,974,030 cpu_core/instructions/ # 55.385 M/sec
416,919 cpu_atom/instructions/ # 7.764 M/sec
586,136 cpu_core/branches/ # 10.916 M/sec
79,872 cpu_atom/branches/ # 1.487 M/sec
14,220 cpu_core/branch-misses/ # 264.819 K/sec
7,691 cpu_atom/branch-misses/ # 143.229 K/sec
0.002086438 seconds time elapsed
After:
# ./perf stat -a true
Performance counter stats for 'system wide':
61.39 msec cpu-clock # 24.874 CPUs utilized
76 context-switches # 1.238 K/sec
24 cpu-migrations # 390.968 /sec
52 page-faults # 847.097 /sec
2,753,695 cpu_core/cycles/ # 44.859 M/sec
903,899 cpu_atom/cycles/ # 14.725 M/sec
2,927,529 cpu_core/instructions/ # 47.690 M/sec
428,498 cpu_atom/instructions/ # 6.980 M/sec
581,299 cpu_core/branches/ # 9.470 M/sec
83,409 cpu_atom/branches/ # 1.359 M/sec
13,641 cpu_core/branch-misses/ # 222.216 K/sec
8,008 cpu_atom/branch-misses/ # 130.453 K/sec
14,761,308 cpu_core/slots/ # 240.466 M/sec
3,288,625 cpu_core/topdown-retiring/ # 22.3% retiring
1,323,323 cpu_core/topdown-bad-spec/ # 9.0% bad speculation
5,477,470 cpu_core/topdown-fe-bound/ # 37.1% frontend bound
4,679,199 cpu_core/topdown-be-bound/ # 31.7% backend bound
646,194 cpu_core/topdown-heavy-ops/ # 4.4% heavy operations # 17.9% light operations
1,244,999 cpu_core/topdown-br-mispredict/ # 8.4% branch mispredict # 0.5% machine clears
3,891,800 cpu_core/topdown-fetch-lat/ # 26.4% fetch latency # 10.7% fetch bandwidth
1,879,034 cpu_core/topdown-mem-bound/ # 12.7% memory bound # 19.0% Core bound
0.002467839 seconds time elapsed
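For reference, assuming the standard topdown derivation, each percentage
above is the corresponding topdown event count divided by the slots
count, e.g. topdown-retiring / slots = 3,288,625 / 14,761,308 ≈ 22.3%
retiring, and topdown-fe-bound / slots = 5,477,470 / 14,761,308 ≈ 37.1%
frontend bound.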
Reviewed-by: Kan Liang <kan.liang@linux.intel.com >
Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Acked-by: Ian Rogers <irogers@google.com >
Acked-by: Namhyung Kim <namhyung@kernel.org >
Cc: Alexander Shishkin <alexander.shishkin@intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: https://lore.kernel.org/r/20220721065706.2886112-6-zhengjun.xing@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-07-29 13:43:34 -03:00
Kan Liang
cdb204ad42
perf x86 evlist: Add default hybrid events for perf stat
...
Provide a new solution to replace the reverted commit ac2dc29edd
("perf stat: Add default hybrid events").
For the default software attrs, nothing is changed.
For the default hardware attrs, create a new evsel for each hybrid PMU.
With the new solution, adding a new default attr will no longer require
special support for the hybrid platform.
Also, "--detailed" is now supported on the hybrid platform.
With the patch,
$ perf stat -a -ddd sleep 1
Performance counter stats for 'system wide':
32,231.06 msec cpu-clock # 32.056 CPUs utilized
529 context-switches # 16.413 /sec
32 cpu-migrations # 0.993 /sec
69 page-faults # 2.141 /sec
176,754,151 cpu_core/cycles/ # 5.484 M/sec (41.65%)
161,695,280 cpu_atom/cycles/ # 5.017 M/sec (49.92%)
48,595,992 cpu_core/instructions/ # 1.508 M/sec (49.98%)
32,363,337 cpu_atom/instructions/ # 1.004 M/sec (58.26%)
10,088,639 cpu_core/branches/ # 313.010 K/sec (58.31%)
6,390,582 cpu_atom/branches/ # 198.274 K/sec (58.26%)
846,201 cpu_core/branch-misses/ # 26.254 K/sec (66.65%)
676,477 cpu_atom/branch-misses/ # 20.988 K/sec (58.27%)
14,290,070 cpu_core/L1-dcache-loads/ # 443.363 K/sec (66.66%)
9,983,532 cpu_atom/L1-dcache-loads/ # 309.749 K/sec (58.27%)
740,725 cpu_core/L1-dcache-load-misses/ # 22.982 K/sec (66.66%)
<not supported> cpu_atom/L1-dcache-load-misses/
480,441 cpu_core/LLC-loads/ # 14.906 K/sec (66.67%)
326,570 cpu_atom/LLC-loads/ # 10.132 K/sec (58.27%)
329 cpu_core/LLC-load-misses/ # 10.208 /sec (66.68%)
0 cpu_atom/LLC-load-misses/ # 0.000 /sec (58.32%)
<not supported> cpu_core/L1-icache-loads/
21,982,491 cpu_atom/L1-icache-loads/ # 682.028 K/sec (58.43%)
4,493,189 cpu_core/L1-icache-load-misses/ # 139.406 K/sec (33.34%)
4,711,404 cpu_atom/L1-icache-load-misses/ # 146.176 K/sec (50.08%)
13,713,090 cpu_core/dTLB-loads/ # 425.462 K/sec (33.34%)
9,384,727 cpu_atom/dTLB-loads/ # 291.170 K/sec (50.08%)
157,387 cpu_core/dTLB-load-misses/ # 4.883 K/sec (33.33%)
108,328 cpu_atom/dTLB-load-misses/ # 3.361 K/sec (50.08%)
<not supported> cpu_core/iTLB-loads/
<not supported> cpu_atom/iTLB-loads/
37,655 cpu_core/iTLB-load-misses/ # 1.168 K/sec (33.32%)
61,661 cpu_atom/iTLB-load-misses/ # 1.913 K/sec (50.03%)
<not supported> cpu_core/L1-dcache-prefetches/
<not supported> cpu_atom/L1-dcache-prefetches/
<not supported> cpu_core/L1-dcache-prefetch-misses/
<not supported> cpu_atom/L1-dcache-prefetch-misses/
1.005466919 seconds time elapsed
Signed-off-by: Kan Liang <kan.liang@linux.intel.com >
Acked-by: Ian Rogers <irogers@google.com >
Acked-by: Namhyung Kim <namhyung@kernel.org >
Cc: Alexander Shishkin <alexander.shishkin@intel.com >
Cc: Andi Kleen <ak@linux.intel.com >
Cc: Ingo Molnar <mingo@redhat.com >
Cc: Jiri Olsa <jolsa@kernel.org >
Cc: Peter Zijlstra <peterz@infradead.org >
Link: https://lore.kernel.org/r/20220721065706.2886112-5-zhengjun.xing@linux.intel.com
Signed-off-by: Xing Zhengjun <zhengjun.xing@linux.intel.com >
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com >
2022-07-29 13:42:35 -03:00