Profiling the Hyperledger Fabric
The Hyperledger fabric peer includes support for profiling services over HTTP. These services are based on, and documented with, the Go net/http/pprof package. The server recognizes all of the endpoints documented with that package; however, not all of them work by default, as explained further below.
The peer profile server listens on TCP port 6060 by default, and is disabled by default. The server can be enabled by setting the peer.profile.enabled variable to true in the core.yaml file, or by setting CORE_PEER_PROFILE_ENABLED=true in the environment. The HTTP address/port can be changed with peer.profile.listenAddress or CORE_PEER_PROFILE_LISTENADDRESS.
For background and reference, here are a couple of links to articles about Go profiling:
- https://blog.golang.org/profiling-go-programs
- https://software.intel.com/en-us/blogs/2014/05/10/debugging-performance-issues-in-go-programs
Assume that a peer is running locally with profiling enabled. To obtain a 30-second CPU profile one would execute
go tool pprof http://localhost:6060/debug/pprof/profile
After 30 seconds the tool will prompt for a command (if running interactively); simply type exit to exit. By default the above command will create a file in ~/pprof/ with a name like pprof.XXX.samples.cpu.NNN.pb.gz, where XXX is the host/port and NNN is a unique sequence number. Longer or shorter profiles can be obtained with the seconds option on the request, for example
go tool pprof http://localhost:6060/debug/pprof/profile?seconds=120
for a 2-minute profile.
Profiles are analyzed with go tool pprof as well. Execute
go tool pprof -help
for some basic help. To get started, here is a simple script that can be used to generate three reports from a profile file:
- A PDF version of the call graph of the most active routines
- A listing of the top routines by individual run-time
- A listing of the top routines by cumulative run-time of the routine and all of its callees
#!/bin/bash
# pprofReports <profile> <prefix>
# If called as 'pprofReports pprof.XXX.samples.cpu.NNN.pb.gz foo',
# creates 'foo.pdf', 'foo.top' and 'foo.cum'
go tool pprof -pdf "$1" > "$2.pdf"
go tool pprof -top "$1" > "$2.top"
go tool pprof -top -cum "$1" > "$2.cum"
You may also be interested to read the article Cumulative Run-Time Silos in Go pprof CPU Profiles, which describes a way to interpret the cumulative run times reported by the pprof tool.
The blocking profile, for example
go tool pprof http://localhost:6060/debug/pprof/block
does not work by default. Enabling the blocking profile requires a modification of the source code to call runtime.SetBlockProfileRate. Issue #1678 has been opened to address this.
[More work/contributions needed here! I have not tried other profiling functions.]
The Hyperledger Fabric busywork tools include the pprofClient script that can help automate profile collection. Although pprofClient is easiest to use with peer networks that are created by busywork, it does support profiling arbitrary peer networks. busywork test drivers also support automatic execution of pprofClient as part of a test. See for example the -pprofClient option of the busywork/counters/driver script.
All trademarks and registered trademarks that appear on this page are the property of their respective owners, and are used for identification purposes only.