
Go



The first question everyone asks us is: "This is very cool, but how does it impact my system's performance?"

TL;DR: The impact on your system's latency is minimal - a few milliseconds under heavy load. Moreover, if no tracepoints or logpoints are set, we add zero overhead.

We benchmarked our agent's performance under the following conditions:

  1. We deployed a web server written in Go (a minimal sketch of such a server follows this list).

  2. The application ran on an AWS t2.2xlarge instance (8 vCPUs, 32 GB RAM).

  3. We used the popular load-testing tool wrk2 to measure network latency.

  4. For load balancing, we scaled the application to 4 instances. We drove 1,000 requests per second at the application for 5 seconds.

  5. Endpoint called: /

  6. Three separate experiments were run:

    1. Agentless - running the application without the heimdall agent.

    2. Passive - the heimdall agent runs without any tracepoint.

    3. Active - the heimdall agent runs with a tracepoint on the endpoint.
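The exact benchmark application is not published here, but any lightweight Go handler on "/" behaves similarly. The sketch below is an assumption of what such a server looks like; the wrk2 invocation in the comment is also illustrative (thread and connection counts are assumptions, -R sets the constant request rate).

```go
// Minimal sketch of the kind of Go web server used for this benchmark.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// Load was generated with wrk2 against this endpoint, e.g.:
	//   wrk -t4 -c64 -d5s -R1000 --latency http://<host>:8080/
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```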

Results

                        Agentless   Passive   Active
Average latency (ms)    0.774       0.794     2.80
50th percentile (ms)    0.770       0.797     2.65
75th percentile (ms)    1.03        1.04      2.94
90th percentile (ms)    1.17        1.19      3.21
99th percentile (ms)    1.29        1.32      6.17

Interpretation

  1. When you're using us, most of the time your application will be running in passive mode, i.e., our agent runs without any tracepoints. In this case, the agent adds negligible overhead. You can import our agent into your application without worrying about overhead (see the illustrative sketch after this list).

  2. When you're debugging something and have added tracepoints, we add a few milliseconds of latency. You can set a hit limit and a lifetime for a tracepoint so that it lives only as long as needed to capture the data you want.
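For illustration only, a passive-mode setup in a Go service might look like the sketch below. The import path and the Start function are hypothetical placeholders, not the agent's real API; the Go agent's Installation and Configuration pages describe the actual setup.

```go
package main

import (
	"log"
	"net/http"

	// Hypothetical import path -- substitute the module path from the
	// Go agent's Installation page.
	ctrlb "github.com/example/ctrlb-go-agent"
)

func main() {
	// Hypothetical initializer. With no tracepoints set, the agent stays
	// passive, matching the "Passive" column in the results above.
	if err := ctrlb.Start(); err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```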
