Synthetic monitoring: Its meaning, types, challenges, and benefits

Have you ever said to yourself, "I'm not completely sure if my users can access my website"? It's frightening to say out loud, yet more common than you might think. Even if you follow strong engineering practices while developing software, there are no guarantees. Your production system could be down with you being none the wiser.

Unless you actively monitor it, that is. There are many tools to monitor applications. One in particular, called synthetic monitoring, is a great fit for answering that question. In this post, I'm going to introduce synthetic monitoring to you. We'll address the following topics:

  • What synthetic monitoring is, and why it's useful
  • Example scenarios and things to consider when writing synthetics
  • Where synthetic monitoring fits in the monitoring ecosystem

What is synthetic monitoring?

Synthetic monitoring is a technique where you run scripted interactions against your application. They simulate the behavior of a user. You execute them continuously at specified intervals. This monitoring doesn't rely on real traffic to detect issues, thus the term synthetic.

Fig. 1. What is synthetic monitoring?

The individual scripts, which we call synthetic transactions, come in different shapes and flavors. You can run browser-based tests, in which you record interactions performed by a user during a core flow in your application. These are the closest way you have to reproduce user behavior in a controlled fashion.

If you're exposing a public API as part of your system, you can go down one level of abstraction and write synthetic tests for it. That way, you have more granular checks. Given that API contracts tend to be more stable than layouts, these tests are less likely to break.
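As a sketch of what such an API-level check might look like, here is a minimal synthetic check written in Python using only the standard library (the result-dictionary shape and the thresholds are illustrative; a real monitoring platform produces much richer results):

```python
import time
import urllib.error
import urllib.request

def run_api_check(url, expected_status=200, timeout=5.0):
    """Run one synthetic API check and report status, latency, and pass/fail."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code  # the server responded, but with an error status
    except (urllib.error.URLError, OSError):
        status = None  # network-level failure: DNS, refused connection, timeout
    latency_ms = (time.monotonic() - start) * 1000
    return {
        "url": url,
        "status": status,
        "latency_ms": round(latency_ms, 1),
        "ok": status == expected_status,
    }
```

A scheduler would invoke `run_api_check` at a fixed interval and feed each result into alerting.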

Lastly, there are some more specialized synthetic test types. You can monitor parts of your infrastructure like DNS servers, SSL certificates, or even WebSocket endpoints through synthetic tests. Again, the added granularity aids debugging.
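For instance, a certificate-expiry check can be sketched in a few lines of Python with the standard `ssl` module (the 30-day warning window is an arbitrary choice for illustration):

```python
import socket
import ssl
from datetime import datetime, timezone

# Format used by ssl.getpeercert() for validity dates, e.g. "Jun  1 12:00:00 2030 GMT"
CERT_DATE_FMT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after, now=None):
    """Days remaining before a certificate's notAfter timestamp (UTC)."""
    expires = datetime.strptime(not_after, CERT_DATE_FMT).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def check_certificate(hostname, port=443, warn_days=30, timeout=5.0):
    """Fetch a server's certificate and flag it when expiry is near."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    remaining = days_until_expiry(cert["notAfter"])
    return {"hostname": hostname, "days_left": remaining, "ok": remaining > warn_days}
```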

What are the types of synthetic monitoring tests?

Synthetic monitoring involves employing various testing techniques to simulate user interactions and assess performance. Here are some common types of synthetic tests:

1. Browser-based monitoring

This involves using real browsers to simulate user behavior when interacting with your website or application. This provides accurate insights into page load times, rendering performance, and JavaScript execution.

2. Headless browser monitoring

This involves executing scripts to simulate user actions without a full browser interface, making it faster and more cost-effective.

3. API monitoring

This involves evaluating API performance and behavior by simulating requests and examining the resulting data. This helps you identify issues with API endpoints, authentication, and data exchanges.

4. Real user monitoring

While not strictly a synthetic test, real user monitoring (RUM) complements synthetic monitoring by capturing real user experiences. RUM solutions collect data on performance indicators such as how quickly pages load, the delay before receiving the initial content from the server, and user interaction responsiveness.

By combining these different types of synthetic monitoring tests, you can gain a comprehensive understanding of your application's performance and identify potential issues before they impact real users.

Why is synthetic monitoring important?

Synthetic monitoring is a crucial aspect of modern application and infrastructure management, providing proactive insights into availability and performance that are difficult to achieve with other monitoring techniques. Here's a detailed explanation of its importance:

1. Proactive problem detection

  • Synthetic monitoring tools simulate user interactions with an application or service, traversing critical paths and functionalities. This allows for the identification of performance bottlenecks, errors, or availability issues before real users are impacted.
  • Synthetic tests can be configured to run continuously or at scheduled intervals (e.g., every five minutes) from various geographic locations. This ensures constant vigilance and the early detection of deviations from expected behavior.
  • Unlike RUM tools, which rely on actual user traffic, synthetic monitoring tools provide a consistent, predictable baseline for performance measurements. This is particularly valuable for applications with low or fluctuating user traffic.

2. Uptime assurance

  • Synthetic tests can verify the core functionalities of an application, ensuring that critical components like logins, checkouts, or searches are working correctly. This helps you guarantee service availability and minimize downtime.
  • For service providers, a synthetic monitoring tool is essential for tracking and reporting on service-level agreements (SLAs) related to uptime and performance. It provides objective evidence of compliance and helps you identify potential breaches.
  • By detecting availability issues early, a synthetic monitoring tool acts as an early warning system, allowing operations teams to proactively address problems before they escalate and impact a large number of users.

3. Performance benchmarking and optimization

  • Synthetic tests establish a baseline of performance for various application components and user journeys. This allows for tracking performance trends over time and identifying areas for optimization.
  • By continuously measuring performance metrics like page load times, transaction latency, and API response times, synthetic monitoring tools can quickly detect any degradation in performance. This allows for timely intervention and prevents the user experience from suffering.
  • You can run synthetic tests from different geographic locations to assess performance variations across regions. This helps you identify latency issues specific to certain areas and optimize content delivery networks (CDNs) accordingly.

4. Third-party service issue identification

  • Modern applications often rely on various third-party services, such as payment gateways, APIs, and CDNs. Synthetic monitoring tools can be used to test the availability and performance of these dependencies, ensuring that they are not impacting the overall application experience.
  • When performance issues arise, synthetic monitoring tools help isolate the root cause by differentiating between problems within the application itself and those stemming from third-party services.

5. The business impact and ROI

  • By proactively identifying and addressing availability issues, synthetic monitoring tools minimize downtime and associated revenue loss.
  • A smooth, reliable application experience leads to increased customer satisfaction and loyalty. Proactive issue identification and resolution through synthetic monitoring contributes to a better user experience.
  • When issues do occur, synthetic monitoring tools provide valuable data and context that can help speed up the incident resolution process. This reduces the mean time to resolution and minimizes the impact on users.

How does synthetic monitoring work?

Synthetic monitoring tools work by simulating user transactions and interactions with an application or website from various locations worldwide. Here's a breakdown of the process:

1. Script creation

First, you define what you want to monitor. This involves creating scripts that simulate specific user actions, like browsing product pages, logging in, adding items to a cart, or completing a purchase. These scripts can be simple URL checks or complex multistep transactions that mimic real user journeys. Specialized scripting tools or recorders within a synthetic monitoring platform are typically used for this purpose.
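A multistep transaction can be modeled as an ordered list of named steps. This Python sketch drives the steps with plain HTTP requests for brevity, whereas a real browser-based script would perform page interactions (the step names and URLs are illustrative):

```python
import time
import urllib.error
import urllib.request

def run_journey(steps, timeout=5.0):
    """Execute a multistep synthetic transaction given as (name, url) pairs.

    Records per-step timings so slow stages stand out, and stops at the
    first step that fails.
    """
    timings_ms = {}
    for name, url in steps:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = resp.status == 200
        except (urllib.error.URLError, OSError):
            ok = False
        timings_ms[name] = round((time.monotonic() - start) * 1000, 1)
        if not ok:
            return {"ok": False, "failed_step": name, "timings_ms": timings_ms}
    return {"ok": True, "timings_ms": timings_ms}
```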

2. Location selection

Synthetic monitoring solution providers have a network of servers located around the globe. You select the locations relevant to your user base. This allows you to test performance from different geographic perspectives to identify regional latency issues or localized service disruptions.

3. Scheduled execution

Predefined scripts are then executed on a regular schedule (e.g., every five minutes, every hour, or at specific times of day) from the chosen locations. This consistent, scheduled execution allows for continuous monitoring, regardless of the actual user traffic.
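The scheduling loop itself can be as simple as the following sketch, which keeps runs evenly spaced by subtracting each check's duration from the interval (a production scheduler would add jitter, concurrency, and error isolation):

```python
import time

def run_on_schedule(check, interval_seconds, iterations):
    """Run a synthetic check at a fixed cadence, collecting its results.

    Sleeps only for the remainder of each interval, so runs stay evenly
    spaced even when the check itself takes time.
    """
    results = []
    for _ in range(iterations):
        started = time.monotonic()
        results.append(check())
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, interval_seconds - elapsed))
    return results
```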

4. Data collection

During each test, the synthetic monitoring tool collects a variety of performance data. This includes metrics like the following:

  • Page load times
  • Transaction durations
  • API response times
  • Resource availability
  • Error rates
  • Network latency

5. Analysis and alerting

The collected data is analyzed and compared against predefined thresholds. If performance dips below an acceptable level or an error occurs, alerts are triggered. These alerts can be sent through various channels (email, SMS, or third-party services) to notify operations teams, enabling them to react quickly and address issues before real users are impacted.
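Threshold evaluation boils down to comparing each sample against configured limits. This sketch assumes result dictionaries shaped like a typical check's output (`ok`, `latency_ms`, and `url` are illustrative keys):

```python
def evaluate_thresholds(sample, thresholds):
    """Compare one check result against alerting thresholds.

    Returns a list of alert messages; an empty list means all is well.
    """
    alerts = []
    if not sample.get("ok", False):
        alerts.append(f"check failed for {sample.get('url', 'unknown target')}")
    latency = sample.get("latency_ms")
    limit = thresholds.get("latency_ms")
    if latency is not None and limit is not None and latency > limit:
        alerts.append(f"latency {latency:.0f} ms exceeds the {limit:.0f} ms threshold")
    return alerts
```

Whatever the alerts list contains would then be routed to email, SMS, or a paging service.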

6. Reporting and visualization

Synthetic monitoring tools usually provide dashboards and reports that visualize the collected data. These reports help you understand performance trends, identify recurring issues, and track the effectiveness of optimization efforts. They also offer insights into the overall health and availability of your application or website.

7. Root cause analysis

When problems are detected, the detailed data collected by synthetic monitoring tools can be used to perform root cause analysis. This helps you pinpoint the sources of issues and facilitate faster resolution.

Synthetic monitoring vs. RUM

| Feature | Synthetic monitoring | RUM |
| --- | --- | --- |
| Purpose | Proactively testing and ensuring availability and functionality | Measuring and analyzing the experiences of real users and the application's performance |
| Methodology | Simulating user interactions and transactions | Collecting data from actual user interactions |
| Testing approach | 24/7 automated tests with predefined paths and interactions | Passive observation of real user traffic |
| Data source | Simulated users in a controlled environment | Real users in diverse environments and conditions |
| Key capabilities | Global monitoring, continuous 24/7 testing, simulated user journeys, and proactive insights | Actual user experience data, granular data, faster issue resolution, and data-driven decisions |

Key metrics to monitor in synthetic monitoring

Metrics in synthetic monitoring are essential for helping you proactively ensure application availability, performance, and functionality, ultimately providing a consistent, positive user experience. Here are a few of the major metrics:

  • Availability: Uptime, downtime, error rates, and test success rates
  • Performance: Page load times (the time to first byte, Document Object Model interactive, and fully loaded), transaction times, API response times, resource load times, and network latency
  • Functionality: Transaction successes and failures, content verification, link validation, form submission successes, and third-party service availability
  • Advanced metrics: Web Vitals, custom metrics, and Apdex scores
  • Geographic metrics: Metrics broken down by location
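Of these, the Apdex score is the least self-explanatory: it classifies samples as satisfied (at or below a target latency T), tolerating (between T and 4T), or frustrated (beyond 4T). It can be computed like this, with latencies in milliseconds:

```python
def apdex(samples_ms, target_ms):
    """Apdex score: (satisfied + tolerating / 2) / total.

    A sample is satisfied at or below target_ms, tolerating up to
    4 * target_ms, and frustrated beyond that.
    """
    if not samples_ms:
        return None
    satisfied = sum(1 for s in samples_ms if s <= target_ms)
    tolerating = sum(1 for s in samples_ms if target_ms < s <= 4 * target_ms)
    return (satisfied + tolerating / 2) / len(samples_ms)
```

For example, `apdex([100, 100, 500, 2000], 250)` yields 0.625: two satisfied samples, one tolerating, and one frustrated.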

Challenges in synthetic monitoring

Despite its effectiveness, synthetic monitoring comes with its own set of hurdles. The creation and upkeep of scripts designed to simulate intricate user flows can be laborious and demanding, requiring constant adjustments to keep pace with application changes. It's a challenge to replicate the wide spectrum of real user actions accurately as synthetic tests may not fully capture the variability of user interactions, hardware, and network environments.

Moreover, the presence of dynamic elements, external service dependencies, and user authentication processes can significantly increase the complexity of building and managing these test scripts. Lastly, the ability to correctly analyze test results, differentiating true problems from spurious alerts, demands a certain level of skill, and expanding synthetic monitoring coverage across multiple geographies and test configurations can lead to substantial financial and operational overheads.

Real-life examples and use cases of synthetic monitoring

There are plenty of use cases for synthetic tests. Let's mention some examples:

  • Users can log in: Ensuring that users have access to their accounts is crucial. A synthetic test can go to the login page, log in, and check that the account profile loads within a reasonable time frame.
  • Add items to a shopping cart: In an e-commerce site, the cart is probably the most important component. A test that simulates browsing through products and adding them to the cart covers that core flow.
  • Monitor the expiry of the main domain's SSL certificate: Even Google (https://www.bleepingcomputer.com/news/google/recent-google-voice-outage-caused-by-expired-certificates/) forgets about certificates sometimes.

What are the benefits of synthetic monitoring?

Let's focus on the value of synthetic monitoring. I aim to answer this question: what are the benefits of using this tool?

By simulating the interactions from real users, you know that your system works correctly, at least for the flows that you test. In a sense, this is akin to end-to-end tests that run continuously in production, thus ensuring you get reliable metrics about your system's availability.

A key aspect of synthetics is that they run globally. For many monitoring tools, any synthetic test that you define runs from a configurable pool of locations distributed across the globe. If your application has a global presence, it's not enough to know that things work correctly from wherever your office is. There might be issues related to one specific region. If you want to keep your customers happy, I'm sure you're keenly interested in knowing about this.

Fig. 2. The value of synthetic monitoring

Synthetics also help to measure service-level agreements (SLAs). You might use SLAs as an internal tool to measure performance or provide them as a contract with your users. One way or another, it's valuable to extract SLA measurements from these monitors. With the right tool, this happens automatically without extra effort.
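A basic availability SLA measurement is simply the pass rate of your synthetic checks over the reporting window, as in this sketch (the result-dictionary shape is illustrative):

```python
def sla_uptime(results):
    """Availability as the percentage of synthetic checks that passed."""
    if not results:
        return None
    passed = sum(1 for r in results if r["ok"])
    return 100.0 * passed / len(results)
```

With one check per minute, a month of results makes statements like "99.9% availability" directly verifiable.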

Fig. 3. Meeting the SLAs with synthetic monitoring

Key considerations for effective synthetic monitoring

When writing synthetic tests, there are a few aspects worth considering.

The practices that you use when writing end-to-end tests apply here. If you need to select elements, pick selectors that are robust and won't break if somebody slightly modifies the layout. Build assertions to prove that things work correctly, but keep them generic enough to remain resilient.

If you write tests that change the state of the system, what happens afterward? Perhaps you need to roll back purchases done by synthetics? Undo certain transactions? Some organizations eschew synthetic tests that change the state of the application. It can be limiting, but it's a valid option if you don't have the means to revert modifications.

Flakiness is the sworn enemy of any journey-based test. If your monitoring fails randomly due to, for example, unpredictable login times, developers will slowly lose trust in the monitoring until they ignore it altogether. Alert fatigue is a real risk here. Therefore, it's better to scale back the amount of synthetic testing if that makes it more reliable.
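One common mitigation is to retry a failing check before raising an alert, trading a little detection latency for far fewer false alarms. A minimal sketch:

```python
import time

def run_with_retry(check, attempts=3, delay_seconds=1.0):
    """Re-run a flaky check before declaring failure.

    An alert fires only when every attempt fails, filtering out one-off
    blips at the cost of slightly slower detection.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return check()
        except Exception as exc:  # in production, catch your check's specific error type
            last_error = exc
            if attempt < attempts - 1:
                time.sleep(delay_seconds)
    raise last_error
```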

For many applications, the interesting stuff happens once you log in to your account. That means that the synthetics need to log in with some credentials. Managing credentials carelessly is a great way to end up with an embarrassing security leak. If you need to use test users, make sure to store the credentials securely and limit the permissions of these users as much as possible.
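A minimal pattern is to keep credentials out of the script entirely and read them from the environment at runtime (the variable names below are illustrative; in practice a secret manager would inject them into the monitoring runner):

```python
import os

def load_synthetic_credentials():
    """Read test-user credentials from the environment, never from source code.

    SYNTHETIC_USER and SYNTHETIC_PASSWORD are illustrative names; failing
    loudly when they are missing beats silently running unauthenticated.
    """
    user = os.environ.get("SYNTHETIC_USER")
    password = os.environ.get("SYNTHETIC_PASSWORD")
    if not user or not password:
        raise RuntimeError("synthetic test credentials are not configured")
    return user, password
```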

Another risk is letting the artificial traffic influence some of your KPIs. Depending on the volume of requests you get, a synthetic that runs every minute from multiple locations can skew the numbers you see in Google Analytics or any similar tool. Make sure you work together with the rest of your organization to tag the users and prevent this from happening.
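One simple tagging approach is to give all synthetic traffic a distinctive User-Agent that analytics filters can exclude (the agent string here is illustrative):

```python
import urllib.request

# Illustrative agent string; pick one your analytics filters can match on.
SYNTHETIC_UA = "SyntheticMonitor/1.0 (+https://example.com/monitoring)"

def synthetic_request(url, timeout=5.0):
    """Issue a check with a distinctive User-Agent so analytics can exclude it."""
    request = urllib.request.Request(url, headers={"User-Agent": SYNTHETIC_UA})
    with urllib.request.urlopen(request, timeout=timeout) as resp:
        return resp.status
```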

Synthetic monitoring best practices

Implementing synthetic monitoring effectively requires following best practices to maximize its benefits and minimize potential pitfalls. Here's a breakdown of key best practices:

1. Define clear objectives and scope

Focus on monitoring the most important paths users take through your application, such as logins, registrations, searches, checkouts, and other core features. Concentrate on transactions that directly impact your revenue, customer satisfaction, and overall business operations. Ensure your synthetic monitoring strategy supports broader business objectives, like maintaining SLAs, improving user experiences, and reducing downtime.

2. Design realistic, robust tests

Use realistic data and interaction patterns that closely mimic how actual users engage with your application. Incorporate wait times, think times, and varying input data. Test from different browsers and devices (desktops and mobile devices) to ensure compatibility and identify platform-specific issues. Monitor from various geographical locations to understand performance variations and ensure availability for users worldwide. Design tests to capture and report errors effectively, including specific error messages and screenshots. As your application evolves, update your synthetic tests to reflect changes in the functionality, UI, and user flows. Treat your test scripts as living documents.

3. Set meaningful thresholds and alerts

Determine typical performance levels for your key metrics (page load times, transaction times, etc.) under normal conditions. Define acceptable performance limits and configure alerts to trigger when these thresholds are breached. Avoid overly sensitive alerts that lead to alert fatigue. Configure alerts to be sent to the right teams or individuals via email, SMS, or other communication channels. Establish well-defined escalation paths for urgent alerts to facilitate timely action and problem-solving.

4. Integrate with other monitoring and DevOps tools

Use RUM data to complement synthetic insights, providing a holistic view of performance from both simulated and real user perspectives. Automate the creation of incident tickets when synthetic tests fail, streamlining the incident response process. Integrate synthetic tests into your continuous integration and continuous delivery (CI/CD) pipelines to automatically test new releases before they reach production.

5. Analyze results and optimize performance

Analyze performance trends, identify recurring issues, and track improvements over time. Leverage synthetic monitoring data to pinpoint performance bottlenecks, optimize code, and improve infrastructure. When issues occur, use synthetic monitoring data and other diagnostic tools to identify the root cause and implement effective solutions.

6. Scale and optimize your monitoring strategy

Adjust the frequency of tests based on the criticality of the monitored feature and the desired level of granularity. Avoid creating overly complex tests that are difficult to maintain and troubleshoot. Balance the need for comprehensive monitoring with the cost of running synthetic tests, especially at scale.

7. Document and communicate

Ensure that test scripts are well-documented and that procedures for creating, updating, and managing tests are clearly defined. Share the data obtained through synthetic monitoring tools and recommendations with the relevant stakeholders to drive performance improvements and ensure alignment across teams.

How synthetic monitoring fits into today’s monitoring ecosystem

Now that we've covered synthetic monitoring itself, I want to touch on one last topic: where does this kind of monitoring fit in your monitoring ecosystem?

The list of types of monitoring is constantly growing. With so many different tools, it's easy to get lost and struggle to find the right use case for synthetics. I believe the testing pyramid presents a useful analogy in this case. Synthetic monitoring sits at the top of the pyramid. It's a high-level, black box–style type of monitoring. Thus, it follows that you should cover critical user flows with it and avoid using it for low-level metrics that are better suited for other monitoring tools. For example, don't try to monitor the response time of a single server with synthetics when you can do it with bespoke infrastructure monitoring instead.

Consider another type of monitoring that sits close to the customer: real user monitoring (RUM). RUM is a tool where you follow sessions based on what every customer is doing in your application. With RUM, you can get a broader perspective on what's happening in terms of where your users come from, the time they spend on different pages, and the errors they get. If you use an integrated monitoring tool, it should mark the sessions created by synthetic tests appropriately. That way, you won't extract the wrong conclusions from the data.

In summary, synthetic monitoring is the cherry on top of your monitoring. You shouldn't replace base monitoring metrics with it, but rather enhance them with extra tests that cover core business capabilities.

Fig. 4. RUM and synthetic monitoring

Choosing the right synthetic monitoring tool

Choosing the right synthetic monitoring tool hinges on aligning the tool's capabilities with your specific needs and objectives. Start by clearly defining what you need to monitor (web apps, APIs, or mobile apps) and the key performance indicators that matter the most to your business.

Consider your required geographic coverage, alerting preferences, and need for integrations with existing systems.

Then, evaluate potential tools based on their supported test types, ease of scripting, real browser testing capabilities, data analysis features, and reporting dashboards. Their deployment model (cloud versus on-premises), ease of use, and maintenance overhead should also be factored into your decision.

Beyond features, consider the vendor's reputation, customer support, and pricing model. A thorough proof of concept is essential. Test prospective tools with real-world scenarios, evaluate their ease of use and integration, and gather feedback from stakeholders. The best tool will effectively address your unique monitoring challenges, integrate seamlessly into your existing workflows, and ultimately contribute to a positive user experience for your customers.

Set up some synthetics!

In this article, I've introduced you to synthetic monitoring, a way of simulating user behavior. As you've seen, codifying user flows as reproducible tests helps you continuously monitor your application more effectively. While browser-based interactions are the most popular facet, let's not forget about how you can test APIs, certificates, and other endpoints.

If you want to get started, check out a monitoring solution that offers synthetics as part of an integrated toolset.


Write For Us

Write for Site24x7 is a special writing program that supports writers who create content for the Site24x7 "Learn" portal. Get paid for your writing.
