Optimus Information

Performance Testing and Resolution Case Study

The client is a global shipping company that serves the world’s leading oil and gas companies.

With over 6000 employees in offices worldwide and a fleet of over 150 ships, the client requires consistently reliable software support and maintenance.

The Challenge

Our client had a mission-critical system that managed the accounts payable workflow in their worldwide offices. Every month, offshore AP clerks scan over 3000 invoices into the system and route them to the appropriate approvers in other offices. The system was intermittently freezing and crashing, causing 5 to 15 minute delays per incident. With 8 to 12 AP clerks losing 5 to 15 minutes several times throughout the day, hundreds of working hours were being lost each month.

Our client needed a partner to troubleshoot the issue, identify the cause(s), and recommend and implement a solution. Their system was quite complex; it consisted of several servers, databases, file servers, web services and scheduled activities.

Key Challenges

– Optimus did not develop or implement the system. We had to ramp up on a third-party system and understand it in-depth to troubleshoot.

– There was no clear pattern to the system failures.

– The system failures were only regularly occurring in offshore locations and not reproducible onshore.

– The system consisted of several layers of technologies: from servers, to databases, to web-services and virtualization solutions.

The Process

  • Identify problem and establish success criteria.
  • Analyze system and benchmark performance.
  • Optimize systems.
  • Deploy, test and maintain the system.

How Optimus Helped

Optimus has an ongoing relationship with the client, so assigning a resource to troubleshoot this issue was not difficult. The project began with a sit-down with the application’s business and technical owners to understand the issue and agree on acceptable success criteria.

We began looking for the cause of the issue by reproducing it and establishing benchmarks against which improvements could be measured. To reproduce the problem, we set up seven workstations with automation scripts and bandwidth limiters to replicate the work being done offshore. By doing this, we identified the precise conditions that triggered the crashes. After the problem was successfully reproduced in a test system, we systematically searched for the failure.
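
As an illustration of the technique (not the actual tooling used on the engagement), the sketch below simulates several clerks fetching invoice attachments while capping each client's throughput to mimic a constrained offshore link; the endpoint, bandwidth cap, and timings are assumptions.

```python
# Minimal sketch of the reproduction technique, not the actual client setup:
# simulated clerks fetch invoice attachments while per-client throughput is
# capped to mimic a slow offshore link. INVOICE_URL and the 128 KB/s cap
# are illustrative assumptions.
import threading
import time

import requests

INVOICE_URL = "https://example.test/invoices/123/attachment"  # hypothetical
BYTES_PER_SEC = 128 * 1024   # assumed offshore bandwidth cap
CHUNK = 16 * 1024

def clerk(clerk_id):
    while True:
        start = time.perf_counter()
        response = requests.get(INVOICE_URL, stream=True, timeout=60)
        for chunk in response.iter_content(CHUNK):
            time.sleep(len(chunk) / BYTES_PER_SEC)   # crude throttle to the cap
        print(f"clerk {clerk_id}: fetched in {time.perf_counter() - start:.1f}s")
        time.sleep(30)   # pause between invoices

for i in range(7):   # seven simulated workstations, as in the test setup
    threading.Thread(target=clerk, args=(i,), daemon=True).start()
time.sleep(600)      # let the simulation run for ten minutes
```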

From the application layer to the database server, our team found several areas to improve. Notably, large attachments were moved out of the database and onto an application server. We then cleaned up the server and optimized its performance. Finally, our team reconfigured the web server to better handle the specific load type.

Once the servers were cleaned up, reconfigured, and optimized for their specific loads, the system stabilized and there were no more regular crashes.

AP clerks were able to process invoices more efficiently since they were not interrupted by system failures several times during the day. Also, invoices are now processed using fewer resources.

Optimus also provided the client with an ongoing maintenance schedule that keeps the systems performing as expected.


MyLoadTest

How to write a performance test case

I have decided to release an early draft of this document so that others may provide feedback. Please let me know what you think.

Writing test cases for performance testing requires a different mindset to writing functional test cases. Fortunately it is not a difficult mental leap. This article should give you enough information to get you up and running.

First, let's set out some background and define some terms that are used in performance testing.

  • Test case – a test case is the same as a use case or business process. Just as with a functional test case, it outlines test steps that should be performed, and the expected result for each step.
  • Test script – a test script is a program created by a Performance tester that will perform all the steps in the test case.
  • Virtual user – a virtual user generally runs a single test script. Virtual users do not run test scripts using the Graphical User Interface (like a functional test case that has been automated with tools like WinRunner, QuickTest, QARun or Rational Robot); they simulate a real user by sending the same network traffic as a real user would. A single workstation can run multiple virtual users.
  • Scenario – a performance test scenario is a description of how a set of test scripts will be run. It outlines how many times an hour they will be run, how many users will run each test script, and when each test script will be run. The aim of a scenario is to simulate real world usage of a system.
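
To make these terms concrete, here is a minimal sketch in plain Python rather than a commercial tool: the test script is a function containing the steps, each virtual user is a thread running that script, and the scenario is the choice of user count, iterations, and pacing. The base URL, credentials, and pacing values are illustrative.

```python
# Sketch of the vocabulary above in code: a "test script" (one user's steps),
# "virtual users" (threads), and a "scenario" (users, iterations, pacing).
import threading
import time

import requests

BASE = "https://example.test"  # hypothetical system under test

def test_script():
    """One pass through the test case: log in, search, log out."""
    session = requests.Session()
    session.post(f"{BASE}/login", data={"user": "vu", "pw": "secret"})
    session.get(f"{BASE}/search", params={"q": "invoice"})
    session.get(f"{BASE}/logout")

def virtual_user(iterations, pacing_s):
    for _ in range(iterations):
        test_script()
        time.sleep(pacing_s)   # pacing controls iterations per hour

# Scenario: 10 virtual users, each running the script 12 times an hour.
users = [threading.Thread(target=virtual_user, args=(12, 300)) for _ in range(10)]
for u in users:
    u.start()
for u in users:
    u.join()
```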

Writing a test case for performance testing is basically writing a simple Requirements Specification for a piece of software (the test script). Just as with any specification, it should be unambiguous and as complete as possible.

Every test case will contain the steps to be performed with the application and the expected result for each step. As a performance tester will generally not know the business processes that they will be automating, a test case should provide more detail than may be included in a functional test case intended for a tester familiar with the application.

It is important that the test case describes a single path through the application. Adding conditional branches to handle varying application responses, such as error messages, will greatly increase script development time and the time taken to verify that the test script functions as expected. If a test script encounters an error that it does not expect, it will usually just stop. If the Project Manager decides that test scripts should handle errors the same way a real user would, then information should be included on how to reproduce each error condition, and additional scripting time should be included in the project plan.

The main reason a user may be presented with a different flow through the application is the input data that is used. Each test case will be executed with a large amount of input data. Defining data requirements is a critical part of planning for a performance test, and is the most common area to get wrong on a first attempt. It is very easy to forget that certain inputs will present the user with different options.

The other important data issues to identify are any data dependencies and any potential problems with concurrency. Is it important that data is used in some business functions before they are used in others? And, will data modified by virtual users cause other virtual users to fail when they try to use the same data? The test tool can partition the data used by each virtual user if these requirements can be identified. It can be difficult for a performance tester to debug test script failures with little knowledge of the application, especially if the failures only occur when multiple virtual users are running at once.
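
As an illustration of such partitioning, the sketch below gives each virtual user a disjoint slice of a shared data file so that concurrent users never touch the same records; the file name and modulo scheme are assumptions.

```python
# Minimal sketch of per-virtual-user data partitioning. The file name and
# the modulo scheme are assumptions for illustration.
def records_for_user(vu_id, total_vus, path="accounts.csv"):
    """Give each virtual user a disjoint slice of the shared data file."""
    with open(path) as f:
        rows = [line.strip() for line in f if line.strip()]
    # User 0 gets rows 0, N, 2N, ...; user 1 gets rows 1, N+1, ...
    return [row for i, row in enumerate(rows) if i % total_vus == vu_id]
```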

One of the most important pieces of information a performance test is designed to discover is the response time of the system under test – both at the overall business function level and at the low level of individual steps in the test case, such as the time it takes for a search to return a result set. Any test cases provided to a performance tester should clearly define the start and end points for any transaction timings that should be included in the test results.

It is important to remember that the test script is only creating the network traffic that would normally be generated by the application under test. This means that any operations that happen only on the client do not get simulated and therefore do not get included in any transaction timing points. A good example would be a client application that runs on a user’s PC, and communicates with a server. Starting the client application takes 10 seconds and logging in takes 5 seconds but, since only the login is sending network traffic to the server, the transaction timing point will only measure 5 seconds.

Operations that only happen on the client, including the time users take to enter data or spend looking at the screen, are simulated with user think time – an intentional delay that is inserted into the test script. If no think time is included, virtual users will execute the steps of the test case as fast as they can, resulting in greater (and unrealistic) load on the system under test. Depending on the sophistication of the performance test tool, user think time may be automatically excluded from the transaction timing points. Think times are generally inserted outside of any transaction timing points anyway.
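
To make these rules concrete, here is a minimal sketch in plain Python (not any particular tool's API) of a timing point that measures only the server interaction, stops on an unexpected response as described earlier, and keeps think time outside the timed span; the URL, query, and 5-15 second think time are illustrative.

```python
# Minimal sketch of a transaction timing point. Only the server interaction
# is timed; think time sits outside the timed span. URL, query, and the
# think-time range are illustrative assumptions.
import random
import time

import requests

def timed_search(session, base_url):
    start = time.perf_counter()                      # start transaction: "search"
    response = session.get(f"{base_url}/search", params={"q": "widgets"})
    elapsed = time.perf_counter() - start            # end transaction: "search"
    if response.status_code != 200:
        raise RuntimeError("unexpected response")    # unexpected result: script stops
    time.sleep(random.uniform(5, 15))                # think time, outside the timing point
    return elapsed
```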

While a functional test case will be run once from start to finish, a performance test case will be run many times (iterated) by the same virtual user in a single scenario. Information on how the test steps will be iterated should be included in the test case. For example, if a test case involves a user logging in and performing a search, and the entire test case is iterated by the virtual user; then a test scenario may be generating too many logins if the real users generally stay logged into the application. A more realistic test case may have the virtual user log in once and then keep doing the same action for as long as the virtual user is run.

When a script is iterated, consideration should be given to the non-obvious details of how it is iterated. A good example would be a test script simulating users using an Internet search engine. When the test script is iterated, simulating a new search operation, should the virtual user establish a new network connection and empty their cache or should every iteration simulate the same user conducting another search?
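
The two iteration styles can be sketched in plain Python (not a particular load tool); the URL and query below are illustrative.

```python
# Minimal sketch of the two iteration styles described above.
import requests

def iterate_same_user(n, base_url):
    """Same user searching repeatedly: cookies and keep-alive connections persist."""
    session = requests.Session()
    for _ in range(n):
        session.get(f"{base_url}/search", params={"q": "laptops"})

def iterate_new_user(n, base_url):
    """A fresh session per iteration: new connection state each time."""
    for _ in range(n):
        with requests.Session() as session:
            session.get(f"{base_url}/search", params={"q": "laptops"})
```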

As all performance test tools have different default behaviours, a good performance tester should clarify this type of detail with business and technical experts. Some performance test tools make these details easier to change than others. If it is not practical to emulate all the attributes of the expected system traffic with a particular tool, then it should be noted as a limitation of the performance test effort, and a technical expert should assess the likely impact.

Hopefully this article has provided some insight into the extra considerations that must be given when writing a performance test case, rather than a functional test case. As with any software specification, a performance test case may need to be refined as questions are raised by the performance tester.


How To Plan, Design, And Execute Test Cases For Performance Testing

About The Author

Nikhil Khandelwal

In today's fast-paced digital world, ensuring flawless application performance is paramount. Performance testing empowers you to proactively identify and mitigate bottlenecks before they impact user experience (UX) and revenue. This blog delves into the meticulous and rewarding process of planning, designing, and executing performance test cases, equipping you with the knowledge and tools to optimize your applications. 

Planning Your Performance Testing Journey

Clearly Define Performance Objectives:

Establish measurable goals aligned with your application's intended use, target audience, and expected load. Quantifiable objectives like response times, throughput, and resource utilization guide your test case design and analysis. 
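
As one way to make such objectives testable, the sketch below evaluates collected measurements against example thresholds; the 800 ms p95 and 1% error rate are illustrative assumptions, not recommendations.

```python
# Minimal sketch of turning objectives into pass/fail checks. The thresholds
# are illustrative assumptions; derive yours from business requirements.
import statistics

def check_objectives(latencies_ms, error_count, total_requests):
    # quantiles(n=100) yields 99 cut points; index 94 is the 95th percentile.
    p95 = statistics.quantiles(latencies_ms, n=100)[94]
    return {
        "p95_under_800ms": p95 <= 800,
        "error_rate_under_1pct": error_count / total_requests <= 0.01,
    }
```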

Understand Your System and Usage Patterns:   

Thoroughly map your application's architecture, identify critical components, and analyze historical usage data (peak hours, user distributions). This knowledge aids in pinpointing potential bottlenecks and tailoring test scenarios. 

Select the Right Tools for the Job:   

Consider factors like test type (load, stress, spike), budget, and resource limitations. Popular tools include LoadRunner, JMeter, ApacheBench, and Gatling. Vlink's performance testing experts can advise on the optimal toolset for your unique needs. 

Prepare Your Test Environment:   

Set up a dedicated testing environment to ensure accurate results, ideally mimicking your production environment. Vlink offers comprehensive testing environments to replicate real-world conditions seamlessly.


Designing Effective Test Cases

Identify Key User Scenarios:   

Prioritize tests that emulate real-world usage patterns like login, search, checkout, and standard API calls. Capture these scenarios in detailed test cases. 

Define Performance Benchmarks:   

Establish baseline performance measures based on historical data or industry standards, then track deviations during testing to identify areas requiring improvement. 

User Load and Concurrency:   

Gradually increase the number of simulated users and concurrent requests to expose potential scalability issues. Vlink's expertise in creating realistic load-injection patterns ensures representative testing conditions. 
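
A stepped ramp-up of this kind can be sketched as follows; the step targets and hold time are illustrative, and dedicated tools manage this declaratively.

```python
# Minimal sketch of a stepped ramp-up. Step targets and hold time are
# illustrative; user_fn is assumed to loop, simulating one virtual user.
import threading
import time

def ramp(user_fn, steps=(10, 25, 50, 100), hold_s=300):
    running = 0
    for target in steps:
        for _ in range(target - running):      # add only the delta of users
            threading.Thread(target=user_fn, daemon=True).start()
        running = target
        time.sleep(hold_s)   # hold each load level while metrics are collected
```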

Consider Different Load Types:   

Include load, stress, and spike tests to understand how your application handles various load profiles. 


Executing Test Cases and Analyzing Results

Thorough Test Execution:

Meticulously run all designed test cases, monitoring key performance indicators (KPIs) like response times, throughput, and resource utilization. Vlink's automation capabilities streamline test execution and data collection. 

Analyze and Interpret Results:   

Compare results against established benchmarks and performance objectives. Identify bottlenecks, performance deviations, and areas for improvement. Vlink's performance testers deliver insightful reports and visualizations to aid root-cause analysis. 

Refine and Optimize:   

Iterate your test cases based on identified issues. Adjust load profiles, test scenarios, or application configuration to improve performance. Vlink provides continuous performance improvement recommendations. 

Additional Considerations:  

Security: Incorporate security considerations into your performance testing plan to ensure a comprehensive evaluation. 

Scalability: As your application and user base grow, ensure your performance testing strategy adapts to accommodate future needs. 

Collaboration: Foster open communication and cooperation among performance testing teams, developers, and business stakeholders to ensure alignment and shared goals. 



Beyond the Basics: Advanced Performance Testing Techniques 

Now that you've grasped the fundamentals of planning, designing, and executing test cases for performance testing, let's explore some advanced techniques to enhance your testing effectiveness further.  

1. Performance Testing for Microservices and APIs:  

Shift-Left Testing: Integrate performance testing earlier in the development lifecycle for microservices and APIs, leveraging tools like Pact and API-Mocker to isolate and test individual components. 

Chaos Engineering: Introduce controlled chaos using tools like Gremlin or Chaos Monkey to assess your system's resilience to unexpected failures and ensure stability under unpredictable conditions. 

2. Performance Testing for Mobile Applications:  

Network Emulation: Use tools like Charles Proxy or Fiddler to simulate different network conditions (cellular, Wi-Fi, varying bandwidth) and analyze their impact on mobile app performance. 

Real Device Testing: Conduct performance testing on actual mobile devices to capture device-specific behavior and ensure optimal user experience across different hardware configurations. 

3. Performance Testing for Cloud-Based Applications:  

Horizontal Scaling Simulation: Evaluate how your application scales horizontally by simulating increased compute resources using cloud automation tools and monitoring resource utilization and performance metrics. 

Vertical Scaling Simulation: Assess the impact of increasing vertical resources (RAM, CPU) on application performance using cloud scaling capabilities and performance monitoring tools. 


4. Performance Testing for DevOps and Continuous Integration/Continuous Delivery (CI/CD):  

Performance Regression Testing: Integrate automated performance tests into your CI/CD pipeline to catch performance regressions early and ensure consistent performance with every code change. 
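
Below is a minimal sketch of such a pipeline gate, assuming latency samples and a baseline stored as JSON files and an illustrative 10% tolerance; a non-zero exit code fails the pipeline step.

```python
# Minimal sketch of a CI regression gate. File names, the JSON layout, and
# the 10% tolerance are assumptions for illustration.
import json
import statistics
import sys

def gate(results_path="latencies.json", baseline_path="baseline.json", tolerance=1.10):
    with open(results_path) as f:
        latencies_ms = json.load(f)                  # list of samples from this run
    with open(baseline_path) as f:
        baseline_p95 = json.load(f)["p95_ms"]
    p95 = statistics.quantiles(latencies_ms, n=100)[94]
    if p95 > baseline_p95 * tolerance:
        print(f"FAIL: p95 {p95:.0f} ms exceeds baseline {baseline_p95} ms by >10%")
        sys.exit(1)                                  # non-zero exit fails the pipeline
    print(f"PASS: p95 {p95:.0f} ms")

if __name__ == "__main__":
    gate()
```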

Performance Canary Analysis: Gradually roll out new code versions to a small subset of users and monitor performance impact before full deployment, using canary deployment techniques. 

5. Data-Driven Performance Testing:  

Leverage Historical Data: Use historical usage data to create realistic load profiles and test scenarios that reflect user behavior, leading to more accurate performance insights. 

Machine Learning for Load Injection: Utilizing machine learning algorithms to analyze past load patterns and predict future demands enables dynamic load injection for more efficient testing. 

NOTE: Performance testing is an iterative process. Continuously analyze results, refine test cases, and optimize your application for optimal performance and user satisfaction.  


Why Choose Vlink for Performance Testing 

Proven Expertise: Vlink boasts a team of seasoned performance testing specialists with extensive experience across diverse industries and technologies. 

Comprehensive Solutions: Vlink offers a full spectrum of performance testing services, from test plan creation and tool selection to execution, analysis, and optimization. 

Cutting-Edge Tools and Infrastructure: Vlink leverages industry-leading performance testing tools and maintains a scalable testing infrastructure to provide accurate and comprehensive results. 

Customized Approach: Vlink tailors its performance testing solutions to your specific needs and budget, ensuring an optimal return on investment.

Ongoing Support: Vlink's performance engineers readily assist you throughout the testing process and beyond, offering expert guidance and problem-solving. 



Partner with Vlink for Advanced Performance Testing Solutions

Vlink goes beyond basic performance testing, offering advanced techniques and expertise to elevate your testing practices. 

Microservices and API Testing: Vlink leverages specialized tools and frameworks to thoroughly test microservices and APIs in isolation and within the overall system. 

Mobile Performance Testing: Vlink's mobile testing team, equipped with diverse devices and network emulators, delivers comprehensive mobile app performance assessment. 

Cloud Performance Testing: Vlink's cloud expertise and automation tools enable accurate performance testing in various environments and configurations. 

DevOps and CI/CD Integration: Vlink seamlessly integrates performance testing into your DevOps pipelines, ensuring continuous performance assurance. 

Data-Driven Performance Testing: Vlink's data scientists leverage historical data and machine learning to create intelligent load profiles and optimize testing efforts. 

By partnering with Vlink, you gain access to a team of performance testing specialists who can guide you through every step, from selecting the right tools and techniques to implementing advanced strategies and deriving actionable insights. 

That’s it! I hope this blog provides valuable insights into advanced performance testing approaches and underscores the benefits of partnering with Vlink for success. 


Conclusion 

By following the above steps and leveraging Vlink's expertise, you can confidently embark on your performance testing journey, ensuring your applications perform flawlessly and delight your users. Remember, performance testing is not a one-time event; it's an ongoing process that requires continuous monitoring, analysis, and optimization. Invest in our effective performance testing solution to deliver a seamless and competitive user experience. 

Frequently Asked Questions

How often should performance testing be done? It depends on app criticality, development pace, and traffic: regularly for critical apps, less often for others.

What are the benefits of performance testing? Better user experience, scalability, reduced costs, increased resilience, and data-driven decisions.

What are the common challenges? Choosing tools, creating realistic scenarios, interpreting results, and integrating them into development.

Why choose Vlink? Experts, comprehensive solutions, top tools, a custom approach, and ongoing support.

Ready to optimize your app performance? Talk to our experts today!


Reports & White Papers

A practical guide to continuous performance testing.

You don’t need another thesis on the benefits of testing early and often. This guide is intended for performance engineers who get the “why” of automating performance testing in CI/CD pipelines but need practical advice on the “how.”

This paper helps you get from theory to practice. Specifically, we offer guidance on laying the foundation for a successful transition to an automated continuous testing approach, with pragmatic solutions to overcoming common automation blockers.

Topics include:

  • Strategies for prioritizing what to automate
  • Picking the right targets
  • “Easy” scripting
  • Best practices for developing dedicated performance pipelines
  • Overcoming test infrastructure obstacles
  • Ensuring trustworthy go/no-go decisions

Developed by performance engineers for performance engineers, “A practical guide to continuous performance testing” is your first best step to getting a continuous performance testing process up and running.


QA Touch


Real-world examples of performance testing in action

Varun Sharma

It is crucial to conduct performance testing on software applications and systems to achieve optimal performance in real-world scenarios. This is particularly essential for websites and mobile applications that experience high traffic volumes or have intricate user interactions. 

Companies use performance testing to identify and address bottlenecks, improve response times, and manage heavy loads. Real-world examples of performance testing in action provide valuable insights.


Why Are Real-World Examples Important?

Real-world examples are essential in enhancing the understanding and application of performance testing methodologies in real-world scenarios. Analyzing how different organizations have executed these tests and their consequences can offer invaluable knowledge on the optimal approaches. Furthermore, gleaning from successful implementations can aid teams in better planning and preventing costly errors.

Performance testing is crucial to ensuring a positive user experience by identifying and addressing potential issues before launch. This can result in optimal functionality in real-world situations, ultimately improving customer satisfaction and retention and fostering long-term trust with customers.

Utilizing real-world examples is crucial for performance testing companies to ensure that applications or systems meet user expectations and deliver optimal user experiences. These examples offer valuable insights into effective testing strategies.

Studying real-world performance testing examples can offer valuable insights for development teams. By analyzing both successful and unsuccessful scenarios, QA teams can establish a trustworthy framework for ensuring optimal application or system performance in real-world situations. Planning for performance tests is a critical process that requires careful consideration of potential bottlenecks and issues that affect the system’s performance.

Performance Bottlenecks and Issues to Consider


Addressing performance bottlenecks and issues is crucial for development teams, as it can affect the application or system’s success in meeting user expectations.

Response Times

Response times are a significant factor to consider in performance testing. Slower response times can lead to user dissatisfaction and application abandonment. Therefore, monitoring response times during performance tests is vital for the optimal functioning of applications and systems in production environments.

Memory Leaks & Usage

Memory usage and leaks are essential performance metrics that should be considered during testing. A memory leak happens when a program allocates memory without freeing it, resulting in longer load times and decreased system stability. To identify memory leaks during performance testing, teams must track the amount of memory used over time.
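
As a sketch of how such tracking might look, the snippet below samples a process's resident memory at fixed intervals during a long-running test; it assumes the third-party psutil package and illustrative sampling values.

```python
# Minimal sketch of memory tracking during a soak test. Requires the
# third-party psutil package; interval and sample count are illustrative.
import time

import psutil

def watch_memory(pid, interval_s=60, samples=60):
    process = psutil.Process(pid)
    readings_mb = []
    for _ in range(samples):
        readings_mb.append(process.memory_info().rss / (1024 * 1024))
        time.sleep(interval_s)
    # Steadily rising RSS under constant load suggests a leak.
    return readings_mb
```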

Real user performance tests evaluate the capabilities of mobile applications under heavy loads and a wide range of traffic

Performance testing is a vital component of mobile application testing. Real user performance tests imitate real user actions and evaluate how well the application performs in different circumstances. This kind of test can assist in spotting possible problems, such as prolonged page load times, high data usage, and non-responsive UI components.

Performance Testing Processes and Tools

Performance testing is an essential aspect of software development that entails designing tests to assess the performance of a system or application. Teams can leverage performance testing techniques and tools to identify problems before the system’s deployment into production. Load-testing tools are employed to gauge system response to heavy loads and replicate user traffic patterns.

Load Testing Tools

Load testing tools are used to evaluate how a system performs under heavy loads and simulated user traffic. They help identify any potential performance issues before the system is put into use. These tools measure response times, throughput, and memory usage while processing large amounts of data.

Examples of Companies that Use Performance Testing in Action

Performance testing is a crucial aspect of ensuring the expected performance of applications in production, and numerous companies are utilizing it to its fullest potential. Companies such as Amazon and Microsoft implement automated performance testing to maintain the stability and optimal functionality of their web-based services.

Google utilizes performance testing in its continuous integration process to detect and address any possible application problems before they are released to the public. Apple uses performance testing to confirm the functionality of their iOS applications on various devices, ensuring a consistent experience for all customers.

Companies use performance testing in their development processes to identify potential issues and improve their applications, ensuring the best possible customer experience. Performance testing is a useful tool for software development teams when implemented correctly.

Examples of Real-World Performance Testing in Action

Performance testing is a crucial aspect of software development, ensuring that applications meet user expectations for speed, responsiveness, and reliability. It helps prevent issues such as slow page loads, extended wait times for API calls, and crashes resulting from memory or resource overloads. The following are some examples of performance testing in action:

  • Load tests in web applications simulate real-life situations to detect scalability-related issues and bottlenecks. A test can measure the number of simultaneous users a website can accommodate before the system crashes or response times become too slow. These tests help developers identify areas that need improvement.
  • Performance tests for mobile apps evaluate battery consumption, network connectivity, launch speed, and power usage during background tasks. These tests are conducted to enhance the user experience of the app.
  • Database performance testing verifies that a database system upholds data integrity and access speed during query execution. These tests evaluate response times under varying conditions, including different data sizes or concurrent requests from multiple clients. Teams can use the outcomes to optimize their database design and guarantee the ability to manage significant data volumes without compromising performance.

Comprehensive performance tests help deliver dependable, effective, and secure applications. The examples below cover web applications and websites, mobile applications, and cloud-based applications.

Example 1: Web Applications & Websites

Performance tests are necessary for web applications and websites to manage huge data sets and increased traffic. These tests measure metrics like loading speed and memory usage to identify areas for improvement. Load testing simulates real-world scenarios and improves scalability. Performance tests also monitor website uptime and enhance user experience.

Example 2: Mobile Applications

Mobile apps need performance testing just as web apps and sites do. Tests measure memory usage, battery consumption, and network latency. Load testing finds system limits and improves scalability. Performance tests check uptime across platforms and devices. Stress tests simulate high user loads to measure response times. Regular testing helps developers optimize for any device or OS.


Example 3: Cloud-Based Applications

Cloud-based applications are popular software. Performance testing is necessary for optimal performance and scalability. Load tests help identify system limitations and improve scalability. Stress tests simulate real-world scenarios with high user loads or concurrent requests to measure response times. Latency tests measure request travel time. Network tests test application functionality in different network environments. Regular performance tests ensure optimized applications for all users.

About The Author Varun Sharma

Varun is the QA Lead for Devstringx Technologies, which offers top independent software testing services in India. His proficiency in Agile methodology enables him to test software at all layers of the Test Pyramid, including unit and integration testing. He also reduces QE effort in end-to-end testing by incorporating more contract test automation in projects that follow a microservices architecture development model. With his extensive knowledge and expertise in functional and non-functional testing, Varun also trains offshore and onshore QE team members. “Fast, Good, Cheap: Pick Any Two” – Varun Sharma

Northway Solutions

Performance Testing 101: Determining Peak Load – A Case Study

Posted in April 2013 by Admin

This is the fourth installment in a multi-part series addressing the basics of performance testing applications. It’s for the beginner, but it is also for the experienced engineer to share with project team members when educating them on the basics. Project Managers, Server Team members, Business Analysts, and other roles can use this information as a guide to understanding the performance testing process, the typical considerations, and how to get started when you have never embarked on such a journey.

Let’s use a case study to walk through the thought processes used to determine peak load for an application performance test.

It is common that the load on a production system is not constant over time, and that business activities cause statistical fluctuations as well as more systematic changes in the load. A typical example is time sheet recording, which most employees tend to do on Friday afternoon. The peak load from such a usage pattern can be significantly higher than the average load.

Often, sizing and testing a system for such peak load is defined as a requirement. Since performance should not degrade on Friday afternoon during the peak load period, it is all the more important to determine what the peak load will be. If estimates are too low, one risks insufficient hardware sizing, which leads to performance degradation. If peak load sizing is too high, too much hardware may be purchased, and the cost of ownership and maintenance increases greatly.

An example: when considering peak loads, you might ask what the highest load, measured in requests/sec, is that 1000 users can cause. Theoretically the answer is 1000 requests/sec, if all 1000 users happen to click at exactly the same time. However, since end users work asynchronously, the chances that this situation will occur are very small.

The extreme example above demonstrates the need to look at peak loads over different time intervals: the maximum average load is observed over intervals of increasing length. Sample measurements for 2000 logged-in users performing time sheet entries could be:

  • maximum average over one week: 2 requests/sec
  • maximum average over one hour: 10 requests/sec
  • maximum average over one minute: 30 requests/sec
  • maximum average over one second: 100 requests/sec

This statistical phenomenon shows that the shorter the time interval, the higher the peak load. In the case above, the long-term average load during office hours is only two requests/sec, yet peaks of about 50 times the long-term average were observed.
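
The window-based peak figures above are easy to derive from a raw request log. Below is a minimal sketch, assuming a list of request timestamps in seconds, that computes the maximum average rate for any chosen window size.

```python
# Minimal sketch: compute the maximum average request rate for a given
# window size from a list of request timestamps (in seconds).
from collections import Counter

def max_avg_rate(timestamps, window_s):
    buckets = Counter(int(t // window_s) for t in timestamps)
    return max(buckets.values()) / window_s   # requests per second

# for window in (1, 60, 3600):               # second, minute, hour
#     print(window, max_avg_rate(request_log, window))
```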

One can argue that requests that cannot be served by the application immediately, due to a lack of resources, are queued and processed when the peak is over. A peak load lasting one second would cause performance degradation for a total of only a few seconds. Based on this argument, it is up to the customer to find a reasonable compromise between the length of the time interval for which some performance degradation can be tolerated and the maximum peak load for which the hardware should be sized. In the case above, the decision was to size the hardware for 30 requests/sec, covering load peaks that might last for one minute, and to accept short intervals of a few minutes during which performance might occasionally be degraded. Compared to sizing for the one-second peak, this cut hardware and hardware operation costs to roughly a third. This requirement would also be the goal for your peak load test. If you have not yet discussed these aspects of peak load, you should do so with the end users to evaluate the requirements for your test strategy.

In the next installment of this blog series we will introduce the concepts around creating Business Process Profiles and Server Configuration Profiles.



Performance Testing vs Load Testing and Their Examples


In software development, it is crucial to test applications under different scenarios. As end users, we simply expect an app to work smoothly, no matter how many people are accessing it simultaneously. That is where testing comes in.

You might have heard of performance testing and load testing. The terms are often used interchangeably; however, they serve different purposes. Let us look at each in detail.

What is performance testing? 

Performance testing is a testing methodology that determines how an application performs in different scenarios. The primary goal is to check the speed and stability of the application. By testing the app under real-world conditions, developers identify challenges proactively, which allows them to optimize performance accordingly.

Performance testing and load testing are important components of zero touch quality orchestration within quality engineering. 

Types of Performance Testing 

Here are the key types of performance testing.

  • Load Testing: Simulating actual user traffic. 
  • Stress Testing: Testing beyond the expected load limits. 
  • Scalability Testing: Ensuring the application runs smoothly during peak demand. 
  • Endurance Testing: Checking performance over a long period. 

Load Testing 

It evaluates how an application performs under expected user traffic. It ensures that it can handle normal loads without slowing down. For example, an e-commerce website simulates 10,000 users per hour during a sale event, verifying that it runs smoothly without performance degradation. 

  • Key Metrics: Response time, throughput, server resource usage. 
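
One way to translate such traffic figures into a virtual user count for a load test is Little's Law: concurrent users = arrival rate × average time in the system. The sketch below applies it to the illustrative 10,000-users-per-hour figure above, with an assumed five-minute average session.

```python
# Minimal sketch: Little's Law (concurrency = arrival rate x time in system)
# applied to the illustrative sale-event figure above. The five-minute
# average session is an assumption.
arrivals_per_hour = 10_000
avg_session_s = 5 * 60

arrival_rate_per_s = arrivals_per_hour / 3600        # ~2.78 sessions/s
concurrent_users = arrival_rate_per_s * avg_session_s
print(round(concurrent_users))                       # ~833 concurrent virtual users
```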

Stress Testing 

It pushes the application beyond normal capacity to observe how the application behaves under extreme load and to identify breaking points and failure modes.

  • Example: Stress testing simulates 100,000-200,000 users for an e-commerce site to see if it crashes or slows down during peak load. 
  • Key Metrics: System breakdown point, error rates, recovery time. 

Scalability Testing 

It verifies whether the system can handle increased demand without degrading performance. 

  • Example: Coursera tests its platform to ensure it can scale from 10,000 to 100,000 users smoothly during course launches. 
  • Key Metrics: Ability to scale up/down, resource efficiency. 

Endurance Testing 

It evaluates system performance under continuous load over an extended period to identify long-term issues such as memory leaks. 

  • Example: QuickBooks runs continuous transactions for 24-72 hours to check for memory leaks or crashes over time. 
  • Key Metrics: Memory usage, response time stability, long-term system performance. 

What is Load Testing? 

It is a type of performance testing. It evaluates the behavior of an application under a specific load. It focuses on measuring the following factors when the app is subjected to a considerable number of concurrent users/transactions. 

  • Response Time 
  • Throughput 
  • Overall Stability 

Key Aspects of Load Testing 

Load testing is done to test the application under real load conditions. The key aspects tested during load testing are: 

1. Simulation: Multiple users accessing the application at the same time are simulated to check how it performs. 

2. Performance Evaluation: Factors like response time, error rates, and latency are measured. 

3. Threshold Testing: This identifies the maximum load the application can handle before failing. 

4. System Behavior: This provides insights into how the application behaves under varying levels of load, which helps with future planning. 

Conducting load testing can be beneficial in many ways. In many cases, an app can perform well for 100 users but may crash when users reach 500. 


What is the difference between performance testing and load testing?

The key distinctions between performance testing and load testing are: 

  • Scope: Performance testing is the umbrella term; load testing is one of its subsets.
  • Goal: Performance testing evaluates overall speed, stability, and scalability across scenarios; load testing evaluates behavior under a specific, expected user load.
  • When: Performance testing fits early development and pre-release evaluation; load testing fits preparation for peak traffic and verification after major changes.

When to Use Performance Testing and Load Testing 

  • Use Performance Testing when you want a comprehensive evaluation of your application. This should be done during the early stages of development and before a major release to ensure optimal performance. 
  • Use Load Testing when you are preparing for peak user traffic, like during a promotional event or product launch. Also, it is wise to conduct load testing after major code changes or infrastructure modifications. 

Latest Tools in Performance Testing and Load Testing 

Widely used tools include Apache JMeter, LoadRunner, Gatling, and k6. The right choice depends on the protocols to be simulated, the scripting model, and budget. 

Why is the Distinction Important? 

For effective software evaluation, understanding the difference between performance testing and load testing is essential. Each type of testing targets different performance aspects. If developers fail to understand the difference, it can lead to unexpected problems: if the focus remains solely on load testing, issues related to user experience might be missed. 

Conclusion 

Understanding the nuances between performance testing and load testing is vital for developers and businesses alike. Each plays a unique role in ensuring applications run smoothly, especially under varying user loads.  

By investing time and resources into both forms of testing, you can build robust applications that satisfy user demands and expectations.  

Ready to Optimize Your Application’s Performance? 

Don’t let unexpected downtime affect your users. Ensure your application runs smoothly under all conditions by implementing effective performance and load testing strategies. Contact us today for a consultation or to learn more about our performance testing services. Let’s work together to keep your application reliable and user-friendly! 






Performance Testing for Web based Application using a Case Study


2019, GRD Journals

Performance testing is a type of testing performed to check how an application or piece of software performs under workload in terms of responsiveness and stability. The primary goal is to identify and remove performance bottlenecks from an application. This test is mainly performed to check whether the software meets the expected requirements for application speed, scalability, and stability. Running more than 20,000 trains every day, Indian Railways is one of the world's busiest rail networks; it carries more than 2,00,00,000 (20 million) people every day, and more than 6,00,000 ticket bookings are made online. To ensure the performance of the system under that volume of transactions, performance testing is performed using one of the available performance testing tools. In this work, the IRCTC web site is taken as a case study and is tested with more than 1,00,000 (100,000) virtual users, with its performance shown in graphical charts. Such a test can be used to simulate a heavy load on a server or group of servers, a database, or a network to test its strength, or to analyze overall performance under different load types.

Related papers

Performance testing can measure the speed of a computer, network, software program, device, or web application. It can be used to measure the response time, throughput, and resource utilization of the server as the system functions. When the number of users hitting the application is large, the site might not behave as it does for a single user: under concurrent use, users may face issues such as internal server errors, timeouts, application crashes, and slowness of the application. For example, when a big sale was announced for Snapdeal, the number of users hitting the application on that particular day was huge, 100% higher than usual. Proactive performance testing with the anticipated load could have prevented this failure on their big sale day. Keywords: Client Response Time, Server Response Time, Testing the Performance.

Bulletin of Kharkov National Automobile and Highway University

Problem. Today, performance testing is an integral part of web application quality assurance, since performance failures and performance issues affect the business of the application's owners. Goal. The goal of the work is to generalize approaches and methods to improve the quality of web applications and develop recommendations for improving performance testing using open source tools. The object of research is the process of testing web applications. The subject of research is the approaches, methods and tools of performance testing. Methodology. The study identified the impact of software performance testing on its quality and its main types, namely load testing, stress testing, volume testing, and stability testing. The main stages of performance testing and their content were identified. To implement modern automated testing technologies, the advantages and disadvantages of the most popular tools for testing performance in the modern IT market and continuous visualization of their result...

Web applications today are becoming richer and more complex. To build such applications, developers use Ajax and Web 2.0 technologies. These powerful technologies offer advanced features for building user-friendly, highly interactive web applications that provide a quality end-user experience. Deploying a web application is a challenge both in assuring that its functionality will be maintained and in guaranteeing that the functionality will be delivered with acceptable performance. Performance problems can bring all kinds of undesired consequences. For web applications, especially in an e-commerce situation, performance testing is crucial. Performance testing is a type of testing that is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. In this paper we discuss general concepts, practices and tools that lie at the core of performance testing web applications. Performance analysis tools from the open-source ...

This study was conducted on the importance of performance testing of web applications and on analyzing application bottlenecks. The paper highlights performance testing based on load tests. Everyone wants an application to be fast; at the same time, reliability plays an equally important role, and user satisfaction is the push for performance testing a given application. Performance testing determines several aspects of system performance under a predefined workload. In this study, the JMeter performance testing tool was used to implement and execute the test cases. The first load test was run with 200 users, then increased to 500 users, and the throughput, median, average response time, and deviation were calculated; a sketch of computing such metrics follows.
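For illustration only, here is a minimal sketch of how the metrics named above (throughput, median, average response time, deviation) can be computed from a JMeter results file. It assumes JMeter's default CSV result columns, "timeStamp" (epoch milliseconds) and "elapsed" (response time in milliseconds); the file name is hypothetical.

    # Sketch: summarising a JMeter CSV results file (a .jtl saved as CSV).
    import csv
    import statistics

    def summarize(jtl_path):
        timestamps, elapsed = [], []
        with open(jtl_path, newline="") as f:
            for row in csv.DictReader(f):
                timestamps.append(int(row["timeStamp"]))  # epoch ms
                elapsed.append(int(row["elapsed"]))       # response time, ms
        duration_s = (max(timestamps) - min(timestamps)) / 1000 or 1.0
        return {
            "samples": len(elapsed),
            "throughput_per_s": len(elapsed) / duration_s,
            "average_ms": statistics.mean(elapsed),
            "median_ms": statistics.median(elapsed),
            "deviation_ms": statistics.pstdev(elapsed),
        }

    print(summarize("results_500_users.jtl"))  # e.g. after the 500-user run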

The number of users accessing an e-commerce website is generally high, and websites must handle an increasing number of concurrent users to keep up with growing online business needs. The architecture of a company's website should be robust enough to manage the expected traffic under heavy load; the inability to support growing customer demand frustrates users and leads to heavy business loss. It is therefore mandatory for e-commerce websites to perform load testing to assess how robustly their architecture scales. This study assesses the performance of e-commerce websites based on throughput, availability, and response time. It uses Apache JMeter to load test five selected e-commerce websites in Thailand, emulating customer behavior at heavy load levels, and proposes a methodology that could help future testing practitioners and researchers perform load testing efficiently.


Load Testing Case Studies

Since 1999, Web Performance has helped some of the world's most well-known organizations bullet-proof their websites against increasingly large volumes of traffic. Using our software or consulting services, we'll help you optimize any site for any number of visitors. How can we help you grow?



Performance Testing for a 510k Submission: Case Study (Part 2)

This case study (Part 2) walks through the performance testing required for an FDA 510k submission.


Performance testing is an essential part of new product development and is usually the last section that you can complete before your submission. In my previous 510k case study article, I showed you how to research the FDA classification database to determine if there is a special controls guidance document to follow in the preparation of your 510k submission. The example I used was for topical adhesives (i.e., MPN). Topical adhesives do not have any Recognized Consensus Standards listed. Instead, all the performance testing requirements are specified in the special controls guidance document.

How to find performance testing requirements

In this case study article, I selected a different product code that has Recognized Consensus Standards but no special controls guidance document. When there is no special controls guidance document, after identifying the device classification and product code you need to plan your performance testing from other sources of information. I use three methods for determining what performance testing is needed:

  • Look for any device-specific standards
  • Review other 510k summaries
  • Order previous 510k submissions via FOIA requests

To learn more about creating a 510k test plan, please see our webinar on this topic.

For this case study, the product code selected was a bone fixation screw (i.e., HWC). The number of predicate 510k submissions to choose from for this product code is extensive. There are 29 from Arthrex alone. Some of these 510k submissions include a 510k statement, while others include a 510k summary. A statement is not directly helpful in identifying any of the performance testing that was used for the clearance of the potential predicate device. However, 21 CFR 807.93 requires that the company that submitted the 510k shall provide a redacted copy of the 510k submission within 30 days of the request. If this is requested early in your 510k project, you should have a copy of the submission in time to plan your performance testing for verification and validation of the subject device. You can also order predicate 510k submissions through the Freedom of Information Act (FOIA) request process.

In the case of a 510k summary, the summary indicates what performance testing was performed to demonstrate substantial equivalence. In the case of K103705, the section titled “Substantial Equivalence Summary” states that mechanical testing data for torque and pull-out testing was submitted for the subject device and the predicate device. Other 510k summaries may provide additional data or a more descriptive list of testing that was performed. In the case of this 510k example, there is a second product code listed: HRS, bone fixation plate. The HWC bone fixation screw product code indicates that there are 5 Recognized Consensus Standards:

  • ASTM F2026-14, Standard Specification for Polyetheretherketone (PEEK) Polymers for Surgical Implant Applications
  • ASTM F897-02 (Reapproved 2013), Standard Test Method for Measuring Fretting Corrosion of Osteosynthesis Plates and Screws
  • ASTM F1839-08 (Reapproved 2012), Standard Specification for Rigid Polyurethane Foam for Use as a Standard Material for Testing Orthopaedic Devices and Instruments
  • ASTM F983-86 (Reapproved 2013), Standard Practice for Permanent Marking of Orthopaedic Implant Components
  • ASTM F565-04 (Reapproved 2013), Standard Practice for Care and Handling of Orthopaedic Implants and Instruments

Only three of the above standards are included in the list of eight Recognized Consensus Standards related to the HRS product code. One of those eight standards should probably be covered under the HWC product code as well:

  • ASTM F543-13, Standard Specification and Test Methods for Metallic Medical Bone Screws

Now you have a total of six different device-specific standards that can be used for planning the performance testing of your bone screw. This is significantly more helpful than a 510k summary that says torque and pull-out testing was performed. After you have ordered and reviewed each of the standards, you then create a list of performance tests that apply to your screw and create an overall verification and validation plan.

It is essential to perform this review each time, because there may be new or revised testing methods established as the Recognized Consensus Standards are updated. If you outsource testing, then you will need to obtain a quotation from a testing lab for each of the applicable tests.

Once you have created a comprehensive testing list, and you have quotations for all the testing required, you need to schedule the testing and ship samples to the testing lab. Once testing has begun, this is the best time to start the preparation of your 510k submission. Performance testing often takes several months to complete. If you start preparing the 510k before you have ordered the testing, then you are starting too early, and you may have to change your performance testing summary multiple times.

If you start your 510k preparation after you order your testing, then you can create the entire performance testing summary. The only information that you will be missing is the final report number for each test being performed. For the most part, you do not need the specific results of the testing, because the tests are designed to show that the subject device is “equivalent” or “not worse” in performance. Quantitative comparisons between your subject device and the predicate device are not allowed by the FDA for a 510k submission. Your subject device must be “equivalent” or “not worse than” the predicate device concerning safety and efficacy.

Additional 510k Training

If you enjoyed this performance testing case study and would like more 510k training, please search our website for more articles. We wrote a 510k book in 2017, when we first started hiring consultants to grow Medical Device Academy from an independent consulting business into a consulting firm. The book was called "How to Prepare Your 510k in 100 Days." Changes to the FDA 510k process have been rapid over the past 7 years, and that content is no longer relevant, but there is an online 510k course series consisting of 33 new FDA eSTAR webinars. You can also purchase our webinars individually.


Case Studies

Tx Assisted a Leading Insurance Company Achieve Complete Automation and Stable SIT Environment 

Tx worked closely with the client and their in-house teams, understood their business needs, and was involved in the testing activities for the system-integration-testing (SIT) environment. The Lines of Business (LOBs) covered by this test engagement were Workers' Compensation, General Liability, and Surplus, across Billing, Policy, and Distribution Management.


Tx Accelerated POS Modernization for Leading Quick-Service Restaurant Chain

Tx identified obstacles such as aging, insecure, and unsupported systems; disparate POS systems across stores; tools unable to support continuous testing; and differences in localization and recipes across regions worldwide.


Tx Boosts Performance and Scalability for Leading Super OTT App 

The client is a leading media and entertainment service provider with holdings in OTT, print, and electronic media. Its content discovery platform offers a curated selection of movies and shows across various streaming platforms, emphasizing a customized viewing experience based on each user's preferences.


Tx Helps an Audio Streaming Services Provider Achieve 50% Faster Time-to-Market 

Tx deployed a team in a hybrid (onsite and offshore) model, performed test advisory to create an automation roadmap, and set up an enterprise-level test automation solution based on UiPath Test Suite, working closely with the client's teams.


Tx Helped a Renowned Insurance Company in the US Establish QA Center of Excellence to Achieve High-quality Software & A Greater CX with Test Advisory Services 

Tx professionals focused on critical system aspects, including planning, process control, performance metrics, test automation, execution, defect tracking, and reporting. We helped the client achieve their respective goals.


Tx Helped a Leading Mobile App Development Agency in UK Achieve Fully Functional Apps and Faster Time-to-Market 

Tx delivered end-to-end functional testing services to ensure the mobile apps functioned seamlessly and delivered a good user experience (UX), and performed risk-driven functional testing to ensure optimal test coverage.


Tx Helped a Leading Insurance Company in the U.S. Achieve a Fully Functional Website and Reduced Time-to-Market by 30%  

Tx worked with the client to understand various insurance processes and suggested a flexible, cost-effective framework, reducing time to market by 30% and saving significant costs.


Tx helped an Intelligent Transport Systems Provider Save QA Costs by 40% with DevOps Services 

Tx performed DevOps consulting and enabled DevOps CI/CD deployment services in an offshore model to deploy the client's applications and microservices on the cloud (Azure), delivering 40% QA cost savings through the offshore model.


Tx helped a Leading Farm Credit Services Provider Achieve 90% Reduced QA Cycle Time and 50% Reduced Time to Market with Automated Functional Testing 

To help the client achieve this goal, we enabled end-to-end automated functional testing of their farm credit application suite, comprising more than ten applications, using DevOps CI/CD methodology.


Tx Helps an IoT Solutions Provider in Belgium Achieve 100% Automation and Improved Customer Experience 


Tx Helped a Leading Workflow Solutions Provider Achieve 99.8% Improvement in Application Reliability

Tx helped the client with a test strategy and framework delivering automation and detailed performance testing results, and assisted the client by introducing independent testing and test automation.


Tx Helped a Leading Investment Services Company Save 40% QA Cost & Achieve Successful Implementation of CRIMS System with Minimal Defects 



Tx Assisted a US-Based Financial Services Company Achieve 100% Automation and Reduce QA Costs By 40%

Tx partnered with the client to understand the application flow and assisted in automating test cases and batch execution. The client saved costs, streamlined processes, and scaled the business.


Tx’s Automated Testing Solution Assisted a US-based Tourism Company to Achieve 40% Faster Time-to-Market 

Tx teams understood the client's business processes and rules and suggested a flexible, customizable, and cost-effective framework. Teams performed test automation to verify complete functionality and compiled the regression results into a summary report.


UK-based Workplace Management Software Provider Achieved 90% Mobile Regression Suite Automation

Tx created a shared testing center for the client in a hybrid model, with scheduled onsite and offshore execution. Teams conducted a gap analysis of the client’s current QA processes and practices and recognized how well the QA stream could be integrated into their mainstream Software Development Life Cycle (SDLC). Further, the team recommended an implementation plan to bridge the identified gaps. 


Tx Assisted an Insurance Company Achieve 40% QA Cost Savings & 50% Faster Time-to-Market  

They partnered with Tx to implement automation and continuous integration services to streamline their processes, reduce manual errors, and enhance their overall productivity as part of their digital transformation strategy.


A Pet Medical Insurance Provider Achieved 30% QA Cost Savings & Fully Functional App by Partnering with Tx 

Tx, with its highly skilled team of QA resources, worked on the client's requirements. Once the testing engagement started, teams performed functional and database testing.



Mastering Performance Testing: 5 Key Lessons from Our 20-Year Journey


Tuesday, 14 November 2023

Prolifics have been performance testing software applications, websites and mobile apps for over 20 years. Here are a few things we've picked up along the way:  

1. Good Outcomes Start with a Good Test Plan

Here’s how we approach planning, the most important bit to get right:  

  • Understanding Business Objectives: as a first step, it's vital to understand what the software aims to achieve from a business perspective. This involves communicating with stakeholders to establish performance goals and align them with business outcomes, including understanding how the system will be used, who will use it, and where the peaks in demand lie.
  • Gathering System Requirements: in-depth knowledge of the system architecture, technology stack, and infrastructure is needed to define which toolset will drive the tests and how infrastructure monitoring will be managed.
  • User Behaviour Analysis: we analyse user behaviour to identify the most common paths through the application (user journeys), along with peak usage volumes and expected growth in user numbers. This information is crucial for creating realistic load simulation models.
  • Defining Performance Criteria: we identify and define the performance criteria based on business objectives. Non-functional requirements may already be in place, but often they're not. NFRs feed into the analysis and results phase, where it becomes clear whether each test has met the performance criteria.
  • Test Environment Configuration: the test environment should mirror the production environment as closely as possible to ensure accurate results. This involves specifying the right hardware, network configuration, and other attributes to get the test environment as close to production as possible.
  • Tool Selection: choosing the appropriate tools is fundamental. We select tools that can simulate the expected load and provide detailed analytics. Typically we use JMeter for web applications, using the Prolifics accelerators and pre-built environments. Where more complex applications and thick clients need to be tested, we will typically use either OpenText LoadRunner or Tricentis NeoLoad.
  • Test Scenario Identification: with all the information in hand, we identify test scenarios that measure the behaviour of the system, combining scripts and data to exercise it. Examples include normal, peak, stress and soak tests, each with a different objective (a minimal scripted example follows the next paragraph).

Through the planning phase, we lay the groundwork for a successful project. An important part of planning is also gaining consensus with our customers on approach, scope and volumes.
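As a neutral illustration of such scripted scenarios (not the tooling from any particular engagement), here is a minimal user journey in Locust, an open-source Python load testing tool; the endpoints, task weights and run parameters are hypothetical.

    # Sketch: a scripted user journey for load testing, using Locust.
    from locust import HttpUser, task, between

    class BrowsingUser(HttpUser):
        wait_time = between(1, 5)  # think time, to approximate real user pacing

        @task(3)  # weighted: browsing is the most common journey
        def view_catalogue(self):
            self.client.get("/products")

        @task(1)
        def search(self):
            self.client.get("/search", params={"q": "widget"})

    # Run, for example:
    #   locust -f journeys.py --headless -u 500 -r 25 --run-time 30m
    # where -u sets peak virtual users and -r the ramp-up rate; varying these
    # (and the duration) realises the normal, peak, stress and soak scenarios.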

2. Test Data: The Unsung Hero of Performance Testing

Test data is a critical, yet often underappreciated, component of performance testing. Many organisations we engage with are taken aback by the sheer volume and intricacy of data required to conduct meaningful performance tests. Here's why it's so pivotal:  

  • Realism Through Volume: to simulate real-world conditions accurately, it's not sufficient to use just a handful of user accounts. A unique account is needed for each virtual user to mirror all the concurrent interactions that occur in production, ensuring that our tests genuinely reflect the varied user behaviours and interactions the application will encounter (a data-generation sketch follows this list).
  • Depth and Diversity of Data: Each script we develop to emulate user transactions is backed by data representing a wide range of possible inputs. We don't just need a record for each user interaction; we need distinct data sets for every iteration. Having a database stocked with a representative number of records also contributes to the accuracy of the tests.  
  • The Challenge of Single-Use Data: Often, the data we use in testing can be single-use, meaning once a virtual user performs a transaction, the data cannot be reused in its existing state. To overcome this, we’ve employed functional automation tools to replenish or reset data, ensuring that each test is as authentic and informative as the first.
  • Data Management Strategies: Effective data management is central to our performance testing regime. We've honed the practice of backing up data when it's in the 'right state', enabling us to reset the testing environment quickly and efficiently for multiple test runs. This practice saves significant time and resources, allowing for repeated testing without the need to recreate test data from scratch.
  • Preserving Data Integrity: We treat data with the utmost care to maintain its integrity throughout the testing process. This involves establishing protocols for data handling, storage, and backup, ensuring that the test data remains a reliable asset for the duration of the testing activities.  
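To make the volume point concrete, here is a hypothetical sketch of pre-generating a unique account per virtual user, with distinct data for every iteration, for a CSV-driven test harness; the column names, counts and credential format are all illustrative.

    # Sketch: generating unique per-user, per-iteration test data as CSV.
    import csv
    import uuid

    USERS = 500       # one distinct account per virtual user
    ITERATIONS = 20   # distinct data for every iteration of every user

    with open("accounts.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "password", "order_ref"])
        for u in range(USERS):
            for _ in range(ITERATIONS):
                writer.writerow([f"loaduser{u:04d}", "S3cret!",
                                 f"ORD-{uuid.uuid4().hex[:10]}"])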

3. The Importance of Correlation

Correlation in performance testing is the process of ensuring that dynamic values, such as session IDs and security tokens, are captured and correctly used throughout the test to mimic the behaviour of real users. This is fundamental for achieving accurate and meaningful test results, as it guarantees that each virtual user interacts with the application in a unique way, just as they would in a live environment.  

Without proper correlation, performance tests can yield misleading outcomes. For instance, an application might appear to handle load exceptionally well, but this could be due to all virtual users being unintentionally funnelled through a single session, thus not truly testing the application’s capacity to manage concurrent, independent interactions.   

We place significant emphasis on sophisticated correlation. By meticulously handling dynamic data, we ensure that each simulated user's journey is as close to reality as possible. This includes the correct passing of session-related information from one request to the next, mirroring the stateful nature of human interactions with the application.  
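As a minimal, hypothetical sketch of the mechanism (commercial tools such as LoadRunner and NeoLoad, and JMeter's extractors, do this declaratively), here is correlation expressed in plain Python: a dynamic token is captured from one response and replayed on the next, so each virtual user carries its own session state. The endpoints and token field are assumptions.

    # Sketch: capturing and replaying a dynamic value (correlation).
    import requests

    with requests.Session() as s:  # keeps this user's cookies / session IDs
        login_page = s.get("https://app.example.com/login")
        token = login_page.json()["csrf_token"]  # captured dynamic value

        s.post(
            "https://app.example.com/login",
            data={"user": "loaduser0001", "password": "S3cret!",
                  "csrf_token": token},  # replayed on the next request
        )
        # Subsequent requests reuse the correlated session, as a real user would.
        s.get("https://app.example.com/dashboard")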

The attention to detail in correlation also extends to the adaptability of the test scripts. As applications evolve, so do the patterns of dynamic data. Our scripts are designed to be robust yet flexible, accommodating changes in application behaviour without compromising the integrity of the test.  

Correlation is not just a technical requirement; it's a commitment to authenticity in performance testing. By mastering this, we provide our clients with the confidence that the performance insights we deliver are both precise and applicable, ensuring that when an application goes live, it performs as expected, without surprises.  

4. Performance Engineering: Shift Left  

Performance Engineering is a proactive approach to ensuring software performance that goes beyond traditional testing to integrate performance considerations into every phase of the development lifecycle, especially within agile environments.   

Performance engineering isn't confined to testing; it's woven into the fabric of the development process. From design and architecture to coding and deployment, performance is a key consideration, ensuring that the application is robust and responsive from the ground up. By integrating performance engineering within agile development pipelines, we ensure continuous performance feedback and improvement. This integration allows performance metrics to influence design decisions in real-time, fostering an environment where performance is as prioritised as functionality.  

We use infrastructure as code (IaC) to set up and manage environments in a way that's repeatable and scalable. This practice ensures that our performance testing environments are consistent with production, leading to more reliable results. Within our CI/CD pipelines, we implement automated gates that assess performance. Code changes that do not meet our stringent performance benchmarks are automatically flagged, ensuring high standards are maintained.  
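By way of illustration, such a gate can be as simple as the following sketch; the budget, metric name and results-file format are assumptions, not a description of any specific pipeline.

    # Sketch: a CI step that fails the build when p95 latency breaches budget.
    import json
    import sys

    P95_BUDGET_MS = 800

    with open("perf_results.json") as f:  # produced by the load-test stage
        p95 = json.load(f)["p95_ms"]

    if p95 > P95_BUDGET_MS:
        print(f"FAIL: p95 {p95} ms exceeds budget {P95_BUDGET_MS} ms")
        sys.exit(1)  # a non-zero exit fails the CI job
    print(f"PASS: p95 {p95} ms within budget {P95_BUDGET_MS} ms")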

The shift-left strategy means performance testing is incorporated earlier in the development cycle. This approach helps to identify potential performance issues before they become costly to fix, reinforcing the efficiency of the development process. In line with agile principles, we establish continuous monitoring and feedback mechanisms. These provide ongoing insights into the application’s performance, enabling quick refinements and helping to avoid performance regressions.  

Performance engineering is a discipline that ensures software is designed for optimal performance. By embedding it into the agile development pipeline, we create applications that not only function as required but do so with the resilience and speed that modern users demand.  

5. Reporting and Analytics: Matching Results Against KPIs  

In performance testing, reporting and analytics are not merely about generating data; they're about delivering clarity and ensuring results align with key performance indicators (KPIs). Our reports are crafted to align the results of performance tests with predefined KPIs, which can range from page load times and transaction response times to concurrency levels and resource utilisation. Matching results against these benchmarks ensures we're not just collecting data but actively measuring success against business objectives.

The 95th and 99th percentile measurements provide nuanced insights into application performance under stress beyond what average response times can show. By focusing on these percentiles in our KPIs, we're targeting the experiences of nearly all users, ensuring that the application meets performance standards even at its peak. Showing these important measures in a visual form using charts and graphs always goes down well and helps decision-making.  
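For example, Python's statistics module can derive these percentiles from raw response-time samples; the numbers below are made up, and show how a single slow outlier drags the tail percentiles (and the user experience they represent) far above what the average alone suggests.

    # Sketch: 95th/99th percentile response times vs the average.
    import statistics

    samples_ms = [120, 130, 125, 140, 135, 128, 2200, 131, 127, 138]

    cuts = statistics.quantiles(samples_ms, n=100)  # 99 percentile cut points
    print("average:", statistics.mean(samples_ms))  # ~337 ms, looks healthy
    print("p95:", cuts[94])  # 95th percentile, ~1067 ms
    print("p99:", cuts[98])  # 99th percentile, ~1973 ms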

Reporting and analytics in performance testing are about translating data into business intelligence. By ensuring our reporting is aligned with KPIs, we turn performance testing into a strategic asset, driving continuous improvement and operational success.  

We're passionate about performance testing and have an excellent UK team. Our clients are often repeat customers; there is real value in what we do. It's no exaggeration to say that every performance test we've run has picked something up that resulted in a better, faster, more resilient software application once the problems were fixed. Database indexes, licensing caps, load balancer configurations, non-optimised code, over-complicated reporting queries: we've seen it all.

Contact us for a no-obligation quotation or just some advice on what might be needed.  


Jonathan Binks - Head of Delivery Prolifics Testing UK


Software Testing Client Project Case Study

Apr 21 • Case Studies

We are often asked what software testing is, so we thought a software testing project case study might help explain what software testers do on a typical day. This includes testing software, writing requirements documents for our clients, and creating user guides that clients can use for quality assurance and auditing purposes.

Iterators LLC was hired to complete accessibility testing for several projects for the Library of Congress (LOC). Accessibility testing is required on all government websites, using Section 508 and WCAG 2.2 requirements. To become a Trusted Tester, an employee must complete the Department of Homeland Security (DHS) Trusted Tester online training course and pass the DHS Section 508 Trusted Tester Certification Exam, so we are in a unique position to help on this project. We cross-train all our employees so that we can work on several projects, or several aspects of one project, at a time to complete the work and reduce cost to our clients.

Our first project assigned by LOC was testing their new braille feature on BARD Mobile for Android. We were tasked with testing the braille feature with several refreshable braille displays.

During our testing, we used the Orbit Reader 20 and two Freedom Scientific displays, the Focus 14 and Focus 40. There are plans to use other refreshable displays, such as those from HumanWare, but this testing has not occurred yet. We needed to test the refreshable braille displays in tandem with Google BrailleBack and Google TalkBack.

This work ensured that all the hardware worked as expected with the apps under test. It required functional testing, smoke testing, and exploratory testing, plus a user panel to make sure we caught all the issues a visually impaired individual might experience while using the app.

Initially, our client was unsure whether we would find any bugs and was hesitant to have us enter bugs into Bugzilla, stating the software was "complicated". Bugzilla is a web-based, general-purpose bug tracking system, not unlike other tools we use every day such as Jira, TestRail, PractiTest, and ClickUp.

Testing was completed over several agile sprints, with many significant bugs found. We tested against the National Library Service requirements document. Next, we had to create an up-to-date user manual: while the manual had been updated several times, it had not been re-tested against the app.

For example, when downloading a book or magazine from the Now Reading section of the mobile app, the downloaded item would end up at the bottom of the page, yet for years the user guide had stated that it appeared at the top of the page once downloaded. Our testing team flagged this documentation error on several occasions; the user document was corrected, and the issue was sent to the development team to fix per the requirements document.

Over the next several months, we reported 30 high-priority bugs with about half fixed at this point. We have encouraged our client to test in an agile fashion because once the development team is finished, it’s harder to get these bugs fixed.

Our bugs were reported against the requirements document used to create the software. Lastly, the user guide had to be rewritten to reflect the app's actual behavior and general updates.

Once the app was tested and the user guide drafted, the guide was sent to Communication Services (COS) to ensure its style matched the other requirements documentation, for example, how the library determines what the Most Popular Books are and over what period. The document had to be approved before being disseminated to the public.

Once the document was returned from COS, the PDF had to be remediated. Remediation means tagging the PDF, creating the document's headings, adding alt text to meaningful images, and either marking decorative images as artifacts or removing them from the digital document altogether.

Once the remediation process is complete and validated, the document becomes ADA-compliant. We then provide an accessible PDF that can be read with the use of a screen reader and create the HTML output so that the document can be added to the Library of Congress website.

You can find the current user guide we completed here: https://www.loc.gov/nls/braille-audio-reading-materials/bard-access/bard-mobile-android/#creatingfolders3.3

Case studies can be a great learning tool in software testing and project management. By looking at project case study examples, you can see how a project was planned and executed and how particular tasks were managed, which gives a better understanding of what software testing involves day to day. With the right examples, software testers can hone their skills, improve project performance, and ultimately deliver better results.

Related Resources:

  • Crafting an Effective Test Plan: A Step-by-Step Guide
  • Top Test Management Tools
  • Mobile Application Functional and Performance Testing

About the Author

Jill Willcox has worked on accessibility issues for most of her professional career. Iterators is an inclusive women-owned small business (WOSB) certified by the Small Business Administration and WBENC. We provide software testing services for websites, mobile apps, enterprise software, and PDF remediation services, rendering PDFs ADA compliant.



Performance Test Plan

The Performance Test Plan addresses this particular type of testing and the conditions under which it runs. As with the general Test Plan, the Performance Test Plan should always reflect the real state of the project.

The Performance Test Plan should cover the following areas:

  • entry and exit criteria;
  • environment requirements, along with dependencies and constraints, load injectors, and test tools used in the process of testing;
  • the performance testing approach (including target volumes, the selected number of users and data to be loaded with, assertions, and load profiles);
  • performance testing activities (including test environment build state, use-case scripting, test scenario building, test execution and analysis, and reporting).

Below you can find an example of a performance test plan prepared by the QATestLab team for performance testing of a popular social network framework; a short sketch of expressing a plan's load profile as data follows it.

(Image: QATestLab performance test plan example)
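In the same spirit, the plan's target volumes, load profile and assertions can also be captured as data for a test harness to consume. The following sketch is purely illustrative; none of the names or numbers come from the QATestLab plan.

    # Sketch: a load profile and its pass/fail assertions expressed as data.
    LOAD_PROFILE = {
        "virtual_users": 1000,
        "ramp_up_minutes": 15,
        "steady_state_minutes": 60,
        # relative weight of each scripted user journey
        "user_journeys": {"browse_feed": 0.6, "post_message": 0.3,
                          "edit_profile": 0.1},
        # assertions evaluated after the run
        "assertions": {"p95_response_ms": 1000, "max_error_rate": 0.01},
    }

    # the journey weights should describe the whole load
    assert abs(sum(LOAD_PROFILE["user_journeys"].values()) - 1.0) < 1e-9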
