The 3 Ephemeral Computing Skills Every Performance Test Engineer Needs to Have

This is a guest posting by Bob Reselman

Ephemeral computing is the practice of creating a virtual computing environment as a need arises and then destroying that environment when the need is met, and the resources are no longer in demand. You pay only for what’s used when it’s used. The value proposition of ephemeral computing is hard to ignore.

Yet, for all the benefit ephemeral computing provides, it does come with risks.

A while back I was working on a project designed to run a complex set of algorithms for a short time and then send the results of the work on to interested parties. It was a perfect use of ephemeral computing. So that's what I did: I spun up the computing environment in the cloud, injected the code, tested it, and then destroyed the computing environment. I did the work manually on Google Cloud. Everything worked as planned.

Then one day I dropped the ball. I neglected to delete the testing environment when the test had completed. In fact, unbeknownst to me, I let it run for the remainder of the month. At the end of the month, I got a bill from Google for around $40. I was a bit shocked. My usual bill for my proof-of-concept experiments runs about $4. It turns out that forgetting to shut down the computing session had incurred the unanticipated expense.

Now, this is not a lot of money, but that’s not the point. The ephemeral computing scenario I was implementing was a proof of concept for a large system that would need an hour of supercomputing horsepower to do the job. Had we moved out of proof of concept and into real-world implementation, my little oversight would have cost about $20,000, which is the monthly price of a virtual machine on AWS powered by four Intel Xeon E7 8880 v3 processors, offering up to 128 vCPUs and 3,904 GiB of DRAM-based memory. That’s real money.

Fortunately, I learned my lesson about ephemeral computing. It's a great approach, particularly suited to performance testing scenarios that need computing capabilities that emulate production environments. However, using ephemeral computing blindly can incur unanticipated costs that could otherwise be avoided.

What I learned is that I really needed three essential skills if I wanted to use this technology safely. Anyone taking advantage of ephemeral computing should know how to:

  • Calculate the cost of scale
  • Script ephemeral computing sessions
  • Use orchestration technology

Let’s examine each of these skills.

Calculate the Cost of Scale


The minute you spin up an ephemeral environment, the meter is running. The longer the meter runs, the more money you spend, whether you intend to or not. Unless you want a grand surprise at the end of the month from your cloud provider, it's a good idea to know the cost of your computing environments before you start.

You’ll do well to get in the habit of using a cost calculator to estimate the actual runtime expense of an intended ephemeral computing environment. All the popular cloud services provide such a calculator. AWS, Google Cloud, and Azure each have one. These cost calculators should be the go-to tool in your cloud computing toolbox.

A cost calculator allows you to create one or many environment configurations. Once you set your intended configurations, the cost calculator will report the projected cost over a period of time, usually a month. (See figure 1.)


figure 1: Examples of cost calculators: (1) AWS, (2) Google Cloud, (3) Azure

Using a cost calculator as a matter of habit before starting work will go a long way toward preventing the end-of-the-month billing surprises common in too many companies.
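
The arithmetic behind such an estimate can be sketched in a few lines. The hourly rate and instance counts below are purely illustrative assumptions; always confirm the real rates in your provider's calculator.

```python
# Back-of-the-envelope cost estimate for an ephemeral environment.
# The $0.80/hour rate for a hypothetical 16-vCPU instance is an
# assumption for illustration only.

def estimate_cost(hourly_rate, instances, hours):
    """Projected cost of running `instances` machines for `hours` each."""
    return hourly_rate * instances * hours

per_test_run = estimate_cost(0.80, instances=4, hours=2)       # one 2-hour test on 4 VMs
forgotten_month = estimate_cost(0.80, instances=4, hours=720)  # left running for 30 days

print(f"Planned test run: ${per_test_run:.2f}")
print(f"Forgotten for a month: ${forgotten_month:.2f}")
```

The gap between the two numbers is the whole point: the planned run costs a few dollars, while the forgotten environment costs thousands.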

Script Ephemeral Computing Sessions


Manually provisioning an ephemeral computing environment is an easy first step for getting a feel for the practice, but it's also a hazard. Just look at the oversight I described previously: I spun up an environment and forgot to destroy it. That little lapse of memory cost money and yielded no benefit. Such mishaps are not unusual when manually provisioning ephemeral environments.

Humans are wonderfully creative when it comes to design, yet frightfully unreliable when it comes to doing repetitive tasks. Scripting solves the problems that go with repetition. The machine will do what it’s told to do, the same way, every time. When you find yourself doing the same provisioning tasks more than twice, it’s probably a good time to script the process.

The pattern for ephemeral computing is relatively simple in terms of concept: create the environment, inject the code, test the code and destroy the environment. (See figure 2.)

figure 2: Creating an ephemeral environment and testing against it has become a standard design pattern.

Of course, the devil is in the details. Creating computing instances in a script typically requires the use of tools from a particular cloud provider. For example, to create a virtual machine in Google Cloud, you'll need to use the Google Cloud SDK. The same is true for Amazon Web Services, Azure, and other public cloud providers. Once the virtual machines are created, you can write a script that accesses the remote machine using a standard tool such as secure shell (SSH) on Linux or WinRM on Windows. Then, it's a matter of running scripted commands via remote access. Such scripts can include setting up the application and dependencies, as well as running tests. Once the tests have run or the computing need has been met, you'll use the appropriate SDK to destroy the computing environment.
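
As a concrete sketch, here is how a script might drive the `gcloud` CLI to create and delete a Google Cloud VM. The instance name, zone, and machine type are illustrative assumptions, and `DRY_RUN` prints the commands instead of executing them.

```python
import subprocess

# Sketch of scripting a VM's lifecycle with the gcloud CLI.
# Set DRY_RUN = False only if you actually want to provision (and pay for) a VM.
DRY_RUN = True

def gcloud_create(name, zone="us-central1-a", machine_type="e2-standard-4"):
    # Builds (but does not run) the gcloud command to create an instance.
    return ["gcloud", "compute", "instances", "create", name,
            f"--zone={zone}", f"--machine-type={machine_type}"]

def gcloud_delete(name, zone="us-central1-a"):
    # --quiet suppresses the interactive confirmation prompt.
    return ["gcloud", "compute", "instances", "delete", name,
            f"--zone={zone}", "--quiet"]

def run(cmd):
    if DRY_RUN:
        print(" ".join(cmd))
    else:
        subprocess.run(cmd, check=True)

run(gcloud_create("perf-test-vm"))
# ... SSH in, set up dependencies, run the test suite, collect results ...
run(gcloud_delete("perf-test-vm"))
```

Because the create and delete calls live in the same script, the teardown step can't be forgotten the way it can in a manual session.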

Learning to script an ephemeral environment can take some time, particularly if you don't have the basic programming skills required to work with a given cloud service via an SDK. But once you've climbed the learning curve, there are significant efficiencies to be realized. The good news is that there are a number of provisioning tools and technologies available, such as Ansible, Chef, and Puppet, that alleviate some of the burden of scripting an ephemeral environment.

Use Orchestration Technology

Scripting is good, but it can be extraordinarily time-consuming, particularly when it comes to creating a complex computing environment that has a lot of components and dependencies. Orchestration makes things easier.

Whereas scripting is about the recipe, orchestration is about the cake. Essentially, orchestration is defining a system and then telling an orchestration technology, such as Kubernetes, Docker Swarm, or Mesosphere, to go make it. (See figure 3.)

figure 3: Scripting creates an ephemeral environment procedurally, while orchestration creates one declaratively.

At a conceptual level, a scripted approach consists of defining and following a series of steps to create a computing environment. With an orchestrated approach, an engineer defines the characteristics of a system. This characterization is called the system state. Once the state of the system is defined, the orchestration technology not only creates the system according to the state definition but also ensures that the state is maintained. This means that if for some reason one of the resources in the computing environment fails, the orchestrator will replenish the resource automatically.
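
The declarative idea can be illustrated with a toy reconcile loop. Real orchestrators such as Kubernetes run this kind of loop continuously against live infrastructure; this sketch is purely conceptual, with invented service names and replica counts.

```python
# Toy illustration of orchestration's declarative model: declare a desired
# state, and let a reconcile loop keep the actual state matching it.

desired_state = {"web": 3, "worker": 2}  # replicas we want per service (hypothetical)

def reconcile(desired, actual):
    """One reconciliation pass: converge `actual` toward `desired` and return it."""
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            print(f"{service}: starting {want - have} replica(s)")
        elif have > want:
            print(f"{service}: stopping {have - want} replica(s)")
        actual[service] = want
    return actual

# Suppose one web replica has failed:
actual_state = {"web": 2, "worker": 2}
reconcile(desired_state, actual_state)
print(actual_state)  # converged back to the declared state
```

Note that the engineer never says *how* to recover from the failure; declaring the desired state is enough, and the loop restores it automatically.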

Orchestration puts containers front and center in the ephemeral computing landscape. Containers are just as independent as virtual machines, but they are more lightweight, are faster to deploy, and make more efficient use of a host’s computing resources. One VM can host tens or even hundreds of containers. Containers also provide more granularity around a particular computing need. In fact, containers are a driving force behind many microservice architectures.

The benefit of using container-based orchestration over simple provisioning is that the system is extremely reliable operationally. Also, the orchestration technology takes care of the plethora of details that go with creating, maintaining, and destroying a complex system. All the human needs to do is define the system. The orchestration technology makes it.

But, as with any new approach, there is a learning curve. The learning curve for a technology such as Kubernetes can be daunting, but overcoming it is time well spent. Had I used an orchestration technology in the project I described previously, I’d still have to write a script, but the scope of the script would have been limited to creating the VMs to host the containers, running the orchestrator, executing my work and then destroying the VM hosts. Yes, the design pattern is the same as the one I described for scripting above in figure 2, but using orchestration would have simplified the process considerably.

Putting It All Together


Ephemeral computing really is the future of computing. As such, it’s going to become more important for testing and test practitioners. Just as application developers and system admins are becoming adept at being good stewards of the enterprise’s digital resources, so too will those whose responsibility it is to ensure that those resources work as intended. For the modern test practitioner, having a good operational grasp of the basics of ephemeral computing is not simply a skill that’s nice to have — it’s essential.

Article by Bob Reselman, a nationally known software developer, system architect, industry analyst, and technical writer/journalist. Bob has written many books on computer programming and dozens of articles about software development technologies and techniques, as well as the culture of software development. He is a former Principal Consultant for Cap Gemini and Platform Architect for the computer manufacturer Gateway. In addition to his software development and testing activities, Bob is writing a book about the impact of automation on human employment. He lives in Los Angeles and can be reached on LinkedIn at www.linkedin.com/in/bobreselman.
