Posts

The cloud IaaS market is a diverse and dynamic landscape characterised by constant upheaval. Here we present news and updates related to the IaaS market and our research in this space.

Smart CloudMonitor - Cross-Layer Monitoring of Cloud-based Applications

Post Date: 15/02/2016

RightScale has recently released its 2016 State of the Cloud Report, which identifies cloud cost optimization as one of the main challenges that organizations face today. While monitoring and rightsizing instances is the most common approach to cost optimization, most monitoring tools focus either on resource-level monitoring (e.g. CPU, memory, disk) or application-level monitoring (e.g. response time, throughput, error rate) and tend to ignore the intermediate layers. But the cloud is a multi-layer service model comprising infrastructure, platform and software, and the performance of a cloud-based application can vary for a number of reasons: changes in user workload, or issues at the software, platform or infrastructure layer.

Therefore, there is a need for a holistic monitoring solution that monitors across the different layers and correlates the monitored data, helping to identify the cause of performance degradation and support quality assurance. At Swinburne, we are currently building a multi-layer cloud performance monitoring tool by extending Smart CloudMonitor. In addition to monitoring resource consumption and application performance, it monitors the intermediate components at the PaaS layer on top of which the application is deployed, giving a more accurate picture of how the cloud-based application is performing.

The monitoring dashboard below shows resource consumption at the VM and JVM levels, alongside performance at the application level. The VM CPU and server CPU consumption starts close to zero, then increases rapidly until it hits 100%, at which point the server temporarily stops responding.
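
As a rough illustration of what cross-layer collection involves, here is a minimal Python sketch that samples all three layers against a shared timestamp. This is not Smart CloudMonitor's actual implementation; the Jolokia and health-check endpoints are hypothetical stand-ins.

```python
import time
import psutil    # VM-level metrics
import requests  # JVM- and application-level probes over HTTP

# Hypothetical endpoints for illustration only.
JVM_METRICS_URL = "http://localhost:8778/jolokia/read/java.lang:type=Memory"
APP_HEALTH_URL = "http://localhost:8080/app/health"

def sample():
    """Collect one timestamp-aligned sample across the three layers."""
    ts = time.time()
    vm_cpu = psutil.cpu_percent(interval=1)                # infrastructure layer
    jvm = requests.get(JVM_METRICS_URL, timeout=5).json()  # platform (JVM) layer
    heap_used = jvm["value"]["HeapMemoryUsage"]["used"]
    t0 = time.time()
    requests.get(APP_HEALTH_URL, timeout=5)                # application layer
    app_rt_ms = (time.time() - t0) * 1000
    return {"ts": ts, "vm_cpu": vm_cpu,
            "jvm_heap_used": heap_used, "app_rt_ms": app_rt_ms}

while True:
    # In practice the samples would be pushed to a time-series store,
    # where the shared timestamp lets the layers be correlated.
    print(sample())
    time.sleep(30)
```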

Figure: Cross-layer monitoring - Smart CloudMonitor

Global IaaS Market - Is it a case of the survival of the richest?

Post Date: 20/05/2015

The cloud IaaS market is in a constant state of upheaval, and the competitive landscape is shifting as many service providers fail to compete with the Big Three. While global spending on IaaS is expected to reach US $16.5 billion in 2015 (an increase of 32.8 percent from 2014), the Gartner Magic Quadrant for Cloud Infrastructure as a Service 2015 shows that only a handful of global providers with the financial resources to invest and compete seem to be maintaining or growing their share of the market. Three players are pushing for market domination: Amazon Web Services, Microsoft Azure and Google Compute Engine, with perhaps IBM SoftLayer chasing them. If the trend of the past three to four years continues, these providers will completely dominate the IaaS market, with other providers either discontinuing their services or changing their business strategy to become niche players.

Figure: Gartner Magic Quadrant for Cloud Infrastructure as a Service (Last 3 years)

Is cloud infrastructure a commodity yet?

Post Date: 09/04/2015

A key objective of the Deutsche Börse Cloud Exchange is to offer a spot market where organizations can trade computing resources just like other commodities such as oil and electricity. While mainstream IaaS providers offer differing cloud server specifications, the Deutsche Börse Cloud Exchange offers the same server specifications across all participating providers, making them directly comparable on paper. However, a key question remains: is making the server specifications identical sufficient to make cloud infrastructure a commodity?

Recent tests that we conducted on "similar" servers from three different providers participating in the Deutsche Börse marketplace show that cloud infrastructure cannot yet be considered a commodity. Here's why.

We ran tests on three standard medium servers from three providers using Smart CloudBench. For each provider, we ran two scenarios. In Scenario 1, we co-located the Test Agent (TA) (which was also a standard medium server instance) with the System Under Test (SUT) to minimize the impact of the network on application performance. In Scenario 2, we located the Test Agent in the Nectar cloud infrastructure (at the Tasmanian Data Center).

Table I: Price and Configuration of Benchmarked Servers

ID               Provider    Memory (GB)  vCPU  Storage (TB)  Price (Euro/hr)
Standard-Medium  Provider A  8            4     0.5           0.5545
Standard-Medium  Provider B  8            4     0.5           0.4775
Standard-Medium  Provider C  8            4     0.5           0.4475

We subjected the SUT to a consistent load of 500 concurrent users accessing a transactional web application deployed on the SUT. The tests were run every 30 minutes for a period of 96 hours. Given that the configurations of the servers are exactly the same, one would expect their performance to be the same or similar. However, the results show otherwise.
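
For readers curious how such a schedule can be driven, here is a minimal sketch, not the actual Smart CloudBench harness, of issuing 500 concurrent requests every 30 minutes and recording the average response time. The SUT URL is a hypothetical placeholder.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

SUT_URL = "http://sut.example.com/app"  # hypothetical SUT address
USERS = 500                             # concurrent users per run
RUN_INTERVAL_S = 30 * 60                # one run every 30 minutes
TOTAL_RUNS = 96 * 2                     # 96 hours of half-hourly runs

def one_request(_):
    """Single emulated user request; returns response time in ms, or None."""
    t0 = time.time()
    try:
        requests.get(SUT_URL, timeout=30)
        return (time.time() - t0) * 1000
    except requests.RequestException:
        return None

def run_once():
    """Fire USERS concurrent requests and return the average response time."""
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        times = [t for t in pool.map(one_request, range(USERS)) if t is not None]
    return statistics.mean(times) if times else float("nan")

for run in range(TOTAL_RUNS):
    print(f"run {run}: avg response time {run_once():.1f} ms")
    time.sleep(RUN_INTERVAL_S)  # approximately every 30 minutes
```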

Figure I: Performance Results from Scenario 1
Figure II: Performance Results from Scenario 2

Key Observations:

  • Although the three servers had identical configurations, their performance in the conducted experiments differed.
  • The server provisioned by Provider B showed the worst performance in both scenarios, with increasing deviation in response time, while the other two servers had very low and consistent response times.
  • CPU consumption was also higher on the server provisioned by Provider B.

Conclusion: If three identical servers from three different providers return different results when subjected to exactly the same workload conditions, cloud infrastructure cannot yet be considered a commodity.

Beta-trial of Deutsche Börse Cloud Exchange

Post Date: 25/11/2014

The IAT Smart Cloud Marketplace team has been invited to participate in the beta-testing of the Deutsche Börse Cloud Exchange, a marketplace for cloud resources where buyers and sellers can trade computing resources on a single platform. We will post about our experiences using the Deutsche Börse Cloud Exchange in the near future.

Does cloud server location affect performance?

Is there a difference in performance when the client and server are located in different geographic locations? We conducted a simple experiment on Amazon EC2: we first placed the test agents (TAs) in the Sydney data centre and tested system under test (SUT) servers located in Sydney and California; we then placed the TAs in the California data centre and repeated the tests against SUT servers in Sydney and California. Watch the video below to see the test results.
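
As a rough sketch of how such a latency comparison can be scripted from a TA, one can time repeated HTTP round trips to each region. The endpoints below are hypothetical stand-ins for the Sydney and California SUTs.

```python
import time

import requests

# Hypothetical endpoints standing in for the Sydney and California SUTs.
SUTS = {
    "sydney": "http://sut-sydney.example.com/health",
    "california": "http://sut-california.example.com/health",
}

def median_rtt_ms(url, samples=10):
    """Median HTTP round-trip time from this TA to one SUT, in milliseconds."""
    times = []
    for _ in range(samples):
        t0 = time.time()
        requests.get(url, timeout=30)
        times.append((time.time() - t0) * 1000)
        time.sleep(1)
    times.sort()
    return times[len(times) // 2]

# Run once with the TA in Sydney, once with it in California.
for region, url in SUTS.items():
    print(f"TA -> {region} SUT: {median_rtt_ms(url):.1f} ms")
```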

How reliable are virtual cloud servers?

Post Date: 17/05/2014, Test Date: 30/01/2014

If you have a number of dedicated servers with a specific configuration, you would expect them to offer the same level of performance when repeatedly subjected to the same workload. But can we expect the same behaviour from virtual cloud servers? We ran a very simple experiment on a leading cloud provider: we tested three identical virtual cloud servers running in the same region and belonging to the same subnet, subjecting them to the same workload over a period of 12 hours. Given that the servers were identical (they had the same server specification, were located in the same geographic location and in the same subnetwork), one would expect them to exhibit similar performance over time. But the results show otherwise.

Table I: Price and Configuration of Benchmarked Server

ID             Category  Memory (GB)  vCPU  Price (USD/hr)
n1-standard-4  Standard  15           4     0.415
Figure I: Average Response Time (in milliseconds)
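
One simple way to quantify the (in)consistency such a figure shows is the coefficient of variation of each server's response times. The sketch below uses illustrative placeholder values, not our measured results.

```python
import statistics

# Illustrative placeholder values (avg response time in ms per test run);
# NOT measured results from the experiment above.
runs = {
    "server-1": [120, 118, 125, 119],
    "server-2": [121, 340, 130, 610],
    "server-3": [119, 122, 117, 124],
}

for server, rts in runs.items():
    mean = statistics.mean(rts)
    cov = statistics.stdev(rts) / mean  # coefficient of variation
    print(f"{server}: mean={mean:.0f} ms, CoV={cov:.2f}")

# A CoV near zero indicates dedicated-server-like consistency;
# a large CoV flags an unreliable virtual server.
```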

Measuring resource consumption during application benchmarking on cloud infrastructure

Post Date: 17/05/2014, Test Date: 29/01/2014

Performance benchmarking at the application layer can provide an estimate of how a particular type of application performs when hosted on cloud infrastructure. However, it does not indicate whether the server is over-provisioned or under-provisioned. Smart CloudBench supports multi-layer performance monitoring, which gives greater insight into the implications of infrastructure capabilities for application workloads.

We conducted simple benchmark tests on three different types of servers on a leading IaaS provider using an enterprise application benchmark and measured the resource consumption during the tests. The results are plotted in the graphs below:

Table I: Price and Configuration of Benchmarked Servers

ID             Category  Memory (GB)  vCPU  Price (USD/hr)
n1-standard-4  Standard  15           4     0.415
n1-highmem-4   High Mem  26           4     0.488
n1-highcpu-4   High CPU  3.6          4     0.261

Figure I: Average Response Time (in milliseconds)

Key Observations:

  • We can see the relationship between the application workload and the corresponding CPU/RAM utilization: resource consumption increases with increasing workload.
  • All three servers, which have the same number of vCPUs allocated to them, show comparable CPU consumption patterns.
  • We can estimate the RAM requirements of the benchmarked application. It is clear that the High Mem and Standard servers have a lot of unused memory for which the user has to pay (see the sketch below).
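
To make the third observation concrete, here is a minimal sketch of the rightsizing arithmetic using the list prices from Table I. The application's peak RAM usage is an assumed figure, and attributing cost in proportion to the memory share is a deliberately crude heuristic, not how providers actually price memory.

```python
# Memory sizes and prices come from Table I above; peak_used_gb is an
# assumed application peak for illustration only.
PROVISIONED_GB = {"n1-standard-4": 15, "n1-highmem-4": 26, "n1-highcpu-4": 3.6}
PRICE_PER_HR = {"n1-standard-4": 0.415, "n1-highmem-4": 0.488, "n1-highcpu-4": 0.261}
peak_used_gb = 3.2  # assumed peak RAM used by the benchmarked application

for server, total_gb in PROVISIONED_GB.items():
    unused_gb = max(total_gb - peak_used_gb, 0)
    wasted_share = unused_gb / total_gb  # fraction of paid-for memory left idle
    print(f"{server}: {unused_gb:.1f} GB unused, "
          f"~{wasted_share * PRICE_PER_HR[server]:.3f} of {PRICE_PER_HR[server]} USD/hr")
```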

While this simple example shows the benefits of measuring resource consumption during application benchmarking on cloud infrastructure, there are additional benefits to aggregating application performance with infrastructure performance. These include (a) identifying over-provisioned and under-performing resources, (b) identifying resource issues that affect application performance, (c) understanding resource consumption patterns under different workloads, and (d) estimating the reliability of cloud infrastructure. Aggregated performance results will be discussed in a future post.

Can you spot the inconsistency in the performance results?

Post Date: 18/10/2013, Test Date: 28/08/2013

We recently conducted performance benchmarking of a large IaaS provider in Australia and got some very interesting results. We selected three large servers in the Sydney region and ran workloads of 500 and 1000 concurrent clients against the TPC-W benchmark application. The tests ran continuously for 5 days, starting on Friday, 23/08/2013 at 6 pm and finishing on Wednesday, 28/08/2013 at 6 pm. Can you spot the inconsistency in the results?

Table I: Price and Configuration of Benchmarked Servers
ID  Memory (GB)  vCPU  Price (AUD/hr)
S1  8            4     0.629
S2  16           8     1.246
S3  24           8     1.8

Figure I: Average Response Time (in milliseconds)

Figure II: Maximum Response Time (in milliseconds)

Does a more expensive cloud server offer better performance?

Post Date: 10/10/2013, Test Date: 22/04/2013

The objective of this benchmarking exercise was to measure the performance of web/app servers running on the public cloud infrastructure offered by Amazon, Rackspace and GoGrid. We used the TPC-W benchmark application (an enterprise application benchmark simulating an online retail store), in which each user is emulated by a Remote Browser Emulator (RBE) that generates the same HTTP traffic as a real customer using a browser.
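
As a rough sketch of the RBE idea, an emulated browser alternates page requests with randomized think times. This is a simplification, not the TPC-W implementation; the SUT URL and page mix are hypothetical, though TPC-W's RBE does draw think times from a negative exponential distribution.

```python
import random
import time

import requests

BASE_URL = "http://sut.example.com/tpcw"       # hypothetical SUT address
PAGES = ["home", "search", "product", "cart"]  # heavily simplified page mix

def emulated_session(interactions=20, mean_think_s=7.0):
    """One emulated browser: fetch a page, then 'think' before the next.

    Think times are drawn from a negative exponential distribution, as in
    TPC-W's RBE; the browse/order profile here is heavily simplified."""
    for _ in range(interactions):
        page = random.choice(PAGES)
        t0 = time.time()
        requests.get(f"{BASE_URL}/{page}", timeout=30)
        print(f"{page}: {(time.time() - t0) * 1000:.0f} ms")
        time.sleep(random.expovariate(1.0 / mean_think_s))

emulated_session()
```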

Table I: Price and Configuration of Benchmarked Servers

Provider    ID   Instance Name  Memory (GB)  CPU     Storage (GB)  Price
Amazon EC2  S9   c1.xlarge      7            20 ECU  1680          0.98
Amazon EC2  S6   m2.2xlarge     34.2         13 ECU  850           1.12
GoGrid      S12  X-Large        8            8 vCPU  400           0.64
Rackspace   S19  15GB           15           6 vCPU  620           1.08
Table II: Test Setup

Date of Test: 22 April 2013
Workload: 1000 concurrent clients
Measured Metrics:
  • Average Response Time (ART) (in milliseconds)
  • Maximum Response Time (MRT) (in milliseconds)
  • Number of Successful Interactions (SI)
  • Number of Timeouts (TI)

Figure I: Test Results