In today's competitive business world, the responsiveness of business applications plays a vital role in business success. As the volume of business users and transactions grows, responsiveness becomes a concern if the underlying environment cannot support the load. Under persistent load, this can even lead to business downtime as a backlog of requests builds up and servers become chokepoints.
Many organizations are moving applications and workloads to the cloud. This requires them to choose cloud services with specifications that meet their applications’ performance needs. While cloud services provide the essential functionality, an application’s usage patterns place demands on those services, which is why performance parameters need attention. Many cloud providers abstract service parameters behind SLA indicators, but certain performance parameters still need explicit consideration and due diligence. Application performance depends on several layers: application design, application architecture, algorithm efficiency, infrastructure and so on. In this blog, I’ll describe some of the cloud services that are specific to infrastructure and the performance parameters you should consider when provisioning them. I’ll discuss performance considerations for value-added services in my next blog.
Choosing a particular storage service option requires you to consider many parameters: space capacity, expandability, maximum limit, backup facility and retention period. Table 1 lists some of the important parameters that influence storage performance and directly impact the performance of business use cases involving storage interaction—for example, when using OpenText Content Server’s extended file system (EFS) and database storage.
For performance-critical storage, the following parameters must be explicitly provisioned:
For shared network drives, which provide file systems shared across cluster nodes, the parameters described in Table 1 are important. For object storage—used to hold massive volumes of unstructured and less frequently accessed content, typically at a lower price—throughput is the key parameter.
Virtual machine (VM) instance type and limits
Each VM instance type is designed for a particular usage profile: general purpose, storage optimized or compute optimized. In general, application servers are compute-intensive, but memory optimization is also required if the application’s memory footprint is high—for example, for an AppWorks server with more than 50 service containers. Database systems need storage, compute and memory optimization.
When you combine two or more cloud services, the limits associated with a given VM instance type can become performance bottlenecks. For example, the storage performance of a database system will not reach its full potential—even if you pay for faster storage—if the IOPS limit of the VM instance type is lower than that of the storage.
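The effect of such a limit can be sketched with a simple calculation. This is an illustrative example with hypothetical numbers, not figures from any specific cloud provider: the IOPS an application actually sees is capped by whichever limit is lower, the VM instance type's or the storage volume's.

```python
def effective_iops(vm_iops_limit: int, storage_iops: int) -> int:
    """IOPS actually achievable when a VM and a storage volume are combined:
    the lower of the two limits wins."""
    return min(vm_iops_limit, storage_iops)

# Hypothetical example: storage provisioned for 16,000 IOPS behind a VM
# instance type capped at 6,000 IOPS leaves 10,000 paid-for IOPS unused.
print(effective_iops(vm_iops_limit=6_000, storage_iops=16_000))  # -> 6000
```

The same "lowest limit wins" reasoning applies to network throughput limits when pairing an instance type with premium storage tiers.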
In container-based deployments, pay attention to pod size, node VM instance type and the limits associated with Kubernetes services, such as maximum node pool size, maximum pods per node, etc.
Geographical hosting location
The hosting server location should be close to most of your business users. In general, there are more business users than other user types (application admins, infrastructure admins and so on). So you should optimize performance for business users.
When you distribute the servers of an application (such as application servers and database servers) across geographies, you add significant performance overheads to server-to-server communication. This, in turn, increases the overall response time of user transactions. Because of this, keep all servers in a single geographic location.
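A rough estimate shows why server-to-server latency dominates here. The figures below are hypothetical: a typical user transaction triggers many application-server-to-database round trips, so even a modest per-round-trip latency multiplies quickly.

```python
def transaction_time_ms(base_ms: float, round_trips: int, rtt_ms: float) -> float:
    """Estimated transaction response time: server processing time plus
    the accumulated cost of server-to-server round trips."""
    return base_ms + round_trips * rtt_ms

# Hypothetical transaction: 200 ms of processing plus 40 database round trips.
same_region = transaction_time_ms(base_ms=200, round_trips=40, rtt_ms=1)    # -> 240.0 ms
cross_region = transaction_time_ms(base_ms=200, round_trips=40, rtt_ms=60)  # -> 2600.0 ms
```

With servers co-located (about 1 ms round trip), latency is barely noticeable; split across regions (about 60 ms round trip), the same transaction takes over ten times longer.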
Cloud to on-premises integration network
In a hybrid cloud deployment, application services involve orchestration and data exchange between cloud-based and on-premises systems. This typically involves a connection between the cloud and an on-premises data center. Figure 2 depicts an example scenario of an insurance business user viewing a policy document. The policy view time can increase because of cloud-to-on-premises round trip network performance overheads.
A network with low network latency (< 5ms) reduces performance overheads when communicating with on-premises services.
Ensuring adequate network bandwidth allows more parallel application sessions to communicate with on-premises systems, with faster download and upload times.
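These two parameters combine in practice, as a back-of-envelope estimate for the policy-document scenario shows. All values below are hypothetical: download time is the transfer time over each session's bandwidth share plus the latency cost of the protocol's round trips.

```python
def download_time_s(size_mb: float, link_mbps: float, sessions: int,
                    round_trips: int, latency_ms: float) -> float:
    """Estimated time to download a document over a shared cloud-to-on-premises link."""
    per_session_mbps = link_mbps / sessions           # bandwidth share per parallel session
    transfer_s = (size_mb * 8) / per_session_mbps     # megabytes -> megabits, then divide by rate
    latency_s = round_trips * latency_ms / 1000.0     # round-trip overhead of the protocol
    return transfer_s + latency_s

# Hypothetical: 5 MB policy document, 100 Mbps link shared by 20 sessions,
# 10 protocol round trips at 5 ms latency.
print(round(download_time_s(5, 100, 20, 10, 5), 2))  # -> 8.05
```

Note how bandwidth dominates for large documents while latency dominates for chatty, small-payload interactions, which is why both parameters deserve due diligence.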
Some cloud providers offer direct connectivity to on-premises systems—for example, MPLS links or fast fiber optic connectivity—which gives the best performance. Alternatively, you can establish cloud-to-on-premises connectivity over the internet; in that case, your internet subscription should account for the network parameters described above.
Author: Prasad Kukkala, Senior Principal Architect, Professional Services – Center of Excellence, has 21+ years of experience in the performance architecture of large-scale enterprise and cloud systems involving OpenText products globally, with a specific focus on helping customers adopt performance best practices for cloud and enterprise solutions.