
Sunday, July 20, 2014

A Checklist for Architecture & Design Review

Security requirements mostly remain undocumented and are left to the discretion or experience of the architects and developers, leaving vulnerabilities in the application that attackers exploit to target the enterprise's digital assets. Security threats are on the rise and are now treated as a board-level item, since the impact of a security breach is very high and can cause both monetary and non-monetary losses.

One of the key aspects of IT governance is to ensure that the investments made in software assets are optimal and deliver a quantifiable return. It also means that such investments do not introduce risks that could lead to damage. Most of us are well aware that reviews play a key role in ensuring the quality of software assets. In this blog post, I have therefore tried to come up with a checklist for reviewing the architecture and design of a software application.

Since the choice of one design best practice often depends on others, careful trade-off analysis is necessary; trade-off analysis of software quality attributes deserves a detailed discussion of its own. Each checklist item listed here needs further elaboration and the identification of specific practices, which will depend on the enterprise architecture and design principles of the organization.

Deployment Considerations

  • The design references the organization's security policy and complies with it.
  • The application components are designed to comply with networking and other infrastructure-related security restrictions, such as firewall rules and the use of appropriate secure protocols.
  • The trust level with which the application accesses various resources is known and is in line with accepted practices.
  • The design supports the scalability requirements, such as clustering, web farms, and shared session management.
  • The design identifies the configuration and maintenance points, and access to them is manageable.
  • Communication with local or remote components of the application uses secure protocols.
  • The design addresses performance requirements by adhering to relevant design best practices.

Application Architecture Considerations

Input Validation

  • The design identifies all entry points and trust boundaries of the application.
  • Appropriate validations are in place for all input that comes from outside the trust boundary.
  • The input validation strategy adopted by the application is modular and consistent.
  • The validation approach is to constrain, reject, and then sanitize input (see the sketch after this list).
  • The design addresses potential canonicalization issues.
  • The design addresses SQL injection, cross-site scripting, and other injection vulnerabilities.
  • The design applies defense in depth to the input validation strategy by providing input validation across tiers.
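
As a concrete illustration of the constrain-then-reject approach, here is a minimal C# sketch for a single field; the field name, whitelist pattern, and length limits are illustrative assumptions, not prescriptions.

```csharp
// A minimal sketch of whitelist ("constrain") validation for a single field.
using System;
using System.Text.RegularExpressions;

static class UsernameValidator
{
    // Constrain: allow only letters, digits, dot, underscore and hyphen, 3-30 characters.
    private static readonly Regex Allowed = new Regex(@"^[A-Za-z0-9._-]{3,30}$");

    public static string Validate(string input)
    {
        if (input == null)
        {
            throw new ArgumentNullException("input");
        }

        // Sanitize: remove incidental surrounding whitespace before the check.
        string candidate = input.Trim();

        // Reject anything that falls outside the whitelist.
        if (!Allowed.IsMatch(candidate))
        {
            throw new ArgumentException("Invalid username.");
        }

        return candidate;
    }
}
```
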
Authentication
  • The design identifies the identities or roles that are used to access resources across the trust boundaries.
  • Service accounts or other predefined identities needed to access various system resources are identified and documented.
  • User credentials or authentication tokens are stored in a secure manner, and access to them is appropriately controlled and managed.
  • Where credentials are transmitted over the network, appropriate security protocols and encryption techniques are used.
  • Appropriate account management policies are considered.
  • In case of authentication failures, the error information displayed is minimal, so that it does not reveal clues that could make credential guessing easier.
  • The design adopts a policy of using least-privileged accounts.
  • Password digests with salt are stored in the user store for verification (see the hashing sketch after this list).
  • Password rules are defined so that strong passwords are enforced.
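
A minimal sketch of salted password hashing using the platform's built-in PBKDF2 implementation (Rfc2898DeriveBytes); the iteration count, sizes, and storage format here are illustrative assumptions and should follow current guidance.

```csharp
using System;
using System.Security.Cryptography;

static class PasswordHasher
{
    private const int SaltSize = 16;     // bytes
    private const int HashSize = 32;     // bytes
    private const int Iterations = 10000;

    // Returns "iterations.salt.hash" for storage in the user store.
    public static string HashPassword(string password)
    {
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, SaltSize, Iterations))
        {
            byte[] salt = pbkdf2.Salt;
            byte[] hash = pbkdf2.GetBytes(HashSize);
            return string.Format("{0}.{1}.{2}",
                Iterations, Convert.ToBase64String(salt), Convert.ToBase64String(hash));
        }
    }

    public static bool Verify(string password, string stored)
    {
        string[] parts = stored.Split('.');
        int iterations = int.Parse(parts[0]);
        byte[] salt = Convert.FromBase64String(parts[1]);
        byte[] expected = Convert.FromBase64String(parts[2]);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            byte[] actual = pbkdf2.GetBytes(expected.Length);
            // Note: a production implementation should use a constant-time comparison.
            return Convert.ToBase64String(actual) == Convert.ToBase64String(expected);
        }
    }
}
```
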
Authorization
  • The user role design offers sufficient separation of privileges and considers authorization granularity.
  • Multiple gatekeepers are envisaged for defense in depth.
  • The application’s identity is restricted in the database to access-specific stored procedures and does not have permissions to access tables directly.
  • Access to system-level resources is restricted unless absolutely necessary.
  • Code Access Security requirements are established and considered.

Configuration Management
  • Stronger authentication and authorization are considered for access to administration modules.
  • Secure protocols are used for remote administration of the application.
  • Configuration data is stored in a secure store, and access to it is appropriately controlled and managed.
  • Least-privileged process accounts and service accounts are used.

Sensitive Data
  • The design identifies sensitive data and applies appropriate checks and controls to it.
  • Database connections, passwords, keys, or other secrets are not stored in plain text.
  • The design identifies the methodology to store sensitive data securely. Appropriate algorithms and key sizes are used for encryption.
  • Error logs, audit logs, and other application logs do not store sensitive data in plain text.
  • The design identifies protection mechanisms for sensitive data that is sent over the network.

Session Management
  • The contents of authentication cookies are encrypted.
  • Session lifetime is limited and times out upon expiration.
  • Session state is protected from unauthorized access.
  • Session identifiers are not passed in query strings.

Cryptography
  • Platform-level cryptography is used rather than custom implementations.
  • The design identifies the correct cryptographic algorithm and key size for the application's data encryption requirements (a minimal sketch follows this list).
  • The methodology for securing the encryption keys is identified and is in line with accepted best practices.
  • The design identifies and establishes the key recycle policy for the application.
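
As an example of relying on platform-provided cryptography rather than a custom algorithm, here is a hedged C# sketch of AES-256 encryption; key management is out of scope here, and in a real application the key would come from a protected key store.

```csharp
using System.IO;
using System.Security.Cryptography;

static class DataProtector
{
    public static byte[] Encrypt(byte[] plaintext, byte[] key, out byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.KeySize = 256;   // platform AES with a 256-bit key
            aes.Key = key;
            aes.GenerateIV();    // fresh IV per encryption
            iv = aes.IV;

            using (var encryptor = aes.CreateEncryptor())
            using (var buffer = new MemoryStream())
            using (var cryptoStream = new CryptoStream(buffer, encryptor, CryptoStreamMode.Write))
            {
                cryptoStream.Write(plaintext, 0, plaintext.Length);
                cryptoStream.FlushFinalBlock();
                return buffer.ToArray();
            }
        }
    }
}
```
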
Parameter Manipulation
  • All input parameters are validated including form fields, query strings, cookies, and HTTP headers.
  • Sensitive data is not passed in query strings or form fields.
  • HTTP header information is not relied on to make security decisions.
  • View state is protected using MACs.

Exception Management
  • The design outlines a standardized approach to structured exception handling across the application.
  • Application exception handling minimizes the information disclosure in case of an exception.
  • Application errors are logged to the error log, and the design provides for periodic review of such logs.
  • Sensitive data is not logged as part of the error logs; where such logging is necessary, an appropriate de-identification technique is applied.

Auditing and Logging
  • The design identifies the level of auditing and logging necessary for the application and identifies the key parameters to be logged and audited.
  • The design considers how to flow caller identity across multiple tiers at the operating system or application level for auditing.
  • The design identifies the storage, security, and analysis of the application log files.

Sunday, April 7, 2013

NGINX - A High Performance HTTP Server

What you will find about NGINX on its website is that it is a free, open-source, lightweight HTTP web server and reverse proxy. NGINX doesn't rely on threads to handle requests; instead it uses a much more scalable event-driven (asynchronous) architecture, which uses small and, more importantly, predictable amounts of memory under load. We found that it holds up to this claim. Here is how we ended up using NGINX for one of our clients, and how it met our expectations.


We were on a performance testing assignment for a game application, where much of the game content consisted of static Flash files (dynamic in behaviour, since they carry embedded ActionScript that executes on the front end) and image files, many of them larger than 500 KB. Needless to say, these are already compressed formats, so enabling compression offered no performance gains. These files were served by Apache, which also served the dynamic PHP requests.


Our initial performance tests indicated that at about 1,000 concurrent hits, Apache's memory consumption shot up well beyond 2 GB and many requests were timing out. Examination of the server configuration revealed that Apache was configured for a maximum of 256 child processes and was using the ‘prefork’ MPM module, which limits each child process to a single thread. This effectively caps the number of concurrent requests the Apache server can handle at 256.


We also found that Apache needs to use the ‘prefork’ MPM module in order to serve PHP requests safely. Although later versions of PHP are reported to work with the other MPM (‘worker’, which can use multiple threads per child process), many still have concerns about thread safety. The 256 child process limit is also a hard limit; increasing it requires rebuilding the Apache server.


With this diagnosis, the choice before us was to serve the static content from a server other than Apache and leave Apache to serve only the PHP requests. We thought of trying NGINX and, without wasting much time, went ahead and implemented it. NGINX was configured to listen on port 80 and to serve content from Apache's document root, while Apache was moved to a non-standard port and NGINX acted as a reverse proxy, forwarding all PHP requests to Apache for processing. A minimal configuration along these lines is sketched below.
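
For illustration, an nginx configuration for this kind of split might look roughly like the following sketch; the document root, server name, and Apache port are placeholders rather than the client's actual values.

```nginx
# Hypothetical sketch of the described setup, not the actual client configuration.
server {
    listen 80;
    server_name example.com;

    # Serve static content (Flash, images) directly from Apache's document root.
    root /var/www/html;

    # Forward PHP requests to Apache, now listening on a non-standard port.
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```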


We performed the tests again and found a tremendous improvement. The average latency at the 1,000 concurrent hit stress level came down from about 80 seconds to under 5 seconds, and NGINX itself was consuming only around 10 MB of memory. NGINX by itself imposes no hard limit on the number of concurrent connections; it is constrained only by the limits of the server itself and the other daemons running on it.

Saturday, February 9, 2013

Stress Testing a Multi-player Game Application

I recently had an opportunity to consult for a friend on stress testing a multi-player game application. This was a totally new experience for me, and this blog details how I approached simulating the required amount of stress and testing the application under those conditions.

About the Application Architecture

The application was developed using Flash ActionScript, with a few PHP scripts for some of the supporting activities. The multi-player capability is provided by the SmartFox multi-player gaming middleware, and the application also uses MySQL. The Flash files containing the ActionScript, along with a large number of images, are hosted on an Apache web server, which also serves the PHP. Apache, MySQL, and SmartFox all run on a single cloud-hosted Linux server.

The test approach

My first take on the test approach was to focus on simulating stress on the server and keep the client out of scope. This made sense because all the Flash ActionScript executes on the client side, and in reality there is typically a single user playing the game on a client device, with all of that device's CPU, memory, and related resources available to the client-side application. In other words, there is no multi-player stress on the client.

Given that the focus would be on the impact of stress on server resources, I had to understand how the client communicates with the server, and the request/response protocols and payloads involved. I used Fiddler to monitor the traffic from the client device on which the game was being played, and could only see HTTP requests fetching the Flash and image files and a few of the PHP files. I could not find any HTTP traffic to the SmartFox server, and figured out that those requests are raw TCP socket requests and hence not captured by Fiddler.

The test tools

At this stage it was clear that we needed to simulate stress on Apache by sending large numbers of HTTP requests, and to simulate stress on the SmartFox server over TCP sockets as well. There are numerous open-source tools for simulating HTTP traffic; I chose JMeter, which is open source, UI driven, easy to set up and use, and supports multi-node load generation.

I still needed to find a tool for simulating load on the sockets. I checked with SmartFox to see if they offer a stress test tool, but they don't. A search through the SmartFox forums revealed that a custom tool is the way to go, and that to make it easier we could use one of the SmartFox client API libraries, which are available for .NET, Java, ActionScript, and a few other languages. I settled on the .NET route, as C# is the language I have been working with in recent years.

I built a multi-threaded custom .NET tool using the SmartFox client API to simulate the stress on the SmartFox server. To my surprise, the SmartFox client API library is not designed to be used from multiple threads, and SmartFox support confirmed this behaviour. I then redesigned the custom tool to use a multi-process architecture, and it worked fine; a simplified sketch of the launcher is shown below.
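
The sketch below shows only the process-spawning shell of such a tool, under the assumption that a separate worker executable (named GameClientWorker.exe here purely for illustration) uses the SmartFox client API to connect and exchange game messages over its own TCP socket.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class LoadDriver
{
    static void Main(string[] args)
    {
        // Number of simulated players, one OS process per player.
        int clientCount = args.Length > 0 ? int.Parse(args[0]) : 100;
        var workers = new List<Process>();

        for (int i = 0; i < clientCount; i++)
        {
            // A separate process sidesteps the client API's lack of thread safety.
            var startInfo = new ProcessStartInfo
            {
                FileName = "GameClientWorker.exe",   // hypothetical worker executable
                Arguments = "--player-id " + i,
                UseShellExecute = false
            };
            workers.Add(Process.Start(startInfo));
        }

        foreach (var worker in workers)
        {
            worker.WaitForExit();
        }
        Console.WriteLine("All simulated players have completed.");
    }
}
```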

I also needed a server monitoring tool to measure various server performance parameters under stress. I chose the cloud-based New Relic to monitor the Linux server hosting the game components.

The test execution

I had JMeter configured on three nodes (one of them acting as the monitoring node) and set it up to spawn the desired number of threads. The custom .NET tool ran on another client machine and spawned the desired number of processes, each making a sequence of TCP socket requests. I also engaged a couple of QA resources to play the game and record the user experience under stress conditions.

The test execution went well and we could gather the needed data to form an opinion and make recommendations.



Saturday, December 15, 2012

Effective vs Ineffective Security Governance

Continuing from my earlier blog on Measuring the Performance of EA, I was looking for methods and measures for assessing the effectiveness of an enterprise's security program. I happened to read a CERT article titled Characteristics of Effective Security Governance, which contains a good comparison of what is effective and what is ineffective. I have reproduced it here for quick reference. The original CERT article, though dated, is worth reading.

Effective:
  • Board members understand that information security is critical to the organization and demand to be updated quarterly on security performance and breaches.
  • The board establishes a board risk committee (BRC) that understands security's role in achieving compliance with applicable laws and regulations, and in mitigating organization risk.
  • The BRC conducts regular reviews of the ESP.
  • The board's audit committee (BAC) ensures that annual internal and external audits of the security program are conducted and reported.
Ineffective or absent:
  • Board members do not understand that information security is in their realm of responsibility, and focus solely on corporate governance and profits.
  • Security is addressed ad hoc, if at all.
  • Reviews are conducted following a major incident, if at all.
  • The BAC defers to internal and external auditors on the need for reviews. There is no audit plan to guide this selection.

Effective:
  • The BRC and executive management team set an acceptable risk level, based on comprehensive and periodic risk assessments that take into account reasonably foreseeable internal and external security risks and magnitude of harm.
  • The resulting risk management plan is aligned with the entity's strategic goals, forming the basis for the company's security policies and program.
Ineffective or absent:
  • The CISO locates boilerplate security policies, inserts the organization's name, and has the CEO sign them.
  • If a documented security plan exists, it does not map to the organization's risk management or strategic plan, and does not capture security requirements for systems and other digital assets.

Effective:
  • A cross-organizational security team comprising senior management, general counsel, CFO, CIO, CSO and/or CRO, CPO, HR, internal communication/public relations, and procurement personnel meets regularly to discuss the effectiveness of the security program and new issues, and to coordinate the resolution of problems.
Ineffective or absent:
  • The CEO, CFO, general counsel, HR, procurement personnel, and business unit managers view information security as the responsibility of the CIO, CISO, and IT department and do not get involved.
  • The CSO handles physical and personnel security and rarely interacts with the CISO.
  • The general counsel rarely communicates particular compliance requirements or contractual security provisions to managers and technical staff, or communicates on an ad-hoc basis.

Effective:
  • The CSO/CRO reports to the COO or CEO of the organization with a clear delineation of responsibilities and rights separate from the CIO.
  • Operational policies and procedures enforce segregation of duties (SOD) and provide checks and balances and audit trails against abuses.
Ineffective or absent:
  • The CISO reports to the CIO. The CISO is responsible for all activities associated with system and information ownership.
  • The CRO does not interact with the CISO or consider security to be a key risk for the organization.

Effective:
  • Risks (including security) inherent at critical steps and decision points throughout business processes are documented and regularly reviewed.
  • Executive management holds business leaders responsible for carrying out risk management activities (including security) for their specific business units.
  • Business leaders accept the risks for their systems and authorize or deny their operation.
Ineffective or absent:
  • All security activity takes place within the security department; security works within a silo and is not integrated throughout the organization.
  • Business leaders are not aware of the risks associated with their systems or take no responsibility for their security.

Effective:
  • Critical systems and digital assets are documented and have designated owners and defined security requirements.
Ineffective or absent:
  • Systems and digital assets are not documented and not analyzed for potential security risks that can affect operations, productivity, and profitability. System and asset ownership are not clearly established.

Effective:
  • There are documented policies and procedures for change management at both the operational and technical levels, with appropriate segregation of duties.
  • There is zero tolerance for unauthorized changes, with identified consequences if they are intentional.
Ineffective or absent:
  • The change management process is absent or ineffective. It is not documented or controlled.
  • The CIO (instead of the CISO) ensures that all necessary changes are made to security controls. In effect, SOD is absent.

Effective:
  • Employees are held accountable for complying with security policies and procedures. This includes reporting any malicious security breaches, intentional compromises, or suspected internal violations of policies and procedures.
Ineffective or absent:
  • Policies and procedures are developed but no enforcement or accountability practices are envisioned or deployed. Monitoring of employees and checks on controls are not routinely performed.

Effective:
  • The ESP implements sound, proven security practices and standards necessary to support business operations.
Ineffective or absent:
  • No or minimal security standards and sound practices are implemented. Using these is not viewed as a business imperative.

Effective:
  • Security products, tools, managed services, and consultants are purchased and deployed in a consistent and informed manner, using an established, documented process.
  • They are periodically reviewed to ensure they continue to meet security requirements and are cost effective.
Ineffective or absent:
  • Security products, tools, managed services, and consultants are purchased and deployed without any real research or performance metrics by which to determine their ROI or effectiveness.
  • The organization has a false sense of security because it is using products, tools, managed services, and consultants.

Effective:
  • The organization reviews its enterprise security program, security processes, and security's role in business processes.
  • The goal of the ESP is continuous improvement.
Ineffective or absent:
  • The organization does not have an enterprise security program and does not analyze its security processes for improvement.
  • The organization addresses security in an ad-hoc fashion, responding to the latest threat or attack, often repeating the same mistakes.

Effective:
  • Independent audits are conducted by the BAC and independent reviews by the BRC. Results are discussed with leaders and the board, and corrective actions are taken in a timely manner and reviewed.
Ineffective or absent:
  • Audits and reviews are conducted after major security incidents, if at all.


The article also lists eleven characteristics of effective security governance, in addition to ten challenges to implementing effective security governance. I would highly recommend reading the full article.


References:
CERT’s resources on Governing for Enterprise Security


CERT and CERT Coordination Center are registered in the U.S. Patent and Trademark Office by Carnegie Mellon University

Saturday, February 18, 2012

Developers’ take away from a support project


Developers usually prefer development projects over production support projects. They want new technical challenges and like to work with the latest tools and platforms, and since most development projects offer this advantage, they tend to stay away from production support. In reality, though, production support projects offer certain key benefits that developers very much need as they move up their career path. Let us examine some of these here.

The real life business scenarios

A software project begins with perceived business requirements as drafted by business analysts and approved by customers. In most cases the requirements are far from complete, which forces developers to live with ambiguity and leaves room for more defects in the product they build. However much the software is tested, once it hits production use, real-life business scenarios will inevitably throw it out of gear and make it fail. Those involved in support projects therefore get to deal with production business scenarios, which sharpens their business and domain knowledge. As the world embraces cloud and SaaS applications, there will be less development and more customization and configuration management, which means that domain skills will rank very high among SaaS providers and consumers.

Better product / domain knowledge

In product development, it is quite possible that a developer or a team of developers works on just a small part of a product, which means developers on development projects have little opportunity to gain a complete understanding of it. Developers involved in production support, on the other hand, get to work with all parts of the product, and sometimes across other products too. They gain better visibility into the operating processes and practices associated with a use case, and thereby better product and domain knowledge.

Solution design skill

Developers tend to believe that support projects offer little opportunity in the solution design space, which is a myth. A production defect is far more difficult to deal with than a defect identified during the development life cycle. At a high level, resolving a production defect involves the following steps:

  • Quickly come up with a data fix to maintain data integrity if it has been impacted by the defect.
  • Perform a root-cause analysis and work out the real-life scenarios that could lead to the defect being encountered.
  • Come up with an interim workaround, if one is available, to prevent the defect from recurring in the short term.
  • Identify the best solution to prevent it from recurring. This is rather challenging, as the solution has to be designed within the existing product architecture, with minimal effort and the least impact on the already working software.


Each of these steps, when done well and combined with exposure to real-life scenarios, adds tremendous value to a developer's abilities and moves them towards becoming software or solution architects. Solutions in support projects also reach production quicker than in development projects, and as such earn high appreciation from business teams.

Code Re-factoring

Learning from one's own mistakes is a good way of learning, but learning from others' mistakes is a smarter one. Every time a developer attempts to resolve a production defect, they are likely to be looking at code written by someone else and may come across many different ways of achieving a result. Taken positively, a support developer can enjoy reading through code written by others, picking up better algorithms and, at the same time, learning how not to write code. This will certainly improve their coding abilities.
Developers supporting a production instance of a software product will also realize how important code readability is, and will hopefully make it a habit to write readable code with appropriate comments and indentation.

Trouble shooting expertise

Usually, software products are moved to production after at least three levels of testing. A defect in production means it has slipped through all the testing phases during development, so the scenario under which it surfaces is not something that was visualized during development. Some such defects are very difficult to reproduce, and without reproducing them, resolving them is a nightmare. Those involved in support projects are quite often exposed to such scenarios, and over time they gain good troubleshooting expertise. See my other blog post on debugging performance problems.

Collaboration with other teams

During the development phase, a software developer typically looks to their lead for clarifications on assigned work and gets little exposure to other teams. Those involved in production support, by contrast, get to work with various other teams: infrastructure, IT security, subject matter experts, quality assurance, business analysts, end users, and third-party vendors when any of their components are used. This collaboration and interaction creates room for acquiring additional skills, both technical and soft.

Conclusion

Being in production, support projects enable the enterprise to perform its operations and earn profits on an ongoing basis; they play a vital part in the business continuity of the enterprise. As long as production software is well supported and maintained, IT heads will not think of replacing it unless a major technology overhaul is expected.

Of course, there are certain downsides to support projects too. For instance, one may have to be on call to handle emergencies, and at times a hard-to-crack defect can result in tremendous pressure and stress.

Friday, December 16, 2011

Debugging a performance problem


As with much typical application development, performance is conveniently ignored in most phases of the development life cycle; despite being a key non-functional requirement, it mostly remains undocumented. This is made worse by the fact that development, test, and UAT environments may not truly represent real-world production usage, so some performance problems cannot be spotted earlier. Even if the application is load tested, there are certain factors in the production environment, such as data growth and user load, that can lead to performance degradation over time.

While most performance problems can easily be spotted and resolved, some are a real challenge and may take sleepless nights to fix. A structured approach can help address such issues within a reasonably short time frame. Here is a step-by-step approach that should work in most cases.

1.       Understand the production environment

It is important to understand the production environment thoroughly, identifying the various hardware and networking resources and the middleware components involved in delivering the application. In a typical n-tiered application, a request may pass through and be processed by multiple appliances and servers before a response is returned to the user. Also understand which of these components can collect logs and metrics, or can be monitored in real time.

2.       Understand the specific feedback from the end users

Gather details such as who noticed the performance degradation, in what time frame, and whether it recurs in a pattern or simply pulls the system down. Also establish whether the entire application is slowing down or only specific application components are underperforming. Try to experience the problem first hand, sitting alongside an end user or, if possible, using appropriate user credentials to reproduce the performance issue. The ‘who’ also matters: in certain circumstances the slowdown may affect only users with a specific role, since the amount of data to be processed and transmitted may differ by role.

3.       Review available logs and metrics

Gather the logs and metrics collected by the various hardware and software components and look for information relevant to the specific application, or more precisely to the set of requests that demonstrate the performance issue. Since logging itself can be a performance overhead, production systems are often configured to switch logs off or to collect only minimal logs. If that is the case, configure or make the necessary code changes to achieve an appropriate level of logging, and then collect the required details by re-deploying the application to a production-equivalent environment.

4.       Isolate the problem area

This step is very important and can be very challenging too. Take the help of the developers and of performance and load testing tools to simulate the problem, and meanwhile monitor the key measurements as the request and response pass through the various hardware and software components.

By analyzing the data gathered from the end users or from first-hand experience, together with the available logs and metrics, try to isolate the issue to a specific hardware or software component. This is best done with the following steps:

a.       Trace the request from the UI to its final destination, which is typically the database.

b.      If the request reaches the final destination, measure the time taken for it to cross the various physical and logical layers and look for anything that could cause the slowdown (a simple timing sketch follows this step). If a hardware resource is over-utilized, requests may be queued up or rejected after a timeout; look for such information in the logs.

c.       Then review the response cycle and try to spot the delays in the return path.

d.      Try the elimination technique, whereby the components involved are cleared of performance bottlenecks one after another, starting from the bottom.

Experience and expertise with the application and the infrastructure architecture come in handy for spotting the problem area quickly. There may be multiple problems, whether or not they contribute to the problem at hand, and this can shift the focus to different areas and lengthen the time to resolution. It is important to stay focused and keep proceeding in the right direction.
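
Where the application code can be instrumented, a simple timing wrapper helps attribute latency to individual layers. The sketch below is a minimal, hypothetical example; the class, method, and layer names are assumptions, and a real setup would write to the application log rather than the console.

```csharp
using System;
using System.Diagnostics;

class LatencyProbe
{
    // Wraps any operation and reports how long it took at a named layer boundary.
    public static T Measure<T>(string layerName, Func<T> operation)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return operation();
        }
        finally
        {
            stopwatch.Stop();
            // In a real setup this would go to the application log for correlation.
            Console.WriteLine("{0} took {1} ms", layerName, stopwatch.ElapsedMilliseconds);
        }
    }
}

// Usage: wrap calls at each tier boundary, for example
// var orders = LatencyProbe.Measure("DataAccess.GetOrders", () => repository.GetOrders(customerId));
```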

5.       Simulate the problem in the Test/UAT environment

Make sure that the findings are correct by simulating the problem multiple times. This will reveal much more data and help characterize the problem better.

6.       Perform reviews

If the problem area has already been isolated in any of the steps above, narrow the scope of the review to the components involved in that area. If not, the scope of the review is a little wider, and you must look for problem areas in every component involved in the request/response cycle. Code reviews aimed at performance issues require particular skills: looping blocks, disk usage, and processor-intensive operations are candidates for detailed review. Similarly, in a distributed application, too many back-and-forth calls between physical tiers can easily contribute to a performance problem. Good knowledge of the third-party components and operating system APIs consumed by the application can also be helpful.

When the problem is isolated to a server and the application components seem to have no issues, it is possible that other services or components running on the server are loading its resources and thereby impacting the application under review. If the problem is isolated to the database server, look for deadlocks, missing or inappropriate indexes, and so on. Sometimes the lack of archival or data retention policies results in database tables growing at a much faster pace, leading to performance degradation.

7.       Identify the root cause

By now, one should have identified the specific application procedure or function that is the likely cause of the problem at hand. Validate it with further simulations and tests in a production-equivalent environment.

8.       Come up with solution

It is not over yet, as root cause identification should be followed by a solution. Sometimes the solution may require a change in the architecture and have a larger impact on the entire application. An ideal solution prevents the problem from recurring, does not introduce new problems, and requires minimal effort. If the ideal solution is not possible within the various constraints, offer a break-fix solution so that the business can continue, and plan to implement the ideal solution in the longer term.

I hope this is a useful read for those of you in production support. Feel free to share your thoughts on this subject in the comments.

Tuesday, November 23, 2010

High Volume Transaction Processing

I came across a presentation on payment processing by Voca at InfoQ. The key design principles used by Voca may be of interest to those who deal with the problems of high-volume transaction processing. These principles are:

1. Minimize movement of data: Moving data across physical and logical layers can result in heavy network traffic and necessitates complex transaction and exception management across multiple layers. The idea is that whenever a set of transactions needs to be processed for validation or transformation, do it within the database instead of moving the data to other layers and then bringing it back. A sketch of this idea follows.
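
A hedged C# sketch of this idea with ADO.NET: rather than reading rows into the application, transforming them, and writing them back, a single set-based statement does the work inside the database. The table, columns, and rule shown are made up for illustration.

```csharp
using System.Data.SqlClient;

class SetBasedProcessing
{
    // Flags invalid pending payments in one statement, entirely inside the database.
    static int FlagInvalidPayments(string connectionString)
    {
        const string sql =
            @"UPDATE Payments
                 SET Status = 'REJECTED'
               WHERE Status = 'PENDING'
                 AND Amount <= 0";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            // No rows travel to the application tier and back; the database
            // applies the validation to the whole set in place.
            return command.ExecuteNonQuery();
        }
    }
}
```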

2. Task parallelization: This is an area many of us may not have considered. Using the work manager / worker architectural pattern, tasks can be executed by multiple workers, which can run on separate physical nodes, with the ability to add more nodes on demand. A simplified single-machine sketch follows.
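
As a rough single-machine analogue of the work manager / worker pattern (the real pattern distributes workers across nodes), here is a hedged C# sketch using a shared queue and a small pool of workers; the transaction IDs and worker count are arbitrary.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

class WorkManagerSketch
{
    static void Main()
    {
        // Bounded queue shared between the work manager and the workers.
        var workQueue = new BlockingCollection<int>(boundedCapacity: 1000);

        // Workers: each consumes tasks from the shared queue.
        var workers = new Task[4];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = Task.Run(() =>
            {
                foreach (var transactionId in workQueue.GetConsumingEnumerable())
                {
                    ProcessTransaction(transactionId);
                }
            });
        }

        // Work manager: enqueues transactions to be processed.
        for (int id = 1; id <= 10000; id++)
        {
            workQueue.Add(id);
        }
        workQueue.CompleteAdding();
        Task.WaitAll(workers);
    }

    static void ProcessTransaction(int id)
    {
        // Placeholder for the validation / transformation work.
    }
}
```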

3. Physically partition data: When the all-in-one database hits its scalability limits, the option is to partition it. It is ideal to envision the possible physical partitioning of the data and implement it right from the beginning.

4. Optimized reads and writes of volatile data: This is one principle most of us already adhere to by creating the necessary indexes, managing fetch sizes, and so on.

5. Minimize contention: Contention should certainly be avoided, as it causes workers to wait for resources or data to be released, directly impacting performance. One option is not to wait for the release of a resource and instead look for an alternate source of the data or resource. Of course, this requires a well-thought-through design with multiple synchronized instances providing the data or resource.

6. Asynchronous decoupling: Using middleware such as a message queue can certainly help here, thereby improving response times for the consuming applications.

7. Keep complex business logic outside the database: Considering the limited scalability options of the database, it is ideal to minimize its workload by shifting work to other tiers where possible.

8. Cache frequently accessed data: There is no point in traversing multiple physical and logical layers to fetch the same data repeatedly. Caching such data in the appropriate layers frees the network and other resources for more useful work. A small caching sketch follows.
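
A hedged C# sketch of caching a frequently read, rarely changing lookup using the platform's MemoryCache; the cache key, expiry, and loader are illustrative assumptions.

```csharp
using System;
using System.Runtime.Caching;

class ReferenceDataCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string[] GetCurrencyCodes()
    {
        // Serve from cache when possible, skipping the database round trip.
        var cached = Cache.Get("CurrencyCodes") as string[];
        if (cached != null)
        {
            return cached;
        }

        string[] codes = LoadCurrencyCodesFromDatabase();
        Cache.Set("CurrencyCodes", codes,
                  new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(30) });
        return codes;
    }

    private static string[] LoadCurrencyCodesFromDatabase()
    {
        // Placeholder: in a real application this would query the database.
        return new[] { "USD", "EUR", "GBP", "INR" };
    }
}
```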

The referenced presentation is available on the InfoQ website at http://www.infoq.com/presentations/qcon-voca-architecture-spring