CLOUD SECURITY
Security Overview – Cloud Security Challenges – Data Security – Application Security – Virtual
Machine Security – Cloud Infrastructure Security: network, host and application level – Azure
Firewall – Load Balancer – Traffic Manager – Network Security Groups and Application Security
Groups
Security Overview
Security in cloud computing is a major concern. Data in the cloud should be stored in encrypted
form. To restrict clients from accessing shared data directly, proxy and brokerage services
should be employed.
Security Planning
Before deploying a particular resource to the cloud, one should analyse several aspects of
the resource, such as:
 Select the resource that needs to move to the cloud and analyse its sensitivity to risk.
 Consider cloud service models such as IaaS, PaaS, and SaaS. These models require the
customer to be responsible for security at different levels of service.
 Consider the cloud type to be used, such as public, private, community or hybrid.
 Understand the cloud service provider's system for data storage and for data transfer into
and out of the cloud.
The risk in cloud deployment mainly depends upon the service models and cloud types.
Understanding Security of Cloud
Security Boundaries
A particular service model defines the boundary between the responsibilities of the service
provider and the customer. The Cloud Security Alliance (CSA) stack model defines the boundaries
between each service model and shows how different functional units relate to each other. The
key points of the CSA stack model are summarized below.
Key Points to CSA Model
 IaaS is the most basic level of service, with PaaS and SaaS as the next two levels above
it.
 Moving upwards, each service inherits the capabilities and security concerns of the
model beneath it.
 IaaS provides the infrastructure, PaaS provides the platform development environment, and
SaaS provides the operating environment.
 IaaS has the least level of integrated functionalities and integrated security while SaaS
has the most.
 This model describes the security boundaries at which cloud service provider's
responsibilities end and the customer's responsibilities begin.
 Any security mechanism below the security boundary must be built into the system and
should be maintained by the customer.
Although each service model has its own security mechanisms, the security needs also depend upon
where these services are located: in a private, public, hybrid or community cloud.
Understanding Data Security
Since all data is transferred over the Internet, data security is a major concern in the cloud.
Here are the key mechanisms for protecting data:
 Access Control
 Auditing
 Authentication
 Authorization
All of the service models should incorporate security mechanisms operating in all of the above-
mentioned areas.
Isolated Access to Data
Since data stored in the cloud can be accessed from anywhere, there must be a mechanism to
isolate the data and protect it from direct client access.
Brokered Cloud Storage Access is an approach for isolating storage in the cloud. In this
approach, two services are created:
 A broker with full access to storage but no access to the client.
 A proxy with no access to storage but with access to both the client and the broker.
Working Of Brokered Cloud Storage Access System
When the client issues a request to access data:
 The client data request goes to the external service interface of proxy.
 The proxy forwards the request to the broker.
 The broker requests the data from cloud storage system.
 The cloud storage system returns the data to the broker.
 The broker returns the data to the proxy.
 Finally, the proxy sends the data to the client.
These steps are illustrated in the sketch below.
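Below is a minimal Python sketch of this brokered pattern. The CloudStorage, Broker and Proxy classes are hypothetical stand-ins (not part of any real cloud SDK), meant only to show how the proxy and broker keep clients away from the storage system.

# Minimal sketch of brokered cloud storage access (hypothetical classes,
# not tied to any specific cloud SDK).

class CloudStorage:
    """Backing store; only the broker may call it."""
    def __init__(self):
        self._objects = {"report.txt": b"confidential contents"}

    def read(self, key):
        return self._objects[key]


class Broker:
    """Full access to storage, but never talks to clients directly."""
    def __init__(self, storage):
        self._storage = storage

    def fetch(self, key):
        return self._storage.read(key)


class Proxy:
    """Client-facing service; has no storage credentials of its own."""
    def __init__(self, broker):
        self._broker = broker

    def handle_client_request(self, client_id, key):
        # Authenticate/authorize the client here before forwarding.
        if not client_id:
            raise PermissionError("unauthenticated client")
        return self._broker.fetch(key)


# A client only ever sees the proxy's interface.
proxy = Proxy(Broker(CloudStorage()))
print(proxy.handle_client_request("client-42", "report.txt"))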
Encryption
Encryption helps to protect data from being compromised. It protects data that is being
transferred as well as data stored in the cloud. Although encryption helps to protect data from
any unauthorized access, it does not prevent data loss.
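As an illustration of encrypting data before it is written to the cloud, here is a small sketch using the Python cryptography library's Fernet symmetric encryption. The key is generated locally only for the demo; in practice it would live in a key management service.

from cryptography.fernet import Fernet

# Key generation and storage would normally be handled by a key
# management service; here the key is generated locally for the demo.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer record: account 1001"
ciphertext = cipher.encrypt(plaintext)   # what actually goes to the cloud

# Only a holder of the key can recover the data.
assert cipher.decrypt(ciphertext) == plaintext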
Cloud Security Challenges
Cloud computing is a technology that provides remote services over the Internet to manage,
access, and store data rather than storing it on local servers or drives. This technology is
sometimes loosely called serverless technology. The data can be anything: images, audio, video,
documents, files, etc.
Need for Cloud Computing:
Before cloud computing, most large and small IT companies used traditional methods: they stored
data on their own servers and needed a separate server room for that. A server room had to house
database servers, mail servers, firewalls, routers, modems, high-speed network devices, and so
on, which cost IT companies a great deal of money. Cloud computing came into existence to reduce
these costs and problems, and most companies have shifted to this technology.
Cloud Security:
Cloud security is a set of control-based technologies and policies designed to adhere to
regulatory compliance rules and to protect data, applications and the cloud infrastructure.
Because of the cloud's nature of sharing resources, cloud security gives particular attention to
identity management, privacy and access control, so data in the cloud should be stored in
encrypted form. With the increase in the number of organizations using cloud technology for data
operations, proper security of these and other potentially vulnerable areas has become a priority
for organizations contracting with cloud providers. Cloud computing security encompasses the
security controls in the cloud and provides customer data security, privacy and compliance with
the necessary regulations.
Security Planning for Cloud
Before using cloud technology, users should analyze several aspects.
These are:
 Analyze the sensitivity to risk of the user's resources.
 The cloud service models require the customer to be responsible for security at various
levels of service.
 Understand the data storage and transfer mechanism provided by the cloud service
provider.
 Consider the proper cloud type to be used.
Cloud Security Controls
Cloud security becomes effective only if the defensive implementation remains strong.
There are many types of control for cloud security architecture; the categories are listed below:
1. Detective Controls are meant to detect any incident and react to it instantly and
appropriately.
2. Preventive Controls strengthen the system against incidents or attacks by actually
eliminating the vulnerabilities.
3. Deterrent Controls are meant to reduce attacks on the cloud system; they reduce the threat
level by giving a warning sign.
4. Corrective Controls reduce the consequences of an incident by controlling or limiting the
damage. Restoring a system backup is an example of this type.
Security Issues in Cloud Computing:
There is no doubt that cloud computing provides various advantages, but there are also some
security issues, described below.
1. Data Loss –
Data loss is one of the issues faced in cloud computing. It is also known as data
leakage. Our sensitive data is in the hands of somebody else, and we do not have full
control over the database. So if the security of a cloud service is broken by hackers,
they may gain access to our sensitive data or personal files.
2. Interference of Hackers and Insecure APIs –
Talking about the cloud and its services means talking about the Internet, and the
easiest way to communicate with the cloud is through an API. It is therefore important
to protect the interfaces and APIs that are used by external users. In cloud computing,
some services are also available in the public domain. These are a vulnerable part of
cloud computing because they may be accessed by third parties, and with the help of
these services hackers can more easily reach or harm our data.
3. User Account Hijacking –
Account hijacking is among the most serious security issues in cloud computing. If the
account of a user or an organization is hijacked by a hacker, the hacker gains full
authority to perform unauthorized activities.
4. Changing Service Provider –
Vendor lock-in is also an important security issue in cloud computing. Many
organizations face various problems while shifting from one vendor to another. For
example, an organization that wants to shift from AWS to Google Cloud faces problems
such as migrating all of its data; the two cloud services also have different techniques
and functions, which causes further difficulties, and their charges may differ as well.
5. Lack of Skill –
Day-to-day operation, shifting to another service provider, needing an extra feature, or
knowing how to use a feature are common problems in IT companies that do not have
skilled employees. Working with cloud computing requires skilled people.
6. Denial of Service (DoS) Attack –
This type of attack occurs when a system receives too much traffic. DoS attacks mostly
target large organizations such as the banking sector, the government sector, etc. When
a DoS attack occurs, data can be lost, and recovering it requires a great amount of
money as well as time.
Data Security
Cloud data security refers to the technologies, policies, services and security controls
that protect any type of data in the cloud from loss, leakage or misuse through breaches,
exfiltration and unauthorized access. A robust cloud data security strategy should include:
 Ensuring the security and privacy of data across networks as well as within applications,
containers, workloads and other cloud environments
 Controlling data access for all users, devices and software
 Providing complete visibility into all data on the network
The cloud data protection and security strategy must also protect data of all types. This
includes:
 Data in use: Securing data being used by an application or endpoint through user
authentication and access control
 Data in motion: Ensuring the safe transmission of sensitive, confidential or proprietary
data while it moves across the network through encryption and/or other email and
messaging security measures
 Data at rest: Protecting data that is being stored on any network location, including
the cloud, through access restrictions and user authentication
Cloud computing threats to data security
While cybersecurity threats that apply to on-premises infrastructure also extend to cloud
computing, the cloud brings additional data security threats. Here are some of the common
ones:
Unsecure application programming interfaces (APIs)—many cloud services and
applications rely on APIs for functionalities such as authentication and access, but these
interfaces often have security weaknesses such as misconfigurations, opening the door to
compromises.
Account hijacking or takeover—many people use weak passwords or reuse compromised
passwords, which gives cyberattackers easy access to cloud accounts.
Insider threats—while these are not unique to the cloud, the lack of visibility into the cloud
ecosystem increases the risk of insider threats, whether the insiders are gaining unauthorized
access to data with malicious intent or are inadvertently sharing or storing sensitive data via
the cloud.
Types of data security
Encryption
Using an algorithm to transform normal text characters into an unreadable format, encryption
keys scramble data so that only authorized users can read it. File and database encryption
solutions serve as a final line of defense for sensitive volumes by obscuring their contents
through encryption or tokenization. Most solutions also include security key management
capabilities.
Data Erasure
More secure than standard data wiping, data erasure uses software to completely overwrite
data on any storage device. It verifies that the data is unrecoverable.
Data Masking
By masking data, organizations can allow teams to develop applications or train people using
real data. It masks personally identifiable information (PII) where necessary so that
development can occur in environments that are compliant.
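A simple, hypothetical sketch of masking PII fields before records are handed to a development team (real masking tools additionally preserve format and referential integrity):

import re

def mask_email(value):
    """Replace the local part of an email address, keep the domain."""
    return re.sub(r"^[^@]+", "*****", value)

def mask_record(record):
    # Mask only the fields classified as PII; leave the rest usable.
    masked = dict(record)
    masked["email"] = mask_email(record["email"])
    masked["ssn"] = "***-**-" + record["ssn"][-4:]
    return masked

print(mask_record({"name": "A. User", "email": "a.user@example.com",
                   "ssn": "123-45-6789", "plan": "premium"}))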
Data Resiliency
Resiliency is determined by how well an organization endures or recovers from any type of
failure – from hardware problems to power shortages and other events that affect data
availability. Speed of recovery is critical to minimize impact.
Safeguards for data security in cloud computing
Data security in the cloud starts with identity governance. You need a comprehensive,
consolidated view of data access across your on-premises and cloud platforms and
workloads. Identity governance provides:
Visibility—the lack of visibility results in ineffective access control, increasing both your
risks and costs.
Federated access—this eliminates manual maintenance of separate identities by leveraging
your Active Directory or other system of record.
Monitoring—you need a way to determine if the access to cloud data is authorized and
appropriate.
Governance best practices include automating processes to reduce the burden on your IT
team, as well as auditing your security tools routinely to ensure continuous risk mitigation as
your environment evolves.
In addition to governance, here are some other recommended data security safeguards for
cloud computing:
Deploy encryption. Ensure that sensitive and critical data, such as PII and intellectual
property, is encrypted both in transit and at rest. Not all vendors offer encryption, and you
should consider implementing a third-party encryption solution for added protection.
Back up the data. While vendors have their own backup procedures, it’s essential to back up
your cloud data locally as well. Use the 3-2-1 rule for data backup: Keep at least three copies,
store them on at least two different media, and keep at least one backup offsite (in the case of
the cloud, the offsite backup could be the one executed by the vendor).
Implement identity and access management (IAM). Your IAM technology and policies
ensure that the right people have appropriate access to data, and this framework needs to
encompass your cloud environment. Besides identity governance, IAM components include
access management (such as single sign-on, or SSO) and privileged access management.
Manage your password policies. Poor password hygiene is frequently the cause of data
breaches and other security incidents. Use password management solutions to make it simple
for your employees and other end users to maintain secure password practices.
Adopt multi-factor authentication (MFA). In addition to using secure password practices,
MFA is a good way to mitigate the risk of compromised credentials. It creates an extra hurdle
that threat actors must overcome as they try to gain entry to your cloud accounts.
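As a sketch of how the second factor works, a time-based one-time password (TOTP, RFC 6238) can be generated and verified with the pyotp library; the secret is created on the spot purely for illustration, whereas a real service provisions it once and stores it securely.

import pyotp

# The shared secret is provisioned once (e.g. via a QR code) and stored
# by both the authenticator app and the service.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code_from_user = totp.now()          # what the authenticator app displays
print("Valid second factor:", totp.verify(code_from_user))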
Business Risks to Storing Data in the Cloud
Though storing data within the cloud offers organizations many important benefits, this
environment is not without challenges. Here are some risks businesses may face of storing
data in the cloud without the proper security measures in place:
1. Data breaches
Data breaches occur differently in the cloud than in on-premises attacks. Malware is less
relevant. Instead, attackers exploit misconfigurations, inadequate access, stolen
credentials and other vulnerabilities.
2. Misconfigurations
Misconfigurations are the No. 1 vulnerability in a cloud environment and can lead to
overly permissive privileges on accounts, insufficient logging and other security gaps that
expose organizations to cloud breaches, insider threats and adversaries who leverage
vulnerabilities to gain access to data.
3. Unsecured APIs
Businesses often use APIs to connect services and transfer data, either internally or to
partners, suppliers, customers and others. Because APIs turn certain types of data into
endpoints, changes to data policies or privilege levels can increase the risk of unauthorized
access to more data than the host intended.
4. Access control/unauthorized access
Organizations using multi-cloud environments tend to rely on default access controls of
their cloud providers, which becomes an issue particularly in a multi-cloud or hybrid cloud
environment. Insider threats can do a great deal of damage with their privileged access,
knowledge of where to strike, and ability to hide their tracks.
Cloud Data Security Best Practices
To ensure the security of their data, organizations must adopt a comprehensive
cybersecurity strategy that addresses data vulnerabilities specific to the cloud.
Key elements of a robust cloud data security strategy include:
1. Leverage advanced encryption capabilities
One effective way to protect data is to encrypt it. Cloud encryption transforms data from
plain text into an unreadable format before it enters the cloud. Data should be encrypted
both in transit and at rest.
There are different out-of-the-box encryption capabilities offered by cloud service
providers for data stored in block and object storage services. To protect the security of
data-in-transit, connections to cloud storage services should be made using encrypted
HTTPS/TLS connections.
Data encryption is by default enabled in cloud platforms using platform-managed
encryption keys. However, customers can gain additional control over this by bringing
their own keys and managing them centrally via encryption key management services in
the cloud. Organizations with stricter security standards and compliance requirements can
implement native hardware security module (HSM)-enabled key management services or even
third-party services for protecting data encryption keys.
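As one concrete illustration (assuming AWS and the boto3 SDK; the bucket name and key alias are placeholders), an object can be uploaded with server-side encryption under a customer-managed KMS key:

import boto3

s3 = boto3.client("s3")

# Server-side encryption with a customer-managed KMS key; boto3 uses
# HTTPS/TLS for the connection itself by default.
s3.put_object(
    Bucket="example-data-bucket",          # placeholder bucket name
    Key="reports/2024/q1.csv",
    Body=b"...sensitive contents...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-data-key",  # placeholder key alias
)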
2. Implement a data loss prevention (DLP) tool.
Data loss prevention (DLP) is part of a company’s overall security strategy that focuses
on detecting and preventing the loss, leakage or misuse of data through breaches,
exfiltration and unauthorized access.
A cloud DLP is specifically designed to protect those organizations that leverage cloud
repositories for data storage.
3. Enable unified visibility across private, hybrid and multi-cloud environments.
Unified discovery and visibility of multi-cloud environments, along with continuous
intelligent monitoring of all cloud resources are essential in a cloud security solution. That
unified visibility must be able to detect misconfigurations, vulnerabilities and data security
threats, while providing actionable insights and guided remediation.
4. Ensure security posture and governance.
Another key element of data security is having the proper security policy and governance
in place that enforces golden cloud security standards, while meeting industry and
government regulations across the entire infrastructure. A cloud security posture
management (CSPM) solution that detects and prevents misconfigurations and control
plane threats is essential for eliminating blind spots and ensuring compliance across
clouds, applications and workloads.
5. Strengthen identity and access management (IAM).
Identity and access management (IAM) helps organizations streamline and automate
identity and access management tasks and enable more granular access controls and
privileges. With an IAM solution, IT teams no longer need to manually assign access
controls, monitor and update privileges, or deprovision accounts. Organizations can also
enable a single sign-on (SSO) to authenticate the user’s identity and allow access to
multiple applications and websites with just one set of credentials.
When it comes to IAM controls, the rule of thumb is to follow the principle of least
privilege, which means allowing required users to access only the data and cloud resources
they need to perform their work.
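As a sketch of least privilege, the policy below (AWS IAM's JSON policy format, written here as a Python dictionary; the bucket ARN is a placeholder) grants read-only access to a single bucket prefix and nothing else:

import json

# Least-privilege policy: read-only access to one prefix of one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-data-bucket/reports/*",
        }
    ],
}
print(json.dumps(policy, indent=2))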
6. Enable cloud workload protection.
Cloud workloads increase the attack surface exponentially. Protecting workloads requires
visibility and discovery of each workload and container events, while securing the entire
cloud-native stack, on any cloud, across all workloads, containers, Kubernetes and
serverless applications. Cloud workload protection (CWP) includes vulnerability scanning
and management, and breach protection for workloads, including containers, Kubernetes
and serverless functions, while enabling organizations to build, run and secure cloud
applications from development to production.
Application Security
Cloud application security is the process of securing cloud-based software applications
throughout the development lifecycle. It includes application-level policies, tools,
technologies and rules to maintain visibility into all cloud-based assets, protect cloud-
based applications from cyberattacks and limit access only to authorized users.
Cloud application security is crucially important for organizations that are operating in a
multi-cloud environment hosted by a third-party cloud provider such as Amazon,
Microsoft or Google, as well as those that use collaborative web applications such as
Slack, Microsoft Teams or Box. These services or applications, while transformational in
nature to the business and its workforce, dramatically increase the attack surface,
providing many new points of access for adversaries to enter the network and unleash
attacks.
The Need For Cloud Application Security
Modern enterprise workloads are spread across a wide variety of cloud platforms ranging from
suites of SaaS products like Google Workspaces and Microsoft 365 to custom cloud-native
applications running across multiple hyper-scale cloud service providers.
As a result, network perimeters are more dynamic than ever and critical data and workloads
face threats that simply didn’t exist a decade ago. Enterprises must be able to ensure workloads
are protected wherever they run. Additionally, cloud computing adds a new wrinkle to data
sovereignty and data governance that can complicate compliance.
Individual cloud service providers often offer security solutions for their platforms, but in a
world where multi-cloud is the norm — a Gartner survey indicated over 80% of public cloud
users use multiple providers — solutions that can protect an enterprise end-to-end across all
platforms are needed.
Cloud Application Security Threats
 Account hijacking: Weak passwords and data breaches often lead to legitimate accounts being
compromised. If an attacker compromises an account, they can gain access to sensitive data
and completely control cloud assets.
 Credential exposure: A corollary to account hijacking is credential exposure. As the
SolarWinds security breach demonstrated, exposing credentials in the cloud (GitHub in this
case) can lead to account hijacking and a wide range of sophisticated long-term attacks.
 Bots and automated attacks: Bots and malicious scanners are an unfortunate reality of
exposing any service to the Internet. As a result, any cloud service or web-facing application
must account for the threats posed by automated attacks.
 Insecure APIs: APIs are one of the most common mechanisms for sharing data — both
internally and externally — in modern cloud environments. However, because APIs are often
both feature- and data-rich, they are a popular attack surface for hackers.
 Oversharing of data: Cloud data storage makes it trivial to share data using URLs. This
greatly streamlines enterprise collaboration. However, it also increases the likelihood of assets
being accessed by unauthorized or malicious users.
 DoS attacks: Denial of Service (DoS) attacks against large enterprises have been a
cybersecurity threat for a long time. With so many modern organizations dependent on public
cloud services, attacks against cloud service providers can now have an exponential impact.
 Misconfiguration: One of the most common reasons for data breaches is misconfigurations.
The frequency of misconfiguration in the cloud is due in large part to the complexity involved
in configuration management (which leads to disjointed manual processes) and access control
across cloud providers.
 Phishing and social engineering: Phishing and social engineering attacks that exploit the
human side of enterprise security are one of the most frequently exploited attack vectors.
 Complexity and lack of visibility: Because many enterprise environments are multi-cloud,
the complexity of configuration management, granular monitoring across platforms, and access
control often lead to disjointed workflows that involve manual configuration and limit visibility,
which further exacerbates cloud security challenges.
Types Of Cloud Application Security Solutions
There is no shortage of security solutions designed to help enterprises mitigate cloud
application security threats. For example, cloud access security brokers (CASBs) act as a
gatekeeper to cloud services and enforce granular security policies. Similarly, web application
firewalls (WAFs) and runtime application self-protection (RASP) are used to protect web apps,
APIs, and individual applications.
Additionally, many enterprises continue to leverage point appliances to implement firewalling,
IPS/IDS, URL filtering, and threat detection. However, these solutions aren’t ideal for the
modern cloud-native infrastructure as they are inherently inflexible and tied to specific
locations.
Web Application & API Protection (WAAP) has emerged as a more holistic and cloud-native
solution that combines — and enhances — the functionality of WAFs, RASP, and traditional
point solutions in a holistic multi-cloud platform. With WAAP, enterprises can automate and
scale modern application security in a way legacy tooling simply cannot.
Cloud Application Security Best Practices
Enterprises must take a holistic approach to improve their cloud security posture. There’s no
one-size-fits-all approach that will work for every organization, but there are several cloud
application security best practices that all enterprises can apply.
Here are some of the most important cloud app security best practices enterprises should
consider:
 Leverage MFA: Multi-factor authentication (MFA) is one of the most effective mechanisms
for limiting the risk of account compromise.
 Account for the human aspect: User error is one of the most common causes of data breaches.
Taking a two-pronged approach of user education and implementing security tooling such as
URL filters, anti-malware, and intelligent firewalls can significantly reduce the risk of social
engineering leading to a catastrophic security issue.
 Automate everything: Enterprises should automate cloud application monitoring, incident
response, and configuration as much as possible. Manual workflows are error-prone and a
common cause for oversight or leaked data.
 Enforce the principle of least privilege: User accounts and applications should be configured
to only access the assets required for their business function. Security policies should enforce
the principle of least privilege across all cloud platforms. Leveraging enterprise identity
management solutions and SSO (single-sign-on) can help enterprises scale this cloud
application security best practice.
 Use holistic multi-cloud solutions: Modern enterprise infrastructure is complex and
enterprises need complete visibility to ensure a strong security posture across all platforms.
This means choosing visibility and security tooling that isn’t inherently tied to a given location
(e.g. point appliances) or cloud vendor is essential.
 Don’t depend on signature matching alone: Many threat detection engines and anti-malware
solutions depend on signature matching and basic business logic to detect malicious behavior.
While detecting known threats is useful, in practice depending only on basic signature
matching for threat detection is a recipe for false positives that can lead to alert fatigue and
unnecessarily slow down operations. Additionally, reliance on signature matching alone means
enterprises have little to no protection against zero-day threats that don’t already have a known
signature. Security tooling that can analyze behavior in-context, for example by using an AI
engine, can both reduce false positives and decrease the odds of a zero-day threat being
exploited.
Virtual Machine Security
Virtualized security, or security virtualization, refers to security solutions that are software-
based and designed to work within a virtualized IT environment. This differs from traditional,
hardware-based network security, which is static and runs on devices such as traditional
firewalls, routers, and switches.
In contrast to hardware-based security, virtualized security is flexible and dynamic. Instead of
being tied to a device, it can be deployed anywhere in the network and is often cloud-based.
This is key for virtualized networks, in which operators spin up workloads and applications
dynamically; virtualized security allows security services and functions to move around with
those dynamically created workloads.
Cloud security considerations (such as isolating multitenant environments in public cloud
environments) are also important to virtualized security. The flexibility of virtualized security
is helpful for securing hybrid and multi-cloud environments, where data and workloads
migrate around a complicated ecosystem involving multiple vendors.
Benefits of virtualized security
Virtualized security is now effectively necessary to keep up with the complex security demands
of a virtualized network, plus it’s more flexible and efficient than traditional physical security.
Here are some of its specific benefits:
 Cost-effectiveness: Virtualized security allows an enterprise to maintain a secure
network without a large increase in spending on expensive proprietary hardware.
Pricing for cloud-based virtualized security services is often determined by usage,
which can mean additional savings for organizations that use resources efficiently.
 Flexibility: Virtualized security functions can follow workloads anywhere, which
is crucial in a virtualized environment. It provides protection across multiple data
centers and in multi-cloud and hybrid cloud environments, allowing an
organization to take advantage of the full benefits of virtualization while also
keeping data secure.
 Operational efficiency: Quicker and easier to deploy than hardware-based
security, virtualized security doesn’t require IT teams to set up and configure
multiple hardware appliances. Instead, they can set up security systems through
centralized software, enabling rapid scaling. Using software to run security
technology also allows security tasks to be automated, freeing up additional time
for IT teams.
 Regulatory compliance: Traditional hardware-based security is static and unable
to keep up with the demands of a virtualized network, making virtualized security
a necessity for organizations that need to maintain regulatory compliance.
How does virtualized security work?
Virtualized security can take the functions of traditional security hardware appliances (such as
firewalls and antivirus protection) and deploy them via software. In addition, virtualized
security can also perform additional security functions. These functions are only possible due
to the advantages of virtualization, and are designed to address the specific security needs of a
virtualized environment.
For example, an enterprise can insert security controls (such as encryption) between the
application layer and the underlying infrastructure, or use strategies such as micro-
segmentation to reduce the potential attack surface.
Virtualized security can be implemented as an application directly on a bare metal
hypervisor (a position it can leverage to provide effective application monitoring) or as a
hosted service on a virtual machine. In either case, it can be quickly deployed where it is most
effective, unlike physical security, which is tied to a specific device.
What are the risks of virtualized security?
The increased complexity of virtualized security can be a challenge for IT, which in turn leads
to increased risk. It’s harder to keep track of workloads and applications in a virtualized
environment as they migrate across servers, which makes it more difficult to monitor security
policies and configurations. And the ease of spinning up virtual machines can also contribute
to security holes.
It’s important to note, however, that many of these risks are already present in a virtualized
environment, whether security services are virtualized or not. Following enterprise
security best practices (such as spinning down virtual machines when they are no longer needed
and using automation to keep security policies up to date) can help mitigate such risks.
How is physical security different from virtualized security?
Traditional physical security is hardware-based, and as a result, it’s inflexible and static. The
traditional approach depends on devices deployed at strategic points across a network and is
often focused on protecting the network perimeter (as with a traditional firewall). However, the
perimeter of a virtualized, cloud-based network is necessarily porous and workloads and
applications are dynamically created, increasing the potential attack surface.
Traditional security also relies heavily upon port and protocol filtering, an approach that’s
ineffective in a virtualized environment where addresses and ports are assigned dynamically.
In such an environment, traditional hardware-based security is not enough; a cloud-based
network requires virtualized security that can move around the network along with workloads
and applications.
What are the different types of virtualized security?
There are many features and types of virtualized security, encompassing network
security, application security, and cloud security. Some virtualized security technologies are
essentially updated, virtualized versions of traditional security technology (such as next-
generation firewalls). Others are innovative new technologies that are built into the very fabric
of the virtualized network.
Some common types of virtualized security features include:
 Segmentation, or making specific resources available only to specific applications
and users. This typically takes the form of controlling traffic between different
network segments or tiers.
 Micro-segmentation, or applying specific security policies at the workload level
to create granular secure zones and limit an attacker’s ability to move through the
network. Micro-segmentation divides a data center into segments and allows IT
teams to define security controls for each segment individually, bolstering the data
center’s resistance to attack.
 Isolation, or separating independent workloads and applications on the same
network. This is particularly important in a multitenant public cloud environment,
and can also be used to isolate virtual networks from the underlying physical
infrastructure, protecting the infrastructure from attack.
Cloud infrastructure security
Cloud infrastructure security is the practice of securing resources deployed in a cloud
environment and supporting systems.
Public cloud infrastructure is, in many ways, more vulnerable than on-premises infrastructure
because it can easily be exposed to public networks, and is not located behind a secure network
perimeter. However, in a private or hybrid cloud, security is still a challenge, as there are
multiple security concerns due to the highly automated nature of the environment, and
numerous integration points with public cloud systems.
Cloud infrastructure is made up of at least 7 basic components, including user accounts, servers,
storage systems, and networks. Cloud environments are dynamic, with short-lived resources
created and terminated many times per day. This means each of these building blocks must be
secured in an automated and systematic manner. Read on to learn best practices that can help
you secure each of these components.
Securing Public, Private, and Hybrid Clouds
Cloud security has different implications in different cloud infrastructure models. Here are
considerations for security in each of the three popular models—public cloud, private cloud,
and hybrid cloud.
Public Cloud Security
In a public cloud, the cloud provider takes responsibility for securing the infrastructure, and
provides tools that allow the organization to secure its workloads. Your organization is
responsible for:
 Securing workloads and data, fully complying with relevant compliance standards, and
ensuring all activity is logged to enable auditing.
 Ensuring cloud configurations remain secure, and any new resources on the cloud are similarly
secured, using automated tools such as a Cloud Security Posture Management (CSPM)
platform.
 Understanding which service level agreements (SLA), supplied by your cloud provider, deliver
relevant services and monitoring.
 If you use services, machine images, container images, or other software from third-party
providers, performing due diligence on their security measures and replacing providers if they
are insufficient.
Private Cloud Security
The private cloud model gives you control over all layers of the stack. These resources are
commonly not exposed to the public Internet. This means that you can achieve a certain level
of security using traditional mechanisms that protect the corporate network perimeter.
However, there are additional measures you should take to secure your private cloud:
 Use cloud native monitoring tools to gain visibility over any anomalous behavior in your
running workloads.
 Monitor privileged accounts and resources for suspicious activity to detect insider threats.
Malicious users or compromised accounts can have severe consequences in a private cloud,
because of the ease at which resources can be automated.
 Ensure complete isolation between virtual machines, containers, and host operating systems,
to ensure that compromise of a VM or container does not allow compromise of the entire host.
 Virtual machines should have dedicated NICs or VLANs, and hosts should communicate over
the network using a separate network interface.
 Plan ahead and prepare for hybrid cloud by putting security measures in place to ensure that
you can securely integrate with public cloud services.
Hybrid Cloud Security
Hybrid clouds are a combination of an on-premises data center, public cloud, and private cloud.
The following security considerations are important in a hybrid cloud environment:
 Ensure public cloud systems are secured using all the best practices.
 Private cloud systems should follow private cloud security best practices, as well as traditional
network security measures for the local data center.
 Avoid separate security strategies and tools in each environment—adopt a single security
framework that can provide controls across the hybrid environment.
 Identify all integration points between environments, treat them as high-risk components and
ensure they are secured.
Securing 7 Key Components of Your Cloud Infrastructure
Here are key best practices to securing the key components of a typical cloud environment.
Accounts
Service accounts in the cloud are typically privileged accounts, which may have access to
critical infrastructure. Once compromised, attackers have access to cloud networks and can
access sensitive resources and data.
Service accounts may be created automatically when you create new cloud resources, scale
cloud resources, or stand up environments using infrastructure as code (IaC). The new accounts
may have default settings, which in some cases means weak or no authentication.
Use identity and access management (IAM) to set policies controlling access and
authentication to service accounts. Use a cloud configuration monitoring tool to automatically
detect and remediate non-secured accounts. Finally, monitor usage of sensitive accounts to
detect suspicious activity and respond.
Servers
While a cloud environment is virtualized, behind the scenes it is made up of physical hardware
deployed at multiple geographical locations. This includes physical servers, storage devices,
load balancers, and network equipment like switches and routers.
Here are a few ways to secure a cloud server, typically deployed using a compute service like
Amazon EC2:
 Control inbound and outbound communication—your server should only be allowed to
connect to networks, and specific IP ranges needed for its operations. For example, a database
server should not have access to the public internet, or any other IP, except those of the
application instances it serves.
 Encrypt communications—whether communications go over public networks or within a
secure private network, they should be encrypted to avoid man in the middle (MiTM) attacks.
Never use unsecured protocols like Telnet or FTP. Transmit all data over HTTPS, or other
secure protocols like SCP (Secure Copy) or SFTP (Secure FTP).
 Use SSH keys—avoid accessing cloud servers using passwords, because they are vulnerable
to brute force attacks and can easily be compromised. Use SSH keys, which leverage
public/private key cryptography for more secure access.
 Minimize privileges—only users or service roles that absolutely need access to a server should
be granted access. Carefully control the access level of each account to ensure it can only access
the specific files and folders, and perform the specific operations, needed for its role. Avoid
using the root user—any operation should be performed using identified user accounts.
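Tying two of these points together (SSH keys and a non-root, least-privilege account), here is a small sketch of key-based access to a cloud server using the paramiko library; the host address, user name and key path are placeholders:

import paramiko

client = paramiko.SSHClient()
# In production, load and verify known host keys instead of auto-adding.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

client.connect(
    hostname="203.0.113.10",                      # placeholder server address
    username="deploy",                            # non-root, least-privilege account
    key_filename="/home/deploy/.ssh/id_ed25519",  # private key; no password auth
)
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()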
Hypervisors
A hypervisor runs on physical hardware, and makes it possible to run several virtual machines
(VMs), each with a separate operating system.
All cloud systems are based on hypervisors. Therefore, hypervisors are a key security concern,
because compromise of the hypervisor (an attack known as hyperjacking) gives the attacker
access to all hosts and virtual machines running on it.
In public cloud systems, hypervisor security is the responsibility of the cloud provider, so you
don’t need to concern yourself with it. There is one exception—when running virtualized
workloads on a public cloud, using systems like VMware Cloud, you are responsible for
securing the hypervisor.
In private cloud systems, the hypervisor is always under your responsibility. Here are a few
ways to ensure your hypervisor is secure:
 Ensure machines running hypervisors are hardened, patched, isolated from public networks,
and physically secured in your data center
 Assign least privileges to local user accounts, carefully controlling access to the hypervisor
 Harden, secure, and closely monitor machines running the virtual machine monitor (VMM)
and virtualization management software, such as VMware vSphere
 Secure and monitor shared hardware caches and networks used by the hypervisor
 Pay special attention to hypervisors in development and testing environments—ensure
appropriate security measures are applied when a new hypervisor is deployed to production
Storage
In cloud systems, virtualization is used to abstract storage from hardware systems. Storage
systems become elastic pools of storage, or virtualized resources that can be provisioned and
scaled automatically.
Here are a few ways to secure your cloud storage services:
 Identify which devices or applications connect to cloud storage, which cloud storage services
are used throughout the organization, and map data flows.
 Block access to cloud storage for internal users who don’t need it, and eliminate shadow usage
of cloud services by end users.
 Classify data into sensitivity levels—a variety of automated tools are available. This can help
you focus on data stored in cloud storage that has security or compliance implications.
 Remove unused data—cloud storage can easily scale and it is common to retain unnecessary
data, or entire data volumes or snapshots that are no longer used. Identify this unused data and
eliminate it to reduce the attack surface and your compliance obligations.
 Carefully control access to data using identity and access management (IAM) systems, and
applying consistent security policies for cloud and on-premises systems.
 Use cloud data loss prevention (DLP) tools to detect and block suspicious data transfers, data
modification or deletion, or data access, whether malicious or accidental.
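As a concrete example of restricting who can reach cloud storage (assuming AWS and boto3; the bucket name is a placeholder), all public access to an S3 bucket can be blocked:

import boto3

s3 = boto3.client("s3")

# Deny all forms of public access to the bucket, regardless of ACLs
# or bucket policies applied later.
s3.put_public_access_block(
    Bucket="example-data-bucket",   # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)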
Databases
Databases in the cloud can easily be exposed to public networks, and almost always contain
sensitive data, making them a significant security risk. Because databases are closely integrated
with the applications they serve and other cloud systems, those adjacent systems must also be
secured to prevent compromise of the database.
Here are a few ways to improve security of databases in the cloud:
 Hardening configuration and instances—if you deploy a database yourself in a compute
instance, it is your responsibility to harden the instance and securely configure the database. If
you use a managed database service, these concerns are typically handled by the cloud
provider.
 Database security policies—ensure database settings are in line with your organization’s
security and compliance policies. Map your security requirements and compliance obligations
to specific settings on cloud database systems. Use automated tools like CSPM to ensure secure
settings are applied to all database instances.
 Network access—as a general rule, databases should never be exposed to public networks and
should be isolated from unrelated infrastructure. If possible, a database should only accept
connections from the specific application instances it is intended to serve.
 Permissions—grant only the minimal level of permissions to users, applications and service
roles. Avoid “super users” and administrative users with blanket permissions. Each
administrator should have access to the specific databases they work on.
 End user device security—security is not confined to the cloud environment. You should be
aware what endpoint devices administrators are using to connect to your database. Those
devices should be secured, and you should disallow connections from unknown or untrusted
devices, and monitor sessions to detect suspicious activity.
Network
Cloud systems often connect to public networks, but also use virtual networks to enable
communication between components inside a cloud. All public cloud providers let you set up
a secure, virtual private network for your cloud resources (called a VPC in Amazon and a
VNet in Azure).
Here are a few ways you can secure cloud networks:
 Use security groups to define rules that define what traffic can flow between cloud resources.
Keep in mind that security groups are tightly connected to compute instances, and compromise
of an instance grants access to the security group configuration, so additional security layers
are needed.
 Use Network Access Control Lists (ACL) to control access to virtual private networks. ACLs
provide both allow and deny rules, and provide stronger security controls than security groups.
 Use additional security solutions such as firewalls as a service (FWaaS) and web application
firewalls (WAF) to actively detect and block malicious traffic.
 Deploy Cloud Security Posture Management (CSPM) tools to automatically review cloud
networks, detect non-secure or vulnerable configurations and remediate them.
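For example, a security group that admits only HTTPS from a known address range could be created as follows (assuming AWS and boto3; the VPC ID and CIDR range are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create a security group and allow inbound HTTPS only from one CIDR;
# everything else inbound is denied by default.
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS from the corporate range only",
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "198.51.100.0/24"}],  # placeholder range
    }],
)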
Kubernetes
When running Kubernetes on the cloud, it is almost impossible to separate the Kubernetes
cluster from other cloud computing layers. These include the application or code itself,
container images, compute instances, and network layers. Each layer is built on top of the
previous layer, and all layers must be protected for defense in depth.
The Kubernetes project recommends approaching security from four angles, known as the “4
Cs”:
 Code—ensuring code in containers is not malicious and uses secure coding practices
 Containers—scanning container images for vulnerabilities, and protecting containers at
runtime to ensure they are configured securely according to best practices
 Clusters—protecting Kubernetes master nodes and ensuring cluster configuration is in line
with security best practices
 Cloud—using cloud provider tools to secure the underlying infrastructure, including compute
instances and virtual private clouds (VPC)
Compliance with security best practices, industry standards and benchmarks, and internal
organizational strategies also faces challenges in a cloud-native environment.
In addition to maintaining compliance, organizations must also provide evidence of
compliance. You need to adjust your strategy so that your Kubernetes environment fits the
controls originally created for your existing application architecture.
Aqua Cloud Security Posture Management (CSPM)
Scan, monitor and remediate configuration issues in public cloud accounts according to best
practices and compliance standards, across AWS, Azure, Google Cloud, and Oracle Cloud.
Eliminate misconfigurations in your public cloud accounts
Aqua CSPM provides automated, multi-cloud security posture management to scan, validate,
monitor, and remediate configuration issues in your public cloud accounts. Aqua CSPM
ensures the use of best practices and compliance standards across AWS, Azure, Google Cloud,
and Oracle Cloud — including Infrastructure-as-code templates.
Protect against:
 Servers exposed publicly to the internet
 Unencrypted data storage
 Lack of least-privilege policies
 Poor password policies or missing MFA
 Misconfigured backup/restore settings
Multi-cloud visibility – Gain visibility across all your cloud accounts
Aqua CSPM continuously audits your cloud accounts for security risks and misconfigurations
to assess your infrastructure risk and compliance posture. It provides checks across hundreds
of configuration settings and compliance best practices to ensure consistent, unified multi-
cloud security.
Rapid remediation – Find and fix misconfigurations before they’re exploited
Aqua provides self-securing capabilities to ensure your cloud accounts don’t drift out of
compliance. Get detailed, actionable advice and alerts, or choose automated remediation of
misconfigured services with granular control over chosen fixes.
Enterprise scale – Unify security across VMs, containers, and serverless
Protect applications in runtime on any cloud, orchestrator, or operating system using a zero-
trust model that provides granular control to accurately detect and stop attacks. Leverage
micro-services concepts to enforce immutability and micro-segmentation.
Infrastructure Security – The Network Level
Network Infrastructure Security, typically applied to enterprise IT environments, is a
process of protecting the underlying networking infrastructure by installing preventative
measures to deny unauthorized access, modification, deletion, and theft of resources and data.
These security measures can include access control, application security, firewalls, virtual
private networks (VPN), behavioral analytics, intrusion prevention systems, and wireless
security.
How does Network Infrastructure Security work?
Network Infrastructure Security requires a holistic approach of ongoing processes and practices
to ensure that the underlying infrastructure remains protected. The Cybersecurity and
Infrastructure Security Agency (CISA) recommends considering several approaches when
addressing what methods to implement.
 Segment and segregate networks and functions - Particular attention should be
paid to the overall infrastructure layout. Proper segmentation and segregation is an
effective security mechanism to limit potential intruder exploits from propagating
into other parts of the internal network. Using hardware such as routers can
separate networks creating boundaries that filter broadcast traffic. These micro-
segments can then further restrict traffic or even be shut down when attacks are
detected. Virtual separation is similar in design as physically separating a network
with routers but without the required hardware.
 Limit unnecessary lateral communications - Not to be overlooked is the peer-
to-peer communications within a network. Unfiltered communication between
peers could allow intruders to move about freely from computer to computer. This
affords attackers the opportunity to establish persistence in the target network by
embedding backdoors or installing applications.
 Harden network devices - Hardening network devices is a primary way to
enhance network infrastructure security. It is advised to adhere to industry
standards and best practices regarding network encryption, available services,
securing access, strong passwords, protecting routers, restricting physical access,
backing up configurations, and periodically testing security settings.
 Secure access to infrastructure devices - Administrative privileges are granted
to allow certain trusted users access to resources. Ensure the authenticity of
users by implementing multi-factor authentication (MFA), managing privileged
access, and managing administrative credentials.
 Perform out-of-band (OoB) network management - OoB management
implements dedicated communications paths to manage network devices remotely.
This strengthens network security by separating user traffic from management
traffic.
 Validate integrity of hardware and software - Gray market products threaten IT
infrastructure by allowing a vector for attack into a network. Illegitimate products
can be pre-loaded with malicious software waiting to be introduced into an
unsuspecting network. Organizations should regularly perform integrity checks on
their devices and software.
Why is Network Infrastructure Security important?
The greatest threat to network infrastructure security comes from hackers and malicious
applications that attack and attempt to gain control over the routing infrastructure. Network
infrastructure components include all the devices needed for network communications,
including routers, firewalls, switches, servers, load-balancers, intrusion detection systems
(IDS), domain name system (DNS), and storage systems. Each of these systems presents an
entry point to hackers who want to place malicious software on target networks.
 Gateway Risk: Hackers who gain access to a gateway router can monitor, modify,
and deny traffic in and out of the network.
 Infiltration Risk: Gaining more control from the internal routing and switching
devices, a hacker can monitor, modify, and deny traffic between key hosts inside
the network and exploit the trusted relationships between internal hosts to move
laterally to other hosts.
Although there are any number of damaging attacks that hackers can inflict on a network,
securing and defending the routing infrastructure should be of primary importance in
preventing deep system infiltration.
What are the benefits of Network Infrastructure Security?
Network infrastructure security, when implemented well, provides several key benefits to a
business’s network.
 Improved resource sharing saves on costs: Due to protection, resources on the
network can be utilized by multiple users without threat, ultimately reducing the
cost of operations.
 Shared site licenses: Security ensures that site licenses would be cheaper than
licensing every machine.
 File sharing improves productivity: Users can securely share files across the
internal network.
 Internal communications are secure: Internal email and chat systems will be
protected from prying eyes.
 Compartmentalization and secure files: User files and data are now protected
from each other, compared with using machines that multiple users share.
 Data protection: Data back-up to local servers is simple and secure, protecting
vital intellectual property.
What are the different types of Network Infrastructure Security?
A variety of approaches to network infrastructure security exist; it is best to combine multiple
approaches to broaden network defense.
 Access Control: The prevention of unauthorized users and devices from accessing
the network.
 Application Security: Security measures placed on hardware and software to lock
down potential vulnerabilities.
 Firewalls: Gatekeeping devices that can allow or prevent specific traffic from
entering or leaving the network.
 Virtual Private Networks (VPN): VPNs encrypt connections between endpoints
creating a secure “tunnel” of communications over the internet.
 Behavioral Analytics: These tools automatically detect network activity that
deviates from usual activities.
 Wireless Security: Wireless networks are less secure than hardwired networks,
and with the proliferation of new mobile devices and apps, there are ever-
increasing vectors for network infiltration.
Infrastructure Security – The Host Level
When reviewing host security and assessing risks, the context of cloud service delivery models
(SaaS, PaaS, and IaaS) and deployment models (public, private, and hybrid) should be considered
[7]. The host security responsibilities in SaaS and PaaS services are transferred to the provider
of cloud services. IaaS customers are primarily responsible for securing the hosts provisioned in
the cloud (virtualization software security, customer guest OS or virtual server security).
Infrastructure Security – The Application Level
Application or software security should be a critical element of a security program. Most
enterprises with information security programs have yet to institute an application security
program to address this realm. Designing and implementing applications aimed at deployment on
a cloud platform will require existing application security programs to re-evaluate current
practices and standards. The application security spectrum ranges from standalone single-user
applications to sophisticated multiuser e-commerce applications. This level is responsible for
managing [7], [9], [10]:
• Application-level security threats;
• End user security;
• SaaS application security;
• PaaS application security;
• Customer-deployed application security
• IaaS application security
• Public cloud security limitations
In summary, the issues of infrastructure security in cloud computing lie in defining and
provisioning which security aspects each party is responsible for delivering.
Application level security refers to those security services that are invoked at the interface
between an application and a queue manager to which it is connected.
These services are invoked when the application issues MQI calls to the queue manager. The
services might be invoked, directly or indirectly, by the application, the queue manager, another
product that supports IBM® MQ, or a combination of any of these working together.
Application level security is also known as end-to-end security or message level security.
Here are some examples of application level security services (a small code sketch follows the list below):
 When an application puts a message on a queue, the message descriptor contains a
user ID associated with the application. However, there is no data present, such as an
encrypted password, that can be used to authenticate the user ID. A security service
can add this data. When the message is eventually retrieved by the receiving
application, another component of the service can authenticate the user ID using the
data that has travelled with the message. This is an example of an identification and
authentication service.
 A message can be encrypted when it is put on a queue by an application and decrypted
when it is retrieved by the receiving application. This is an example of a
confidentiality service.
 A message can be checked when it is retrieved by the receiving application. This
check determines whether its contents have been deliberately modified since it was
first put on a queue by the sending application. This is an example of a data integrity
service.
 Planning for Advanced Message Security
Advanced Message Security (AMS) is a component of IBM MQ that provides a high
level of protection for sensitive data flowing through the IBM MQ network, while not
impacting the end applications.
 Providing your own application level security
You can provide your own application level security services. To help you implement
application level security, IBM MQ provides two exits, the API exit and the API-
crossing exit.
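The identification, confidentiality, and data integrity services listed above can be approximated outside IBM MQ as well. The following minimal Python sketch (not the IBM MQ API; the queue, shared key, and function names are hypothetical) shows a combined identification and data integrity service: a keyed MAC travels with the message and is verified when the message is retrieved.

```python
import hmac
import hashlib
import json

SHARED_KEY = b"example-shared-secret"  # assumption: sender and receiver hold this key

def put_message(queue, user_id, payload):
    """Attach a keyed MAC so the receiver can verify the user ID and contents."""
    body = json.dumps({"user_id": user_id, "payload": payload}).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    queue.append({"body": body, "mac": mac})

def get_message(queue):
    """Retrieve a message and verify it was not modified in transit."""
    msg = queue.pop(0)
    expected = hmac.new(SHARED_KEY, msg["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["mac"]):
        raise ValueError("integrity check failed: message was modified")
    return json.loads(msg["body"])

# Usage: a plain list stands in for the queue in this sketch.
q = []
put_message(q, user_id="app1", payload="order #42")
print(get_message(q))  # {'user_id': 'app1', 'payload': 'order #42'}
```

A confidentiality service would additionally encrypt the message body before it is put on the queue and decrypt it on retrieval; that step is omitted here to keep the sketch short.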
Azure Firewall
Azure Firewall is a managed, cloud-based network security service that protects your Azure
Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability
and unrestricted cloud scalability.
You can centrally create, enforce, and log application and network connectivity policies across
subscriptions and virtual networks. Azure Firewall uses a static public IP address for your virtual
network resources allowing outside firewalls to identify traffic originating from your virtual
network. The service is fully integrated with Azure Monitor for logging and analytics.
According to the Shared Responsibilities for Cloud Computing, while Microsoft is responsible
for maintaining the security of the infrastructure on which their cloud runs, users are also
responsible for the resources that they use on the cloud. Users are, thus, required to make use
of services that ensure the security of the resources on the cloud.
There are measures to tackle security challenges in the cloud as well, much like the firewall on
your Windows PC that you may have seen, on multiple occasions, warning you that it has blocked
an application deemed a threat from accessing the network.
Azure Firewall is one such network security service from Microsoft Azure that monitors and
takes action for unwanted network activities on the cloud.
Since Azure Firewall is a cloud-based service, it has the capabilities to be highly available and
scaled-up as and when required. Azure Firewall is also integrated with Azure Monitor so that
the latter’s abilities in logging and analytics can be used for maintaining strict security.
Azure Firewall gives a unified solution to create and enforce policies for secure network
connection across services and subscriptions in Azure.
There is also an Azure Web Application Firewall that is specific to Application Gateway in
Azure. While the Azure Firewall looks over the whole cloud against exploitations, the Azure
Web Application Firewall works specifically to protect the web apps against vulnerabilities.
Features of Azure Firewall
The features of Azure Firewall that make it stand out are:
 High availability: No extra configuration or additional services are required for Azure
Firewall. It has very high uptime and is fully managed.
 Availability zones: A firewall can be made available across multiple availability zones
or it can be restricted to particular zones based on your requirements. There is no
additional charge for this, however, the data transfer rates can change depending on the
zones.
 Scalability: The firewall can be scaled for adjusting to the varying network
requirements.
 Traffic filtering rules: Rules can be specified based on IP address, ports, etc., for
allowing or preventing connections. Azure Firewall can distinguish among packets
from different connections and enforce the rules to allow or deny them (a conceptual
sketch of this kind of rule evaluation follows this feature list).
 FQDN tags: Fully qualified domain name (FQDN) tags can be given to trusted sources
that need to be allowed through the firewall. Rules can be created based on this, which
will filter traffic from qualified domains to pass through.
 Service tags: These are labels that indicate a range of IP addresses for Azure Key
Vault, Container Registry, and other services. These are Microsoft-managed and cannot
be changed. The firewall allows filtering rules based on these.
 Threat intelligence: Microsoft maintains a threat intelligence feed that lists
sources and domains deemed malicious. Based on this feed, Azure Firewall can deny
connections or alert the user.
 Multiple public IP addresses: Multiple IP addresses, up to 250, can be added to Azure
Firewall. This enables the features of DNAT and SNAT in your firewall.
 Azure Monitor logging: Azure Firewall is tightly integrated with Azure Monitor.
Hence, all events are logged and these logs can be archived to storage accounts or
streamed to event hubs, etc.
 Web categories: The administrators can allow or deny access to certain websites based
on the category to which they belong. This can be social media websites, gaming
websites, and others.
 Certifications: Payment Card Industry (PCI), Service Organization Controls (SOC),
International Organization for Standardization (ISO), and ICSA Labs certifications are
all available for Azure Firewall.
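To make the traffic filtering and FQDN tag features above concrete, here is a purely illustrative Python sketch of how a rule table might be evaluated against a connection. The rule names and fields are hypothetical and do not represent the Azure Firewall API or its actual rule engine.

```python
# Conceptual sketch of rule-based traffic filtering (not the Azure Firewall API).
RULES = [
    # Each rule names an action taken when all of its fields match the connection.
    {"name": "allow-web",   "action": "allow", "dest_port": 443, "dest_fqdn": "*.microsoft.com"},
    {"name": "deny-telnet", "action": "deny",  "dest_port": 23},
]

def matches(rule, conn):
    """A connection matches a rule only if every field named in the rule agrees."""
    if "dest_port" in rule and rule["dest_port"] != conn.get("dest_port"):
        return False
    if "dest_fqdn" in rule:
        suffix = rule["dest_fqdn"].lstrip("*")
        if not conn.get("dest_fqdn", "").endswith(suffix):
            return False
    return True

def evaluate(conn):
    """Return the action of the first matching rule; deny by default."""
    for rule in RULES:
        if matches(rule, conn):
            return rule["action"]
    return "deny"

print(evaluate({"dest_port": 443, "dest_fqdn": "update.microsoft.com"}))  # allow
print(evaluate({"dest_port": 23}))                                        # deny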
Azure Firewall vs NSG
First of all, you need to know what an NSG is. NSG stands for network security group; it can
be used in filtering network traffic in the Azure cloud. NSG contains rules based on IP
addresses, ports, etc., which can allow or deny connections to and from Azure Resources.
Azure Firewall and NSG seem pretty similar; so, let us compare them side by side.
Features | Azure Firewall | NSG
Rule-based filtering | Supported | Supported
FQDN tags | Supported | Not supported
Service tags | Supported | Supported
Threat-intelligence-based filtering | Supported | Not supported
Destination and source network address translation (DNAT and SNAT) | Supported | Not supported
Azure Monitor integration | Supported | Supported
From the comparison, it can be inferred that NSG lacks some features that Firewall has, and
this makes Azure Firewall a more robust solution for cloud security.
Even though NSG lacks a few features, Azure Firewall and NSG are not mutually exclusive,
but they can complement each other in providing the best protection for your Azure cloud
resources.
Azure Firewall Limitations
Even though Azure Firewall is a rich and robust service, it still has some limitations. The
limitations are:
 Although it supports threat-intelligence-based filtering, Azure Firewall does not have
IPS support, which many organizations require.
 Azure Firewall uses public DNS servers to look up domains, and it cannot be configured
to use internal DNS servers.
 Azure Firewall can also be costly for some businesses.
Load Balancer
Cloud Load balancing is the process of distributing workloads and computing resources
across one or more servers. This kind of distribution ensures maximum throughput in minimum
response time. The workload is segregated among two or more servers, hard drives, network
interfaces or other computing resources, enabling better resource utilization and system
response time. Thus, for a high traffic website, effective use of cloud load balancing can ensure
business continuity. The common objectives of using load balancers are:
 To maintain system stability.
 To improve system performance.
 To protect against system failures.
Cloud providers like Amazon Web Services (AWS), Microsoft Azure and Google offer
cloud load balancing to facilitate easy distribution of workloads. For example, AWS offers Elastic
Load Balancing (ELB) technology to distribute traffic among EC2 instances. Most
AWS-powered applications have ELBs installed as a key architectural component.
Similarly, Azure’s Traffic Manager allocates its cloud servers’ traffic across multiple data
centres.
How does load balancing work?
Here, load refers to not only the website traffic but also includes CPU load, network load and
memory capacity of each server. A load balancing technique makes sure that each system in
the network has roughly the same amount of work at any instant of time, so that none of them
is excessively overloaded or under-utilized.
The load balancer distributes requests depending upon how busy each server or node is. In the
absence of a load balancer, clients must wait while their requests queue up on a single server,
which makes for a slow and frustrating experience.
Various information like jobs waiting in queue, CPU processing rate, job arrival rate etc. are
exchanged between the processors during the load balancing process. Failure in the right
application of load balancers can lead to serious consequences, data getting lost being one of
them.
Different companies may use different load balancers and multiple load balancing algorithms
like static and dynamic load balancing. One of the most commonly used methods is Round-
robin load balancing.
It forwards client requests to each connected server in turn. On reaching the end of the list, the
load balancer loops back and starts again from the top. The major benefit is its ease of implementation.
The load balancers check the system heartbeats during set time intervals to verify whether each
node is performing well or not.
Different Types of Load Balancing Algorithms in Cloud Computing:
1. Static Algorithm
Static algorithms are built for systems with very little variation in load. The entire traffic is
divided equally between the servers in the static algorithm. This algorithm requires in-depth
knowledge of server resources for better performance of the processor, which is determined at
the beginning of the implementation.
However, the decision of load shifting does not depend on the current state of the system. One
of the major drawbacks of the static load balancing algorithm is that the load assignment is fixed
once the tasks have been created and cannot be shifted to other devices at runtime.
2. Dynamic Algorithm
The dynamic algorithm first finds the lightest-loaded server in the entire network and gives it priority
for load balancing. This requires real-time communication with the network, which adds to the
system's traffic. Here, the current state of the system is used to control the load.
The characteristic of dynamic algorithms is to make load transfer decisions in the current
system state. In this system, processes can move from a highly used machine to an underutilized
machine in real time.
3. Round Robin Algorithm
As the name suggests, round robin load balancing algorithm uses round-robin method to assign
jobs. First, it randomly selects the first node and assigns tasks to other nodes in a round-robin
manner. This is one of the easiest methods of load balancing.
Processes are assigned to processors circularly, without defining any priority. It gives a fast
response when the workload is distributed uniformly among the processes. In practice, however,
processes have different loading times, so some nodes may be heavily loaded while others remain
under-utilised.
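As a minimal illustration of the round-robin method just described, the following Python sketch cycles requests through a fixed list of hypothetical server names; a real load balancer would add health checks and connection handling on top of this.

```python
import itertools

# Hypothetical back-end servers; a real balancer would discover these dynamically.
servers = ["server-a", "server-b", "server-c"]
next_server = itertools.cycle(servers)

def route(request_id):
    """Assign each incoming request to the next server in the rotation."""
    target = next(next_server)
    print(f"request {request_id} -> {target}")
    return target

for i in range(5):
    route(i)
# request 0 -> server-a, 1 -> server-b, 2 -> server-c, 3 -> server-a, ...
```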
4. Weighted Round Robin Load Balancing Algorithm
Weighted round robin load balancing algorithms were developed to address the most
challenging issues of round robin algorithms. In this algorithm, each server is assigned a
weight, and tasks are distributed according to the weight values.
Processors that have a higher capacity are given a higher weight. Therefore, the higher-capacity
servers will get more tasks. When the full load level is reached, the servers will receive stable
traffic.
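A minimal sketch of the weighted variant, assuming hypothetical capacity weights: servers with a higher weight appear more often in the rotation and therefore receive proportionally more tasks.

```python
import itertools

# Hypothetical servers with capacity weights (higher weight = more requests).
weights = {"large-server": 3, "small-server": 1}

# Expand the rotation so each server appears as many times as its weight.
rotation = [name for name, w in weights.items() for _ in range(w)]
next_server = itertools.cycle(rotation)

for request_id in range(8):
    print(f"request {request_id} -> {next(next_server)}")
# large-server receives three requests for every one sent to small-server.
```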
5. Opportunistic Load Balancing Algorithm
The opportunistic load balancing algorithm allows each node to be busy. It never considers the
current workload of each system. Regardless of the current workload on each node, OLB
distributes all unfinished tasks to these nodes.
Tasks are processed slowly under OLB because it does not take into account the execution
time on each node, which causes bottlenecks even when some nodes are free.
6. Minimum to Minimum (Min-Min) Load Balancing Algorithm
Under the min-min load balancing algorithm, the minimum completion time is first computed for
every pending task. Among these, the task with the overall minimum value is selected, and it is
scheduled on the machine that achieves that minimum time.
The ready time of that machine is then updated for the remaining tasks, and the scheduled task
is removed from the list. This process continues until the final task is assigned. This algorithm
works best where many small tasks outweigh large tasks.
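The following Python sketch illustrates the min-min idea, assuming the expected execution time of every task on every machine is known in advance; the task and machine names and the times are hypothetical.

```python
# Expected execution time of each task on each machine (hypothetical values).
exec_time = {
    "t1": {"m1": 4, "m2": 6},
    "t2": {"m1": 3, "m2": 2},
    "t3": {"m1": 7, "m2": 5},
}
ready_time = {"m1": 0, "m2": 0}   # when each machine next becomes free
unscheduled = set(exec_time)
schedule = []

while unscheduled:
    # For every unscheduled task, compute its minimum completion time over all machines,
    # then pick the task whose minimum is smallest (the "min-min" choice).
    task, machine, finish = min(
        ((t, m, ready_time[m] + exec_time[t][m]) for t in unscheduled for m in exec_time[t]),
        key=lambda x: x[2],
    )
    schedule.append((task, machine, finish))
    ready_time[machine] = finish   # update the machine's ready time
    unscheduled.remove(task)

print(schedule)  # [('t2', 'm2', 2), ('t1', 'm1', 4), ('t3', 'm2', 7)]
```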
Load balancing solutions can be categorized into two types -
o Software-based load balancers: Software-based load balancers run on standard
hardware (desktop, PC) and standard operating systems.
o Hardware-based load balancers: Hardware-based load balancers are dedicated boxes
that contain application-specific integrated circuits (ASICs) optimized for a particular
use. ASICs allow network traffic to be forwarded at high speeds and are often used for
transport-level load balancing because hardware-based load balancing is faster than a
software solution.
Major Examples of Load Balancers -
o Direct Routing Request Dispatch Technique: This method of request dispatch is
similar to that implemented in IBM's NetDispatcher. A real server and the load balancer
share a virtual IP address. The load balancer uses an interface configured with the virtual
IP address to accept request packets and route the packets directly to the selected
server.
o Dispatcher-Based Load Balancing Cluster: A dispatcher performs smart load
balancing using server availability, workload, capacity and other user-defined
parameters to regulate where TCP/IP requests are sent. The dispatcher module of a load
balancer can split HTTP requests among different nodes in a cluster. The dispatcher
divides the load among multiple servers in a cluster, so services from different nodes
act like a virtual service on only one IP address; consumers interact with the cluster as if it
were a single server, without knowledge of the back-end infrastructure.
o Linux Virtual Server Load Balancer: This is an open-source, enhanced load balancing
solution used to build highly scalable and highly available network services such as
HTTP, POP3, FTP, SMTP, media and caching, and Voice over Internet Protocol (VoIP).
It is a simple and powerful product designed for load balancing and fail-over.
The load balancer itself is the primary entry point to the server cluster system. It can
execute Internet Protocol Virtual Server (IPVS), which implements transport-layer load
balancing in the Linux kernel, also known as layer-4 switching.
Types of Load Balancing
You will need to understand the different types of load balancing for your network. Server load
balancing is for relational databases, global server load balancing distributes requests across
servers in different geographic locations, and DNS load balancing ensures domain name
functionality. Load balancing can also be performed by cloud-based balancers.
Network Load Balancing
Network load balancing uses network layer information to decide where network traffic should
be sent. This is accomplished through Layer 4 load balancing, which handles TCP/UDP traffic.
It is the fastest load balancing solution, but it cannot make routing decisions based on the
content of the traffic.
HTTP(S) load balancing
HTTP(S) load balancing is the oldest type of load balancing, and it relies on Layer 7. This
means that load balancing operates at the application layer. It is the most flexible type of load
balancing because it lets you make delivery decisions based on information retrieved from
HTTP requests.
Internal Load Balancing
It is very similar to network load balancing, but is leveraged to balance the infrastructure
internally.
Load balancers can be further divided into hardware, software and virtual load balancers.
Hardware Load Balancer
It relies on dedicated physical hardware to distribute the network and application traffic.
These devices can handle a large traffic volume, but they come with a hefty price tag and
have limited flexibility.
Software Load Balancer
It can be an open source or commercial form and must be installed before it can be used. These
are more economical than hardware solutions.
Virtual Load Balancer
It differs from a software load balancer in that it deploys the software of a hardware load-
balancing device on a virtual machine.
Advantages of Cloud Load Balancing
a) High Performing applications
Cloud load balancing techniques, unlike their traditional on-premises counterparts, are less
expensive and simpler to implement. Enterprises can make their client applications work faster
and deliver better performance, at potentially lower costs.
b) Increased scalability
Cloud load balancing takes advantage of the cloud's scalability and agility to handle website
traffic. By using efficient load balancers, you can easily match increased user traffic and distribute
it among various servers or network devices. This is especially important for e-commerce
websites, which deal with thousands of visitors every second and need effective load balancers
to distribute workloads during sales and other promotional offers.
c) Ability to handle sudden traffic spikes
A university website that normally runs smoothly can go down completely during a result
declaration, because too many requests arrive at the same time. If the site uses cloud load
balancers, it does not need to worry about such traffic surges. No matter how large the load is,
it can be distributed among different servers to generate maximum throughput in less response
time.
d) Business continuity with complete flexibility
The basic objective of using a load balancer is to save or protect a website from sudden outages.
When the workload is distributed among various servers or network units, even if one node
fails the burden can be shifted to another active node.
Thus, with increased redundancy, scalability and other features load balancing easily handles
website or application traffic.
Traffic Manager
Microsoft Azure Traffic Manager enables customers to control the user traffic distribution of
multiple service endpoints situated in data centers across the world. Cloud services, Web Apps,
and Azure VMs are among the service endpoints supported by Azure Traffic Manager.
Users can utilize both Azure endpoints and non-Azure, external endpoints. Based on the selected
traffic-routing method, Traffic Manager uses DNS (Domain Name System) to send client requests
to the most appropriate endpoint.
Why do we use Azure Traffic Manager?
 Depending on the selected routing method, Azure Traffic Manager selects an
endpoint.
 To fulfill the needs of varied applications, it provides a wide range of traffic-routing
techniques.
 After choosing an endpoint, the client is immediately connected to the appropriate
service point.
 Additionally included are automatic failover and endpoint health checks.
 Additionally, it enables you to build extremely robust applications that can continue to
function even if the entire Azure region goes down.
Features of the Azure Traffic Manager
It is used to deliver network traffic load balancing and management services in cloud-based
systems. Azure Traffic Manager is mostly used to:
 Ensure Availability and Reduce Downtime – Traffic Manager supports automated
failover for Azure Cloud Services, Azure Websites, and other defined endpoints.
 Upgrade / Maintain Endpoints Without Downtime – Traffic Manager allows
endpoints to be automatically paused when there is no activity, allowing developers and
IT administrators to upgrade and test the endpoint without downtime.
 Combination of hybrid applications – Microsoft Azure Traffic Manager can work with
non-Azure destinations, so it can be used with hybrid cloud and on-premises deployments
such as “migrate-to-cloud,” “burst-to-cloud,” and “failover-to-cloud” scenarios.
 Distribute Traffic – Traffic may be dispersed over many data centers or Azure
destinations using nested profiles.
 Increase the availability of applications- By keeping track of your endpoints and
offering automated failover when one fails, it guarantees the availability of your critical
programs.
 Enhance application performance- Running cloud services or websites in data
centers all around the world is made possible by Azure. Traffic is directed to the
endpoint with the smallest client propagation delay, which enhances application
performance.
Routing Methods of the Azure Traffic Manager
The traffic is distributed by Azure Traffic Manager depending on one of six traffic-routing
mechanisms that decide which destination is returned in the DNS response.
There are the following traffic routing strategies available (a short sketch of the priority and
weighted methods follows this list):
 Priority
 The primary service endpoint has the top priority with all traffic when you select the
priority routing strategy, which displays a prioritized list of service endpoints.
 Traffic is forwarded to the endpoint with the next greatest priority if the primary service
endpoint is unavailable.
 Weighted
 Weighted routing is used when you want to evenly distribute traffic or apply pre-
determined weights to a group of destinations.
 This traffic-routing method involves giving each endpoint a weight, which is a number
between 1 and 1,000, in the Microsoft Azure Traffic Manager profile option.
 Performance
 By sending traffic to the location closest to the user, this traffic routing technology helps
various apps respond more quickly.
 The ‘nearest’ endpoint isn’t necessarily the one that is physically closest.
 Instead, the “Performance” traffic-routing strategy chooses the nearest
destination by analysing network latency.
 Multivalue
 You can select MultiValue if your Azure Traffic Manager profiles only include IPv4 or
IPv6 addresses as destinations.
 All appropriate endpoints are retrieved when a request for this profile is made.
 Geographic
 In geographic routing, a set of geographic areas must be assigned to each endpoint
associated with that profile.
 Any requests from such locations are only directed to that endpoint when a region or
group of regions is assigned to it.
 Subnet
 Use the Subnet traffic-routing method to associate groups of end-user IP address ranges
with a particular endpoint inside an Azure Traffic Manager profile.
 A request is received, and the endpoint that responds is the one that corresponds to the
request’s originating IP address.
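The priority and weighted methods above can be sketched in a few lines of Python. The endpoint names, weights, and health flags below are hypothetical, and the real Traffic Manager makes these decisions at the DNS level rather than in application code.

```python
import random

# Hypothetical endpoints with a priority, a weight, and a health flag.
endpoints = [
    {"name": "eastus", "priority": 1, "weight": 700, "healthy": False},
    {"name": "westeu", "priority": 2, "weight": 200, "healthy": True},
    {"name": "seasia", "priority": 3, "weight": 100, "healthy": True},
]

def priority_routing(eps):
    """Return the healthy endpoint with the highest priority (lowest number)."""
    healthy = [e for e in eps if e["healthy"]]
    return min(healthy, key=lambda e: e["priority"])["name"]

def weighted_routing(eps):
    """Pick a healthy endpoint at random, proportionally to its weight."""
    healthy = [e for e in eps if e["healthy"]]
    return random.choices(healthy, weights=[e["weight"] for e in healthy], k=1)[0]["name"]

print(priority_routing(endpoints))  # "westeu": eastus is unhealthy, so fail over
print(weighted_routing(endpoints))  # "westeu" roughly twice as often as "seasia"
```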
Benefits of Azure Traffic Manager
 Enhances the usability of critical applications.
 Endpoint monitoring and automated failover provide extraordinarily high application
availability in the event that any endpoint fails.
 Improves responsiveness for high-performance applications.
 The Traffic Manager distributes traffic according to the optimum traffic-routing
algorithm for the circumstance.
 Endpoint health is constantly monitored.
 Automatic failover in the event that the endpoints fail.
Network Security Groups and Application Security Groups
Network Security Group
Azure network security group (NSG) is used to filter network traffic between Azure resources
in an Azure virtual network. We can create security rules to allow or deny inbound network
traffic to, or outbound network traffic from, several types of Azure resources. For each rule,
you can specify source and destination, port, and protocol.
Security rules are evaluated and applied based on the five-tuple (source, source port,
destination, destination port, and protocol) information. You can’t create two security rules
with the same priority and direction. A flow record is created for existing connections.
Communication is allowed or denied based on the connection state of the flow record. The flow
record allows a network security group to be stateful.
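A minimal Python sketch of five-tuple, priority-ordered rule evaluation, with hypothetical rule and flow values; it is illustrative only and does not model NSG flow records, the built-in default rules, or the actual Azure rule syntax.

```python
import ipaddress

# Rules are evaluated in priority order (lower number first), as in an NSG.
rules = [
    {"priority": 100, "direction": "Inbound", "protocol": "TCP",
     "src": "10.0.0.0/24", "dst_port": 443, "action": "Allow"},
    {"priority": 4096, "direction": "Inbound", "protocol": "*",
     "src": "*", "dst_port": "*", "action": "Deny"},   # stand-in for a deny-all default
]

def rule_matches(rule, flow):
    """Match the fields specified in the rule against the flow."""
    if rule["direction"] != flow["direction"]:
        return False
    if rule["protocol"] not in ("*", flow["protocol"]):
        return False
    if rule["dst_port"] not in ("*", flow["dst_port"]):
        return False
    if rule["src"] != "*" and ipaddress.ip_address(flow["src"]) not in ipaddress.ip_network(rule["src"]):
        return False
    return True

def decide(flow):
    """Apply the first matching rule in priority order; deny if nothing matches."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule_matches(rule, flow):
            return rule["action"]
    return "Deny"

flow = {"direction": "Inbound", "protocol": "TCP", "src": "10.0.0.5", "dst_port": 443}
print(decide(flow))  # Allow
```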
An Azure network security group is nothing more than a set of access control rules that may
be used to secure a subnet or a virtual network; these rules examine incoming and outgoing
traffic to determine whether to accept or reject a packet.
The VM-level Network security group and the subnet-level Network security group are the two
levels that make up Azure network security.
 Microsoft’s completely managed solution, Azure Network Security Groups, helps to
filter traffic to and from an Azure VNet.
 Any number of security rules that make up the Azure NSG can be enabled or disabled
by users.
 A five-tuple hash is used to evaluate these rules.
 The 5-tuple hash uses the source IP address and port, the destination IP address and
port, and the protocol.
 You can quickly link Network Security Groups with a VNet or VM network interface
thanks to its OSI layer 3 and layer 4 functionality.
In the following examples, VM1 and VM2 are in Subnet1, which has NSG1 associated to it; VM3 is
in Subnet2 and VM4 is in Subnet3, neither of which has a subnet-level NSG; and NSG2 is associated
to the network interfaces of VM1 and VM3. With this topology in mind, the following text explains
how Azure processes inbound and outbound rules for network security groups:
Inbound traffic
For inbound traffic, Azure processes the rules in a network security group associated to a subnet
first, if there’s one, and then the rules in a network security group associated to the network
interface, if there’s one. This includes intra-subnet traffic as well.
 VM1: The security rules in NSG1 are processed, since it’s associated
to Subnet1 and VM1 is in Subnet1. Unless you’ve created a rule that allows port 80
inbound, the traffic is denied by the DenyAllInbound default security rule, and never
evaluated by NSG2, since NSG2 is associated to the network interface. If NSG1 has a
security rule that allows port 80, the traffic is then processed by NSG2. To allow port 80 to
the virtual machine, both NSG1 and NSG2 must have a rule that allows port 80 from the
internet.
 VM2: The rules in NSG1 are processed because VM2 is also in Subnet1.
Since VM2 doesn’t have a network security group associated to its network interface, it
receives all traffic allowed through NSG1 or is denied all traffic denied by NSG1. Traffic
is either allowed or denied to all resources in the same subnet when a network security
group is associated to a subnet.
 VM3: Since there’s no network security group associated to Subnet2, traffic is allowed into
the subnet and processed by NSG2, because NSG2 is associated to the network interface
attached to VM3.
 VM4: Traffic is allowed to VM4, because a network security group isn’t associated
to Subnet3, or the network interface in the virtual machine. All network traffic is allowed
through a subnet and network interface if they don’t have a network security group
associated to them.
Outbound traffic
For outbound traffic, Azure processes the rules in a network security group associated to a
network interface first, if there’s one, and then the rules in a network security group associated
to the subnet, if there’s one. This includes intra-subnet traffic as well.
 VM1: The security rules in NSG2 are processed. Unless you create a security rule that
denies port 80 outbound to the internet, the traffic is allowed by
the AllowInternetOutbound default security rule in both NSG1 and NSG2. If NSG2 has a
security rule that denies port 80, the traffic is denied, and never evaluated by NSG1. To
deny port 80 from the virtual machine, either or both of the network security groups must
have a rule that denies port 80 to the internet.
 VM2: All traffic is sent through the network interface to the subnet, since the network
interface attached to VM2 doesn’t have a network security group associated to it. The rules
in NSG1 are processed.
 VM3: If NSG2 has a security rule that denies port 80, the traffic is denied. If NSG2 has a
security rule that allows port 80, then port 80 is allowed outbound to the internet, since a
network security group isn’t associated to Subnet2.
 VM4: All network traffic is allowed from VM4, because a network security group isn’t
associated to the network interface attached to the virtual machine, or to Subnet3.
Azure Network Security Group-How it works?
 A great choice for safeguarding virtual networks is Microsoft’s Azure Network Security
Group (NSG).
 Using this application, network administrators may quickly organize, filter, direct, and
regulate different network traffic flows.
 When building Azure NSG, you can configure various incoming and outgoing rules to
permit or disallow particular types of traffic.
 If you want to use Azure Network Security Groups, you should build and configure
individual rules.
 Multiple Azure services’ resources may be included in an Azure virtual network.
 The full list is available under Services that may be put into a virtual network.
 There can be zero or one network security group configured to each virtual network
subnet and network interface in a virtual machine.
 As many subnets and network interfaces as you like can be connected to the same
network security group.
Depending on the circumstances, you can establish whatever rules you like, such as whether
the traffic traversing the network is secure and ought to be permitted.
Azure Network Security Group Rules
 Allow Vnet InBound – This rule allows all hosts within the virtual network (including
subnets) to communicate without being blocked.
 Allow Azure LoadBalancer InBound – This rule permits an Azure load balancer to
communicate with your virtual machine and send heartbeats.
 Deny All InBound – This is the deny-all rule, which by default blocks all inbound
traffic to the VM and protects it from malicious access outside the Azure Vnet.
Application Security Group
Normally when you deploy a network security group (NSG) it is either assigned to a NIC or a
subnet (preferred). If you deploy that NSG to a subnet then the rules apply to all of the NICs, or
virtual machines, in that subnet. This is OK when you’re deploying a new system where you
can easily place virtual machines into subnets, and treat each subnet as its own security zone.
But in the real world, things aren’t always that clean, and you might need something that allows
a more dynamic or flexible means of assigning rules to some machines in a subnet.
ASGs are used within an NSG to apply a network security rule to a specific workload or group
of VMs. The ASG acts as the “network object”: the network interfaces (and hence the IP
addresses) of the VMs are added to this object rather than being listed explicitly in the rule. This
provides the capability to group VMs into associated groups or workloads, simplifying the NSG
rule definition process. Another great use of this is scalability: creating a new virtual machine and
assigning it to its ASG immediately gives it all the NSG rules in place for that specific ASG, with
zero disruption to your service!
ASG Key Points
 Azure Security Groups allow us to define fine-grained network security policies based on
workloads, centralized on applications, instead of explicit IP addresses.
 ASGs provide the capability of grouping the VMs with monikers and secure our
applications by filtering traffic.
 By implementing granular security traffic controls, we can improve isolation of workloads
and can protect them individually.
 If a breach occurs, this method limits the potential impact of lateral exploration of our
networks from hackers.
 The security definition is simplified when using the ASGs.
 We can define application groups by providing a moniker descriptive name that fits our
architecture.
 We can use it the way we want i.e. for applications, systems, environments, workload types,
tiers or even any kind of roles.
 We can define a single collection of rules using ASGs and NSGs. We just have to apply a
single NSG to our entire virtual network on all subnets.
 This way, defining a single NSG gives us full visibility of all traffic policies and a
single place for management, which reduces the management burden.
Benefits of using ASGs:
 We can scale at our own pace. While deploying the VMs, we can make them members of
the appropriate ASGs.
 If the VM is running more than one workload, we can simply assign multiple ASGs.
 The access is always granted based on workloads.
 We don’t have to worry about security definition ever again.
 The most important point to be noted is that we can implement a zero-trust model. Meaning,
we can limit access to the application flows that are explicitly permitted.
 ASGs introduce the ability to deploy multiple applications within the same subnet and also
isolate traffic based on ASGs.
 With the use of Azure Security Groups, you can reduce the number of Network Security
Groups in your subscription.
 In some cases, it gets so helpful that you can use a single NSG for multiple subnets of your
virtual network.
Associate Virtual Machines
An application security group is a logical collection of virtual machines (NICs). You join virtual
machines to the application security group, and then use the application security group as a
source or destination in NSG rules.
The Networking blade of virtual machine properties has a new button called Configure The
Application Security Groups for each NIC in the virtual machine. If you click this button, a pop-
up blade will appear and you can select which (none, one, many) application security groups
that this NIC should join, and then click Save to commit the change.
A Virtual Machine can be attached to more than one Application Security Group. This helps in
cases of multi-application servers.
The following requirements apply to the creation and use of ASGs:
 All network interfaces used in an ASG must be within the same VNet
 If ASGs are used in the source and destination, they must be within the same VNet
Creating NSG Rules
You now can open an NSG and create inbound or outbound rules that use the application
security group as a source or destination, and thus uses the associated virtual machine NICs as
sources and destinations. Source and Destination in the new rule blade allow you to select any
application security group in the same region.
As virtual machines are added, removed or updated the management overhead that is required
to maintain the NSG may become quite considerable. This is where ASGs come in to play to
simplify the NSG rule creation, and continued maintenance of the rule. Instead of defining IP
prefixes, you create an ASG and use it within the NSG rule. The Azure platform takes care
of the rest by determining the IPs that are covered within the ASG.
As network interfaces of VMs are added to the ASG, the effective network security rules are
applied without the need to update the NSG rule itself.
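A conceptual Python sketch of why ASGs reduce this maintenance, assuming hypothetical group names and NIC IP addresses: the rule references the group, and changing the group's membership automatically changes which addresses the rule covers. This illustrates the idea only and is not the Azure implementation.

```python
# Membership of each application security group (hypothetical NIC IPs).
asg_members = {
    "asg-web": {"10.0.1.4", "10.0.1.5"},
    "asg-db":  {"10.0.2.4"},
}

# One NSG rule written against groups rather than IP prefixes.
rule = {"source": "asg-web", "destination": "asg-db", "dst_port": 1433, "action": "Allow"}

def is_allowed(src_ip, dst_ip, dst_port):
    """Expand the ASG references into member addresses at evaluation time."""
    return (src_ip in asg_members[rule["source"]]
            and dst_ip in asg_members[rule["destination"]]
            and dst_port == rule["dst_port"]
            and rule["action"] == "Allow")

print(is_allowed("10.0.1.4", "10.0.2.4", 1433))  # True

# Scaling out: add a new web VM's NIC to the group; the rule itself needs no change.
asg_members["asg-web"].add("10.0.1.6")
print(is_allowed("10.0.1.6", "10.0.2.4", 1433))  # True
```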
UNIT -V.docx

More Related Content

PDF
Module 5-cloud computing-SECURITY IN THE CLOUD
DOC
Security Issues in Cloud Computing by rahul abhishek
PDF
Security Issues in Cloud Computing by rahul abhishek
DOCX
Fog computing document
PDF
Cloud Security Challenges, Types, and Best Practises.pdf
DOCX
Cloud computing seminar report
DOCX
Fog doc
DOCX
fog computing provide security to the data in cloud
Module 5-cloud computing-SECURITY IN THE CLOUD
Security Issues in Cloud Computing by rahul abhishek
Security Issues in Cloud Computing by rahul abhishek
Fog computing document
Cloud Security Challenges, Types, and Best Practises.pdf
Cloud computing seminar report
Fog doc
fog computing provide security to the data in cloud

Similar to UNIT -V.docx (20)

PDF
Cloud Security Network – Definition and Best Practices.pdf
PDF
Encryption Technique for a Trusted Cloud Computing Environment
PDF
H017155360
PDF
Encryption Technique for a Trusted Cloud Computing Environment
PDF
Encryption Technique for a Trusted Cloud Computing Environment
PPT
Cloud Computing Security Challenges
PDF
Literature Review: Security on cloud computing
PPTX
Cloud security
PDF
Building a Resilient Cloud Security Architecture: Types, Challenges, and Best...
PDF
1784 1788
PDF
1784 1788
PDF
the_role_of_resilience_data_in_ensuring_cloud_security.pdf
PDF
Cloud Computing Using Encryption and Intrusion Detection
PDF
Cloud Security - Types, Common Threats & Tips To Mitigate.pdf
PDF
CLOUD COMPUTING.pdf
PDF
CLOUD COMPUTING.pdf
PDF
Cloud Security
PPTX
the_role_of_resilience_data_in_ensuring_cloud_security.pptx
PDF
SECURING THE CLOUD DATA LAKES
PDF
cloud1_aggy.pdf
Cloud Security Network – Definition and Best Practices.pdf
Encryption Technique for a Trusted Cloud Computing Environment
H017155360
Encryption Technique for a Trusted Cloud Computing Environment
Encryption Technique for a Trusted Cloud Computing Environment
Cloud Computing Security Challenges
Literature Review: Security on cloud computing
Cloud security
Building a Resilient Cloud Security Architecture: Types, Challenges, and Best...
1784 1788
1784 1788
the_role_of_resilience_data_in_ensuring_cloud_security.pdf
Cloud Computing Using Encryption and Intrusion Detection
Cloud Security - Types, Common Threats & Tips To Mitigate.pdf
CLOUD COMPUTING.pdf
CLOUD COMPUTING.pdf
Cloud Security
the_role_of_resilience_data_in_ensuring_cloud_security.pptx
SECURING THE CLOUD DATA LAKES
cloud1_aggy.pdf
Ad

More from Revathiparamanathan (20)

DOCX
UNIT 1 NOTES.docx
DOCX
Unit 3,4.docx
DOCX
UNIT II.docx
DOCX
UNIT V.docx
DOCX
COMPILER DESIGN.docx
DOCX
UNIT -III.docx
DOCX
UNIT -IV.docx
DOCX
UNIT - II.docx
DOCX
UNIT - I.docx
PPTX
CC -Unit3.pptx
PPTX
CC -Unit4.pptx
PPTX
PDF
Unit 4 notes.pdf
PDF
Unit 3 notes.pdf
PDF
Unit 1 notes.pdf
PDF
Unit 2 notes.pdf
PDF
Unit 5 notes.pdf
PPTX
PPTX
Unit-4 Day1.pptx
PPTX
Scala Introduction.pptx
UNIT 1 NOTES.docx
Unit 3,4.docx
UNIT II.docx
UNIT V.docx
COMPILER DESIGN.docx
UNIT -III.docx
UNIT -IV.docx
UNIT - II.docx
UNIT - I.docx
CC -Unit3.pptx
CC -Unit4.pptx
Unit 4 notes.pdf
Unit 3 notes.pdf
Unit 1 notes.pdf
Unit 2 notes.pdf
Unit 5 notes.pdf
Unit-4 Day1.pptx
Scala Introduction.pptx
Ad

Recently uploaded (20)

PPT
Introduction, IoT Design Methodology, Case Study on IoT System for Weather Mo...
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
PPTX
6ME3A-Unit-II-Sensors and Actuators_Handouts.pptx
PPT
Mechanical Engineering MATERIALS Selection
PDF
737-MAX_SRG.pdf student reference guides
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PPTX
Safety Seminar civil to be ensured for safe working.
PPTX
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
DOCX
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PDF
Unit I ESSENTIAL OF DIGITAL MARKETING.pdf
PDF
A SYSTEMATIC REVIEW OF APPLICATIONS IN FRAUD DETECTION
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
PPTX
Artificial Intelligence
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PDF
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
PDF
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
Introduction, IoT Design Methodology, Case Study on IoT System for Weather Mo...
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
UNIT-1 - COAL BASED THERMAL POWER PLANTS
6ME3A-Unit-II-Sensors and Actuators_Handouts.pptx
Mechanical Engineering MATERIALS Selection
737-MAX_SRG.pdf student reference guides
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
Safety Seminar civil to be ensured for safe working.
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
Embodied AI: Ushering in the Next Era of Intelligent Systems
Unit I ESSENTIAL OF DIGITAL MARKETING.pdf
A SYSTEMATIC REVIEW OF APPLICATIONS IN FRAUD DETECTION
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
CYBER-CRIMES AND SECURITY A guide to understanding
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
Artificial Intelligence
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF

UNIT -V.docx

  • 1. CLOUD SECURITY Security Overview – Cloud Security Challenges – Data Security –Application Security – Virtual Machine Security - Cloud Infrastructure security: network, host and application level – Azure Firewall-Load Balancer- Traffic Manager- Network Security Groups and Application Security Groups Security Overview Security in cloud computing is a major concern. Data in cloud should be stored in encrypted form. To restrict client from accessing the shared data directly, proxy and brokerage services should be employed. Security Planning Before deploying a particular resource to cloud, one should need to analyse several aspects of the resource such as:  Select resource that needs to move to the cloud and analyse its sensitivity to risk.  Consider cloud service models such as IaaS, PaaS, and SaaS. These models require customer to be responsible for security at different levels of service.  Consider the cloud type to be used such as public, private, community or hybrid.  Understand the cloud service provider's system about data storage and its transfer into and out of the cloud. The risk in cloud deployment mainly depends upon the service models and cloud types. Understanding Security of Cloud Security Boundaries A particular service model defines the boundary between the responsibilities of service provider and customer. Cloud Security Alliance (CSA) stack model defines the boundaries between each service model and shows how different functional units relate to each other. The following diagram shows the CSA stack model:
  • 2. Key Points to CSA Model  IaaS is the most basic level of service with PaaS and SaaS next two above levels of services.  Moving upwards, each of the service inherits capabilities and security concerns of the model beneath.  IaaS provides the infrastructure, PaaS provides platform development environment, and SaaS provides operating environment.  IaaS has the least level of integrated functionalities and integrated security while SaaS has the most.  This model describes the security boundaries at which cloud service provider's responsibilities end and the customer's responsibilities begin.  Any security mechanism below the security boundary must be built into the system and should be maintained by the customer. Although each service model has security mechanism, the security needs also depend upon where these services are located, in private, public, hybrid or community cloud. Understanding Data Security Since all the data is transferred using Internet, data security is of major concern in the cloud. Here are key mechanisms for protecting data.  Access Control  Auditing  Authentication
  • 3.  Authorization All of the service models should incorporate security mechanism operating in all above- mentioned areas. Isolated Access to Data Since data stored in cloud can be accessed from anywhere, we must have a mechanism to isolate data and protect it from client’s direct access. Brokered Cloud Storage Access is an approach for isolating storage in the cloud. In this approach, two services are created:  A broker with full access to storage but no access to client.  A proxy with no access to storage but access to both client and broker. Working Of Brokered Cloud Storage Access System When the client issues request to access data:  The client data request goes to the external service interface of proxy.  The proxy forwards the request to the broker.  The broker requests the data from cloud storage system.  The cloud storage system returns the data to the broker.  The broker returns the data to proxy.  Finally the proxy sends the data to the client. All of the above steps are shown in the following diagram:
  • 4. Encryption Encryption helps to protect data from being compromised. It protects data that is being transferred as well as data stored in the cloud. Although encryption helps to protect data from any unauthorized access, it does not prevent data loss. Cloud Security Challenges Cloud Computing is a type of technology that provides remote services on the internet to manage, access, and store data rather than storing it on Servers or local drives. This technology is also known as Server less technology. Here the data can be anything like Image, Audio, video, documents, files, etc. Need of Cloud Computing : Before using Cloud Computing, most of the large as well as small IT companies use traditional methods i.e. they store data in Server, and they need a separate Server room for that. In that Server Room, there should be a database server, mail server, firewalls, routers, modems, high net speed devices, etc. For that IT companies have to spend lots of money. In order to reduce all the problems with cost Cloud computing come into existence and most companies shift to this technology. Cloud Security: It is a set of control-based technologies & policies adapted to stick to regulatory compliances, rules & protect data application and cloud technology infrastructure. Because of cloud's nature of sharing resources, cloud security gives particular concern to identity management, privacy & access control. So the data in the cloud should have to be stored in an encrypted form. With the increase in the number of organizations using cloud technology for a data operation, proper security and other potentially vulnerable areas became a priority for organizations contracting with cloud providers. Cloud computing security processes the security control in cloud & provides customer data security, privacy & compliance with necessary regulations.
  • 5. Security Planning for Cloud Before using cloud technology, users should need to analyze several aspects. These are:  Analyze the sensitivity to risks of user's resources.  The cloud service models require the customer to be responsible for security at various levels of service.  Understand the data storage and transfer mechanism provided by the cloud service provider.  Consider proper cloud type to be used. Cloud Security Controls Cloud security becomes effective only if the defensive implementation remains strong. There are many types of control for cloud security architecture; the categories are listed below: 1. Detective Control: are meant to detect and react instantly & appropriately to any incident. 2. Preventive Control: strengthen the system against any incident or attack by actually eliminating the vulnerabilities. 3. Deterrent Control is meant to reduce attack on cloud system; it reduces the threat level by giving a warning sign. 4. Corrective Control reduces the consequences of an incident by controlling/limiting the damage. Restoring system backup is an example of such type. Security Issues in Cloud Computing : There is no doubt that Cloud Computing provides various Advantages but there are also some security issues in cloud computing. Below are some following Security Issues in Cloud Computing as follows. 1. Data Loss – Data Loss is one of the issues faced in Cloud Computing. This is also known as Data Leakage. As we know that our sensitive data is in the hands of Somebody else, and we don’t have full control over our database. So if the security of cloud service is to break by hackers then it may be possible that hackers will get access to our sensitive data or personal files. 2. Interference of Hackers and Insecure API’s – As we know if we are talking about the cloud and its services it means we are talking about the Internet. Also, we know that the easiest way to communicate with Cloud is using API. So it is important to protect the Interface’s and API’s which are used by an external user. But also in cloud computing, few services are available in the public domain. An is the vulnerable part of Cloud Computing because it may be possible that these services are accessed by some third parties. So it may be possible that with the help of these services hackers can easily hack or harm our data.
  • 6. 3. User Account Hijacking – Account Hijacking is the most serious security issue in Cloud Computing. If somehow the Account of User or an Organization is hijacked by Hacker. Then the hacker has full authority to perform Unauthorized Activities. 4. Changing Service Provider – Vendor lock In is also an important Security issue in Cloud Computing. Many organizations will face different problems while shifting from one vendor to another. For example, An Organization wants to shift from AWS Cloud to Google Cloud Services then they ace various problem’s like shifting of all data, also both cloud services have different techniques and functions, so they also face problems regarding that. Also, it may be possible that the charges of AWS are different from Google Cloud, etc. 5. Lack of Skill – While working, shifting to another service provider, need an extra feature, how to use a feature, etc. are the main problems caused in IT Company who doesn’t have skilled Employee. So it requires a skilled person to work with cloud Computing. 6. Denial of Service (DoS) attack – This type of attack occurs when the system receives too much traffic. Mostly DoS attacks occur in large organizations such as the banking sector, government sector, etc. When a DoS attack occurs data is lost. So in order to recover data, it requires a great amount of money as well as time to handle it. Data Security Cloud data security refers to the technologies, policies, services and security controls that protect any type of data in the cloud from loss, leakage or misuse through breaches, exfiltration and unauthorized access. A robust cloud data security strategy should include:  Ensuring the security and privacy of data across networks as well as within applications, containers, workloads and other cloud environments  Controlling data access for all users, devices and software  Providing complete visibility into all data on the network The cloud data protection and security strategy must also protect data of all types. This includes:  Data in use: Securing data being used by an application or endpoint through user authentication and access control  Data in motion: Ensuring the safe transmission of sensitive, confidential or proprietary data while it moves across the network through encryption and/or other email and messaging security measures  Data at rest: Protecting data that is being stored on any network location, including the cloud, through access restrictions and user authentication
  • 7. Cloud computing threats to data security While cybersecurity threats that apply to on-premises infrastructure also extend to cloud computing, the cloud brings additional data security threats. Here are some of the common ones: Unsecure application programming interfaces (APIs)—many cloud services and applications rely on APIs for functionalities such as authentication and access, but these interfaces often have security weaknesses such as misconfigurations, opening the door to compromises. Account hijacking or takeover—many people use weak passwords or reuse compromised passwords, which gives cyberattackers easy access to cloud accounts. Insider threats—while these are not unique to the cloud, the lack of visibility into the cloud ecosystem increases the risk of insider threats, whether the insiders are gaining unauthorized access to data with malicious intent or are inadvertently sharing or storing sensitive data via the cloud. Types of data security Encryption Using an algorithm to transform normal text characters into an unreadable format, encryption keys scramble data so that only authorized users can read it. File and database encryption solutions serve as a final line of defense for sensitive volumes by obscuring their contents through encryption or tokenization. Most solutions also include security key management capabilities. Data Erasure More secure than standard data wiping, data erasure uses software to completely overwrite data on any storage device. It verifies that the data is unrecoverable. Data Masking By masking data, organizations can allow teams to develop applications or train people using real data. It masks personally identifiable information (PII) where necessary so that development can occur in environments that are compliant. Data Resiliency Resiliency is determined by how well an organization endures or recovers from any type of failure – from hardware problems to power shortages and other events that affect data availability (PDF, 256 KB). Speed of recovery is critical to minimize impact. Safeguards for data security in cloud computing Data security in the cloud starts with identity governance. You need a comprehensive, consolidated view of data access across your on-premises and cloud platforms and workloads. Identity governance provides:
  • 8. Visibility—the lack of visibility results in ineffective access control, increasing both your risks and costs. Federated access—this eliminates manual maintenance of separate identities by leveraging your Active Directory or other system of record. Monitoring—you need a way to determine if the access to cloud data is authorized and appropriate. Governance best practices include automating processes to reduce the burden on your IT team, as well as auditing your security tools routinely to ensure continuous risk mitigation as your environment evolves. In addition to governance, here are some other recommended data security safeguards for cloud computing: Deploy encryption. Ensure that sensitive and critical data, such as PII and intellectual property, is encrypted both in transit and at rest. Not all vendors offer encryption, and you should consider implementing a third-party encryption solution for added protection. Back up the data. While vendors have their own backup procedures, it’s essential to back up your cloud data locally as well. Use the 3-2-1 rule for data backup: Keep at least three copies, store them on at least two different media, and keep at least one backup offsite (in the case of the cloud, the offsite backup could be the one executed by the vendor). Implement identity and access management (IAM). Your IAM technology and policies ensure that the right people have appropriate access to data, and this framework needs to encompass your cloud environment. Besides identity governance, IAM components include access management (such as single sign-on, or SSO) and privileged access management. Manage your password policies. Poor password hygiene is frequently the cause of data breaches and other security incidents. Use password management solutions to make it simple for your employees and other end users to maintain secure password practices. Adopt multi-factor authentication (MFA). In addition to using secure password practices, MFA is a good way to mitigate the risk of compromised credentials. It creates an extra hurdle that threat actors must overcome as they try to gain entry to your cloud accounts. Business Risks to Storing Data in the Cloud Though storing data within the cloud offers organizations many important benefits, this environment is not without challenges. Here are some risks businesses may face of storing data in the cloud without the proper security measures in place: 1. Data breaches Data breaches occur differently in the cloud than in on-premises attacks. Malware is less relevant. Instead, attackers exploit misconfigurations, inadequate access, stolen credentials and other vulnerabilities.
• 9. 2. Misconfigurations
Misconfigurations are the No. 1 vulnerability in a cloud environment and can lead to overly permissive privileges on accounts, insufficient logging and other security gaps that expose organizations to cloud breaches, insider threats and adversaries who leverage vulnerabilities to gain access to data.
3. Unsecured APIs
Businesses often use APIs to connect services and transfer data, either internally or to partners, suppliers, customers and others. Because APIs turn certain types of data into endpoints, changes to data policies or privilege levels can increase the risk of unauthorized access to more data than the host intended.
4. Access control/unauthorized access
Organizations tend to rely on the default access controls of their cloud providers, which becomes an issue particularly in multi-cloud or hybrid cloud environments. Insider threats can do a great deal of damage with their privileged access, knowledge of where to strike, and ability to hide their tracks.
Cloud Data Security Best Practices
To ensure the security of their data, organizations must adopt a comprehensive cybersecurity strategy that addresses data vulnerabilities specific to the cloud. Key elements of a robust cloud data security strategy include:
1. Leverage advanced encryption capabilities
One effective way to protect data is to encrypt it. Cloud encryption transforms data from plain text into an unreadable format before it enters the cloud. Data should be encrypted both in transit and at rest. Cloud service providers offer different out-of-the-box encryption capabilities for data stored in block and object storage services. To protect data in transit, connections to cloud storage services should be made over encrypted HTTPS/TLS connections. Data encryption is enabled by default in cloud platforms using platform-managed encryption keys; however, customers can gain additional control by bringing their own keys and managing them centrally via encryption key management services in the cloud. Organizations with stricter security standards and compliance requirements can implement native hardware security module (HSM)-enabled key management services or even third-party services for protecting data encryption keys. (A minimal sketch of writing an object encrypted with a customer-managed key appears after this list.)
  • 10. 2. Implement a data loss prevention (DLP) tool. Data loss prevention (DLP) is part of a company’s overall security strategy that focuses on detecting and preventing the loss, leakage or misuse of data through breaches, exfiltration and unauthorized access. A cloud DLP is specifically designed to protect those organizations that leverage cloud repositories for data storage. 3. Enable unified visibility across private, hybrid and multi-cloud environments. Unified discovery and visibility of multi-cloud environments, along with continuous intelligent monitoring of all cloud resources are essential in a cloud security solution. That unified visibility must be able to detect misconfigurations, vulnerabilities and data security threats, while providing actionable insights and guided remediation. 4. Ensure security posture and governance. Another key element of data security is having the proper security policy and governance in place that enforces golden cloud security standards, while meeting industry and government regulations across the entire infrastructure. A cloud security posture management (CSPM) solution that detects and prevents misconfigurations and control plane threats is essential for eliminating blind spots and ensuring compliance across clouds, applications and workloads. 5. Strengthen identity and access management (IAM). Identity and access management (IAM) helps organizations streamline and automate identity and access management tasks and enable more granular access controls and privileges. With an IAM solution, IT teams no longer need to manually assign access controls, monitor and update privileges, or deprovision accounts. Organizations can also enable a single sign-on (SSO) to authenticate the user’s identity and allow access to multiple applications and websites with just one set of credentials. When it comes to IAM controls, the rule of thumb is to follow the principle of least privilege, which means allowing required users to access only the data and cloud resources they need to perform their work. 6. Enable cloud workload protection. Cloud workloads increase the attack surface exponentially. Protecting workloads requires visibility and discovery of each workload and container events, while securing the entire cloud-native stack, on any cloud, across all workloads, containers, Kubernetes and serverless applications. Cloud workload protection (CWP) includes vulnerability scanning and management, and breach protection for workloads, including containers, Kubernetes and serverless functions, while enabling organizations to build, run and secure cloud applications from development to production.
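As referenced in best practice 1 above, here is a minimal, illustrative sketch of server-side encryption with a customer-managed key when writing to object storage, using the AWS SDK for Python (boto3). The bucket name, object key and KMS key alias are placeholder assumptions, not values from this document.

```python
# Minimal sketch: upload an object encrypted at rest with a customer-managed KMS key.
# Assumes boto3 is installed and AWS credentials are configured; names/IDs are placeholders.
import boto3

s3 = boto3.client("s3")  # boto3 talks to the service over HTTPS/TLS, covering data in transit

s3.put_object(
    Bucket="example-sensitive-data-bucket",             # hypothetical bucket
    Key="reports/customer-pii.csv",                      # hypothetical object key
    Body=open("customer-pii.csv", "rb"),
    ServerSideEncryption="aws:kms",                      # encrypt at rest using AWS KMS
    SSEKMSKeyId="alias/example-customer-managed-key",    # customer-managed key alias (placeholder)
)
```

With a setup along these lines, the provider encrypts the object before storing it, while the customer keeps control of the key material and its rotation through the key management service.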
  • 11. Application Security Cloud application security is the process of securing cloud-based software applications throughout the development lifecycle. It includes application-level policies, tools, technologies and rules to maintain visibility into all cloud-based assets, protect cloud- based applications from cyberattacks and limit access only to authorized users. Cloud application security is crucially important for organizations that are operating in a multi-cloud environment hosted by a third-party cloud provider such as Amazon, Microsoft or Google, as well as those that use collaborative web applications such as Slack, Microsoft Teams or Box. These services or applications, while transformational in nature to the business and its workforce, dramatically increase the attack surface, providing many new points of access for adversaries to enter the network and unleash attacks. The Need For Cloud Application Security Modern enterprise workloads are spread across a wide variety of cloud platforms ranging from suites of SaaS products like Google Workspaces and Microsoft 365 to custom cloud-native applications running across multiple hyper-scale cloud service providers. As a result, network perimeters are more dynamic than ever and critical data and workloads face threats that simply didn’t exist a decade ago. Enterprises must be able to ensure workloads are protected wherever they run. Additionally, cloud computing adds a new wrinkle to data sovereignty and data governance that can complicate compliance. Individual cloud service providers often offer security solutions for their platforms, but in a world where multi-cloud is the norm — a Gartner survey indicated over 80% of public cloud users use multiple providers — solutions that can protect an enterprise end-to-end across all platforms are needed. Cloud Application Security Threats  Account hijacking: Weak passwords and data breaches often lead to legitimate accounts being compromised. If an attacker compromises an account, they can gain access to sensitive data and completely control cloud assets.  Credential exposure: A corollary to account hijacking is credential exposure. As the SolarWinds security breach demonstrated, exposing credentials in the cloud (GitHub in this case) can lead to account hijacking and a wide range of sophisticated long-term attacks.
• 12.  Bots and automated attacks: Bots and malicious scanners are an unfortunate reality of exposing any service to the Internet. As a result, any cloud service or web-facing application must account for the threats posed by automated attacks.
 Insecure APIs: APIs are one of the most common mechanisms for sharing data — both internally and externally — in modern cloud environments. However, because APIs are often both feature-rich and data-rich, they are a popular attack surface for hackers.
 Oversharing of data: Cloud data storage makes it trivial to share data using URLs. This greatly streamlines enterprise collaboration. However, it also increases the likelihood of assets being accessed by unauthorized or malicious users.
 DoS attacks: Denial of Service (DoS) attacks against large enterprises have been a cybersecurity threat for a long time. With so many modern organizations dependent on public cloud services, attacks against cloud service providers can now have an outsized impact.
 Misconfiguration: One of the most common causes of data breaches is misconfiguration. The frequency of misconfiguration in the cloud is due in large part to the complexity involved in configuration management (which leads to disjointed manual processes) and access control across cloud providers.
 Phishing and social engineering: Phishing and social engineering attacks that exploit the human side of enterprise security are among the most frequently exploited attack vectors.
 Complexity and lack of visibility: Because many enterprise environments are multi-cloud, the complexity of configuration management, granular monitoring across platforms, and access control often leads to disjointed workflows that involve manual configuration and limit visibility, which further exacerbates cloud security challenges.
Types Of Cloud Application Security Solutions
There is no shortage of security solutions designed to help enterprises mitigate cloud application security threats. For example, cloud access security brokers (CASBs) act as gatekeepers to cloud services and enforce granular security policies. Similarly, web application firewalls (WAFs) and runtime application self-protection (RASP) are used to protect web apps, APIs, and individual applications. Additionally, many enterprises continue to leverage point appliances to implement firewalling, IPS/IDS, URL filtering, and threat detection. However, these solutions aren't ideal for the
  • 13. modern cloud-native infrastructure as they are inherently inflexible and tied to specific locations. Web Application & API Protection (WAAP) has emerged as a more holistic and cloud-native solution that combines — and enhances — the functionality of WAFs, RASP, and traditional point solutions in a holistic multi-cloud platform. With WAAP, enterprises can automate and scale modern application security in a way legacy tooling simply cannot. Cloud Application Security Best Practices Enterprises must take a holistic approach to improve their cloud security posture. There’s no one-size-fits-all approach that will work for every organization, but there are several cloud application security best practices that all enterprises can apply. Here are some of the most important cloud app security best practices enterprises should consider:  Leverage MFA: Multi Factor authentication (MFA) is one of the most effective mechanisms for limiting the risk of account compromise.  Account for the human aspect: User error is one of the most common causes of data breaches. Taking a two-pronged approach of user education and implementing security tooling such as URL filters, anti-malware, and intelligent firewalls can significantly reduce the risk of social engineering leading to a catastrophic security issue.  Automate everything: Enterprises should automate cloud application monitoring, incident response, and configuration as much as possible. Manual workflows are error-prone and a common cause for oversight or leaked data.  Enforce the principle of least privilege: User accounts and applications should be configured to only access the assets required for their business function. Security policies should enforce the principle of least privilege across all cloud platforms. Leveraging enterprise identity management solutions and SSO (single-sign-on) can help enterprises scale this cloud application security best practice.  Use holistic multi-cloud solutions: Modern enterprise infrastructure is complex and enterprises need complete visibility to ensure a strong security posture across all platforms. This means choosing visibility and security tooling that isn’t inherently tied to a given location (e.g. point appliances) or cloud vendor is essential.
• 14.  Don't depend on signature matching alone: Many threat detection engines and anti-malware solutions depend on signature matching and basic business logic to detect malicious behavior. While detecting known threats is useful, in practice depending only on basic signature matching for threat detection is a recipe for false positives that can lead to alert fatigue and unnecessarily slow down operations. Additionally, reliance on signature matching alone means enterprises have little to no protection against zero-day threats that don't already have a known signature. Security tooling that can analyze behavior in context, for example by using an AI engine, can both reduce false positives and decrease the odds of a zero-day threat being exploited.
Virtual Machine Security
Virtualized security, or security virtualization, refers to security solutions that are software-based and designed to work within a virtualized IT environment. This differs from traditional, hardware-based network security, which is static and runs on devices such as traditional firewalls, routers, and switches. In contrast to hardware-based security, virtualized security is flexible and dynamic. Instead of being tied to a device, it can be deployed anywhere in the network and is often cloud-based. This is key for virtualized networks, in which operators spin up workloads and applications dynamically; virtualized security allows security services and functions to move around with those dynamically created workloads. Cloud security considerations (such as isolating multitenant environments in public clouds) are also important to virtualized security. The flexibility of virtualized security is helpful for securing hybrid and multi-cloud environments, where data and workloads migrate around a complicated ecosystem involving multiple vendors.
Benefits of virtualized security
Virtualized security is now effectively necessary to keep up with the complex security demands of a virtualized network, and it is more flexible and efficient than traditional physical security. Here are some of its specific benefits:
 Cost-effectiveness: Virtualized security allows an enterprise to maintain a secure network without a large increase in spending on expensive proprietary hardware. Pricing for cloud-based virtualized security services is often determined by usage, which can mean additional savings for organizations that use resources efficiently.
 Flexibility: Virtualized security functions can follow workloads anywhere, which is crucial in a virtualized environment. It provides protection across multiple data centers and in multi-cloud and hybrid cloud environments, allowing an organization to take advantage of the full benefits of virtualization while also keeping data secure.
 Operational efficiency: Quicker and easier to deploy than hardware-based security, virtualized security doesn't require IT teams to set up and configure multiple hardware appliances. Instead, they can set up security systems through
  • 15. centralized software, enabling rapid scaling. Using software to run security technology also allows security tasks to be automated, freeing up additional time for IT teams.  Regulatory compliance:Traditional hardware-based security is static and unable to keep up with the demands of a virtualized network, making virtualized security a necessity for organizations that need to maintain regulatory compliance. How does virtualized security work? Virtualized security can take the functions of traditional security hardware appliances (such as firewalls and antivirus protection) and deploy them via software. In addition, virtualized security can also perform additional security functions. These functions are only possible due to the advantages of virtualization, and are designed to address the specific security needs of a virtualized environment. For example, an enterprise can insert security controls (such as encryption) between the application layer and the underlying infrastructure, or use strategies such as micro- segmentation to reduce the potential attack surface. Virtualized security can be implemented as an application directly on a bare metal hypervisor (a position it can leverage to provide effective application monitoring) or as a hosted service on a virtual machine. In either case, it can be quickly deployed where it is most effective, unlike physical security, which is tied to a specific device. What are the risks of virtualized security? The increased complexity of virtualized security can be a challenge for IT, which in turn leads to increased risk. It’s harder to keep track of workloads and applications in a virtualized environment as they migrate across servers, which makes it more difficult to monitor security policies and configurations. And the ease of spinning up virtual machines can also contribute to security holes. It’s important to note, however, that many of these risks are already present in a virtualized environment, whether security services are virtualized or not. Following enterprise security best practices (such as spinning down virtual machines when they are no longer needed and using automation to keep security policies up to date) can help mitigate such risks. How is physical security different from virtualized security? Traditional physical security is hardware-based, and as a result, it’s inflexible and static. The traditional approach depends on devices deployed at strategic points across a network and is often focused on protecting the network perimeter (as with a traditional firewall). However, the
  • 16. perimeter of a virtualized, cloud-based network is necessarily porous and workloads and applications are dynamically created, increasing the potential attack surface. Traditional security also relies heavily upon port and protocol filtering, an approach that’s ineffective in a virtualized environment where addresses and ports are assigned dynamically. In such an environment, traditional hardware-based security is not enough; a cloud-based network requires virtualized security that can move around the network along with workloads and applications. What are the different types of virtualized security? There are many features and types of virtualized security, encompassing network security, application security, and cloud security. Some virtualized security technologies are essentially updated, virtualized versions of traditional security technology (such as next- generation firewalls). Others are innovative new technologies that are built into the very fabric of the virtualized network. Some common types of virtualized security features include:  Segmentation, or making specific resources available only to specific applications and users. This typically takes the form of controlling traffic between different network segments or tiers.  Micro-segmentation, or applying specific security policies at the workload level to create granular secure zones and limit an attacker’s ability to move through the network. Micro-segmentation divides a data center into segments and allows IT teams to define security controls for each segment individually, bolstering the data center’s resistance to attack.  Isolation, or separating independent workloads and applications on the same network. This is particularly important in a multitenant public cloud environment, and can also be used to isolate virtual networks from the underlying physical infrastructure, protecting the infrastructure from attack. Cloud infrastructure security Cloud infrastructure security is the practice of securing resources deployed in a cloud environment and supporting systems. Public cloud infrastructure is, in many ways, more vulnerable than on-premises infrastructure because it can easily be exposed to public networks, and is not located behind a secure network perimeter. However, in a private or hybrid cloud, security is still a challenge, as there are multiple security concerns due to the highly automated nature of the environment, and numerous integration points with public cloud systems. Cloud infrastructure is made up of at least 7 basic components, including user accounts, servers, storage systems, and networks. Cloud environments are dynamic, with short-lived resources created and terminated many times per day. This means each of these building blocks must be
• 17. secured in an automated and systematic manner. Read on to learn best practices that can help you secure each of these components.
Securing Public, Private, and Hybrid Clouds
Cloud security has different implications in different cloud infrastructure models. Here are considerations for security in each of the three popular models—public cloud, private cloud, and hybrid cloud.
Public Cloud Security
In a public cloud, the cloud provider takes responsibility for securing the infrastructure, and provides tools that allow the organization to secure its workloads. Your organization is responsible for:
 Securing workloads and data, fully complying with relevant compliance standards, and ensuring all activity is logged to enable auditing.
 Ensuring cloud configurations remain secure, and any new resources on the cloud are similarly secured, using automated tools such as a Cloud Security Posture Management (CSPM) platform.
 Understanding which service level agreements (SLAs) supplied by your cloud provider cover the relevant services and monitoring.
 If you use services, machine images, container images, or other software from third-party providers, performing due diligence on their security measures and replacing providers if they are insufficient.
Private Cloud Security
The private cloud model gives you control over all layers of the stack. These resources are commonly not exposed to the public Internet. This means that you can achieve a certain level of security using traditional mechanisms that protect the corporate network perimeter. However, there are additional measures you should take to secure your private cloud:
 Use cloud native monitoring tools to gain visibility into any anomalous behavior in your running workloads.
 Monitor privileged accounts and resources for suspicious activity to detect insider threats. Malicious users or compromised accounts can have severe consequences in a private cloud, because of the ease with which resources can be automated.
 Ensure complete isolation between virtual machines, containers, and host operating systems, so that compromise of a VM or container does not allow compromise of the entire host.
 Virtual machines should have dedicated NICs or VLANs, and hosts should communicate over the network using a separate network interface.
 Plan ahead and prepare for hybrid cloud by putting security measures in place to ensure that you can securely integrate with public cloud services.
Hybrid Cloud Security
Hybrid clouds are a combination of on-premises data centers, public cloud, and private cloud. The following security considerations are important in a hybrid cloud environment:
  • 18.  Ensure public cloud systems are secured using all the best practices.  Private cloud systems should follow private cloud security best practices, as well as traditional network security measures for the local data center.  Avoid separate security strategies and tools in each environment—adopt a single security framework that can provide controls across the hybrid environment.  Identify all integration points between environments, treat them as high-risk components and ensure they are secured. Securing 7 Key Components of Your Cloud Infrastructure Here are key best practices to securing the key components of a typical cloud environment. Accounts Service accounts in the cloud are typically privileged accounts, which may have access to critical infrastructure. Once compromised, attackers have access to cloud networks and can access sensitive resources and data. Service accounts may be created automatically when you create new cloud resources, scale cloud resources, or stand up environments using infrastructure as code (IaC). The new accounts may have default settings, which in some cases means weak or no authentication. Use identity and access management (IAM) to set policies controlling access and authentication to service accounts. Use a cloud configuration monitoring tool to automatically detect and remediate non-secured accounts. Finally, monitor usage of sensitive accounts to detect suspicious activity and respond. Servers While a cloud environment is virtualized, behind the scenes it is made up of physical hardware deployed at multiple geographical locations. This includes physical servers, storage devices, load balancers, and network equipment like switches and routers. Here are a few ways to secure a cloud server, typically deployed using a compute service like Amazon EC2:  Control inbound and outbound communication—your server should only be allowed to connect to networks, and specific IP ranges needed for its operations. For example, a database server should not have access to the public internet, or any other IP, except those of the application instances it serves.  Encrypt communications—whether communications go over public networks or within a secure private network, they should be encrypted to avoid man in the middle (MiTM) attacks. Never use unsecured protocols like Telnet or FTP. Transmit all data over HTTPS, or other secure protocols like SCP (Secure Copy) or SFTP (Secure FTP).  Use SSH keys—avoid accessing cloud servers using passwords, because they are vulnerable to brute force attacks and can easily be compromised. Use SSH keys, which leverage public/private key cryptography for more secure access.  Minimize privileges—only users or service roles that absolutely need access to a server should be granted access. Carefully control the access level of each account to ensure it can only access the specific files and folders, and perform specific operations, needed for their role. Avoid using the root user—any operation should be performed using identified user accounts.
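To make the inbound/outbound and SSH guidance above concrete, here is a minimal sketch (not prescribed by this document) that uses the AWS SDK for Python (boto3) to allow SSH only from a specific administrative network; the security group ID and CIDR range are placeholder assumptions.

```python
# Minimal sketch: restrict inbound SSH on a cloud server to one admin network.
# Assumes boto3 is installed and credentials are configured; IDs and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",           # hypothetical security group attached to the server
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,                        # SSH only; key-based login should be enforced on the host
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "admin VPN range"}],
    }],
)
```

Outbound communication can be narrowed in the same spirit with authorize_security_group_egress (after revoking the default allow-all rule), so that, for example, a database server never reaches the public internet.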
  • 19. Hypervisors A hypervisor runs on physical hardware, and makes it possible to run several virtual machines (VMs), each with a separate operating system. All cloud systems are based on hypervisors. Therefore, hypervisors are a key security concern, because compromise of the hypervisor (an attack known as hyperjacking) gives the attacker access to all hosts and virtual machines running on it. In public cloud systems, hypervisor security is the responsibility of the cloud provider, so you don’t need to concern yourself with it. There is one exception—when running virtualized workloads on a public cloud, using systems like VMware Cloud, you are responsible for securing the hypervisor. In private cloud systems, the hypervisor is always under your responsibility. Here are a few ways to ensure your hypervisor is secure:  Ensure machines running hypervisors are hardened, patched, isolated from public networks, and physically secured in your data center  Assign least privileges to local user accounts, carefully controlling access to the hypervisor  Harden, secure, and closely monitor machines running the virtual machine monitor (VMM) and virtualization management software, such as VMware vSphere  Secure and monitor shared hardware caches and networks used by the hypervisor  Pay special attention to hypervisors in development and testing environments—ensure appropriate security measures are applied when a new hypervisor is deployed to production Storage In cloud systems, virtualization is used to abstract storage from hardware systems. Storage systems become elastic pools of storage, or virtualized resources that can be provisioned and scaled automatically. Here are a few ways to secure your cloud storage services:  Identify which devices or applications connect to cloud storage, which cloud storage services are used throughout the organization, and map data flows.  Block access to cloud storage for internal users who don’t need it, and eliminate shadow usage of cloud services by end users.  Classify data into sensitivity levels—a variety of automated tools are available. This can help you focus on data stored in cloud storage that has security or compliance implications.  Remove unused data—cloud storage can easily scale and it is common to retain unnecessary data, or entire data volumes or snapshots that are no longer used. Identify this unused data and eliminate it to reduce the attack surface and your compliance obligations.  Carefully control access to data using identity and access management (IAM) systems, and applying consistent security policies for cloud and on-premises systems.  Use cloud data loss prevention (DLP) tools to detect and block suspicious data transfers, data modification or deletion, or data access, whether malicious or accidental.
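To make the "block access to cloud storage where it is not needed" idea concrete, the sketch below turns on account-wide public access blocking for object storage using boto3. It is an illustrative assumption, not a configuration mandated by this document, and the account ID is a placeholder.

```python
# Minimal sketch: block all public access to S3 buckets in one account.
# Assumes boto3 and credentials; the account ID is a placeholder.
import boto3

s3control = boto3.client("s3control")

s3control.put_public_access_block(
    AccountId="111122223333",                    # hypothetical AWS account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,                 # reject new public ACLs
        "IgnorePublicAcls": True,                # ignore any existing public ACLs
        "BlockPublicPolicy": True,               # reject bucket policies that grant public access
        "RestrictPublicBuckets": True,           # limit access to buckets that already have public policies
    },
)
```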
• 20. Databases
Databases in the cloud can easily be exposed to public networks, and almost always contain sensitive data, making them a significant security risk. Because databases are closely integrated with the applications they serve and other cloud systems, those adjacent systems must also be secured to prevent compromise of the database. Here are a few ways to improve the security of databases in the cloud:
 Hardening configuration and instances—if you deploy a database yourself in a compute instance, it is your responsibility to harden the instance and securely configure the database. If you use a managed database service, these concerns are typically handled by the cloud provider.
 Database security policies—ensure database settings are in line with your organization's security and compliance policies. Map your security requirements and compliance obligations to specific settings on cloud database systems. Use automated tools like CSPM to ensure secure settings are applied to all database instances.
 Network access—as a general rule, databases should never be exposed to public networks and should be isolated from unrelated infrastructure. If possible, a database should only accept connections from the specific application instances it is intended to serve.
 Permissions—grant only the minimal level of permissions to users, applications and service roles. Avoid "super users" and administrative users with blanket permissions. Each administrator should have access only to the specific databases they work on.
 End user device security—security is not confined to the cloud environment. You should be aware of which endpoint devices administrators are using to connect to your database. Those devices should be secured, you should disallow connections from unknown or untrusted devices, and you should monitor sessions to detect suspicious activity.
Network
Cloud systems often connect to public networks, but also use virtual networks to enable communication between components inside a cloud. All public cloud providers let you set up a secure, virtual private network for your cloud resources (called a VPC in Amazon and a VNet in Azure). Here are a few ways you can secure cloud networks:
 Use security groups to define rules that control what traffic can flow between cloud resources. Keep in mind that security groups are tightly connected to compute instances, and compromise of an instance grants access to the security group configuration, so additional security layers are needed.
 Use Network Access Control Lists (ACLs) to control access to virtual private networks. ACLs provide both allow and deny rules, and provide stronger security controls than security groups.
 Use additional security solutions such as firewall as a service (FWaaS) and web application firewalls (WAF) to actively detect and block malicious traffic.
 Deploy Cloud Security Posture Management (CSPM) tools to automatically review cloud networks, detect non-secure or vulnerable configurations and remediate them.
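The list above notes that network ACLs, unlike security groups, support explicit deny rules. As a hedged illustration (the ACL ID and CIDR are placeholders, not values from this document), this boto3 sketch adds a deny rule that drops inbound traffic from a known-bad network at the subnet boundary.

```python
# Minimal sketch: add an explicit DENY entry to a VPC network ACL.
# Assumes boto3 and credentials; the ACL ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # hypothetical network ACL attached to the subnet
    RuleNumber=90,                          # evaluated before higher-numbered allow rules
    Protocol="-1",                          # all protocols
    RuleAction="deny",
    Egress=False,                           # inbound rule
    CidrBlock="198.51.100.0/24",            # example network to block
)
```

Because ACL rules are evaluated in numeric order, giving the deny entry a low rule number ensures it takes effect even if a broader allow rule exists further down.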
• 21. Kubernetes
When running Kubernetes on the cloud, it is almost impossible to separate the Kubernetes cluster from other cloud computing layers. These include the application or code itself, container images, compute instances, and network layers. Each layer is built on top of the previous layer, and all layers must be protected for defense in depth. The Kubernetes project recommends approaching security from four angles, known as the "4 Cs":
 Code—ensuring code in containers is not malicious and uses secure coding practices
 Containers—scanning container images for vulnerabilities, and protecting containers at runtime to ensure they are configured securely according to best practices
 Clusters—protecting Kubernetes master nodes and ensuring cluster configuration is in line with security best practices
 Cloud—using cloud provider tools to secure the underlying infrastructure, including compute instances and virtual private clouds (VPCs)
Compliance with security best practices, industry standards and benchmarks, and internal organizational strategies also faces challenges in a cloud-native environment. In addition to maintaining compliance, organizations must also provide evidence of compliance. You need to adjust your strategy so that your Kubernetes environment fits the controls originally created for your existing application architecture.
Aqua Cloud Security Posture Management (CSPM)
Aqua CSPM scans, monitors and remediates configuration issues in public cloud accounts according to best practices and compliance standards, across AWS, Azure, Google Cloud, and Oracle Cloud.
Eliminate misconfigurations in your public cloud accounts
Aqua CSPM provides automated, multi-cloud security posture management to scan, validate, monitor, and remediate configuration issues in your public cloud accounts. Aqua CSPM ensures the use of best practices and compliance standards across AWS, Azure, Google Cloud, and Oracle Cloud — including infrastructure-as-code templates. Protect against:
 Servers exposed publicly to the internet
 Unencrypted data storage
 Lack of least-privilege policies
 Poor password policies or missing MFA
 Misconfigured backup/restore settings
Multi-cloud visibility – Gain visibility across all your cloud accounts
  • 22. Aqua CSPM continuously audits your cloud accounts for security risks and misconfigurations to assess your infrastructure risk and compliance posture. It provides checks across hundreds of configuration settings and compliance best practices to ensure consistent, unified multi- cloud security. Rapid remediation – Find and fix misconfigurations before they’re exploited Aqua provides self-securing capabilities to ensure your cloud accounts don’t drift out of compliance. Get detailed, actionable advice and alerts, or choose automated remediation of misconfigured services with granular control over chosen fixes. Enterprise scale – Unify security across VMs, containers, and serverless Protect applications in runtime on any cloud, orchestrator, or operating system using a zero- trust model that provides granular control to accurately detect and stop attacks. Leverage micro-services concepts to enforce immutability and micro-segmentation. Infrastructure Security – The Network Level Network Infrastructure Security, typically applied to enterprise IT environments, is a process of protecting the underlying networking infrastructure by installing preventative measures to deny unauthorized access, modification, deletion, and theft of resources and data. These security measures can include access control, application security, firewalls, virtual private networks (VPN), behavioral analytics, intrusion prevention systems, and wireless security. How does Network Infrastructure Security work? Network Infrastructure Security requires a holistic approach of ongoing processes and practices to ensure that the underlying infrastructure remains protected. The Cybersecurity and Infrastructure Security Agency (CISA) recommends considering several approaches when addressing what methods to implement.  Segment and segregate networks and functions - Particular attention should be paid to the overall infrastructure layout. Proper segmentation and segregation is an effective security mechanism to limit potential intruder exploits from propagating into other parts of the internal network. Using hardware such as routers can separate networks creating boundaries that filter broadcast traffic. These micro- segments can then further restrict traffic or even be shut down when attacks are detected. Virtual separation is similar in design as physically separating a network with routers but without the required hardware.  Limit unnecessary lateral communications - Not to be overlooked is the peer- to-peer communications within a network. Unfiltered communication between peers could allow intruders to move about freely from computer to computer. This affords attackers the opportunity to establish persistence in the target network by embedding backdoors or installing applications.  Harden network devices - Hardening network devices is a primary way to enhance network infrastructure security. It is advised to adhere to industry standards and best practices regarding network encryption, available services,
• 23. securing access, strong passwords, protecting routers, restricting physical access, backing up configurations, and periodically testing security settings.
 Secure access to infrastructure devices - Administrative privileges are granted to allow certain trusted users access to resources. Ensure the authenticity of those users by implementing multi-factor authentication (MFA), managing privileged access, and managing administrative credentials.
 Perform out-of-band (OoB) network management - OoB management implements dedicated communications paths to manage network devices remotely. This strengthens network security by separating user traffic from management traffic.
 Validate integrity of hardware and software - Gray market products threaten IT infrastructure by providing a vector for attack into a network. Illegitimate products can be pre-loaded with malicious software waiting to be introduced into an unsuspecting network. Organizations should regularly perform integrity checks on their devices and software.
Why is Network Infrastructure Security important?
The greatest threat to network infrastructure security comes from hackers and malicious applications that attack and attempt to gain control over the routing infrastructure. Network infrastructure components include all the devices needed for network communications, including routers, firewalls, switches, servers, load balancers, intrusion detection systems (IDS), domain name system (DNS) servers, and storage systems. Each of these systems presents an entry point to hackers who want to place malicious software on target networks.
 Gateway Risk: Hackers who gain access to a gateway router can monitor, modify, and deny traffic in and out of the network.
 Infiltration Risk: By gaining control of internal routing and switching devices, a hacker can monitor, modify, and deny traffic between key hosts inside the network and exploit the trusted relationships between internal hosts to move laterally to other hosts.
Although there are any number of damaging attacks that hackers can inflict on a network, securing and defending the routing infrastructure should be of primary importance in preventing deep system infiltration.
What are the benefits of Network Infrastructure Security?
Network infrastructure security, when implemented well, provides several key benefits to a business's network.
 Improved resource sharing saves on costs: Because they are protected, resources on the network can be utilized by multiple users without threat, ultimately reducing the cost of operations.
 Shared site licenses: Security makes site licensing cheaper than licensing every machine.
 File sharing improves productivity: Users can securely share files across the internal network.
 Internal communications are secure: Internal email and chat systems are protected from prying eyes.
• 24.  Compartmentalization and secure files: Users' files and data are protected from each other, compared with using machines that multiple users share.
 Data protection: Data back-up to local servers is simple and secure, protecting vital intellectual property.
What are the different types of Network Infrastructure Security?
A variety of approaches to network infrastructure security exist; it is best to combine multiple approaches to broaden network defense.
 Access Control: The prevention of unauthorized users and devices from accessing the network.
 Application Security: Security measures placed on hardware and software to lock down potential vulnerabilities.
 Firewalls: Gatekeeping devices that can allow or prevent specific traffic from entering or leaving the network.
 Virtual Private Networks (VPN): VPNs encrypt connections between endpoints, creating a secure "tunnel" of communications over the internet.
 Behavioral Analytics: These tools automatically detect network activity that deviates from usual patterns.
 Wireless Security: Wireless networks are less secure than hardwired networks, and with the proliferation of new mobile devices and apps, there are ever-increasing vectors for network infiltration.
Infrastructure Security – The Host Level
When reviewing host security and assessing risks, the context of cloud service delivery models (SaaS, PaaS, and IaaS) and deployment models (public, private, and hybrid) should be considered [7]. The host security responsibilities in SaaS and PaaS services are transferred to the provider of cloud services. IaaS customers are primarily responsible for securing the hosts provisioned in the cloud (virtualization software security, customer guest OS or virtual server security).
Infrastructure Security – The Application Level
Application or software security should be a critical element of a security program. Most enterprises with information security programs have yet to institute an application security program to address this realm. Designing and implementing applications aimed at deployment on a cloud platform will require existing application security programs to re-evaluate current practices and standards. The application security spectrum ranges from standalone single-user applications to sophisticated multiuser e-commerce applications. This level is responsible for managing [7], [9], [10]: • Application-level security threats; • End user security; • SaaS application security; • PaaS application security; • Customer-deployed application security
• 25. • IaaS application security • Public cloud security limitations
It can be summarized that the issues of infrastructure security in cloud computing lie in defining and provisioning the specific security aspects that each party delivers.
Application level security refers to those security services that are invoked at the interface between an application and a queue manager to which it is connected. These services are invoked when the application issues MQI calls to the queue manager. The services might be invoked, directly or indirectly, by the application, the queue manager, another product that supports IBM® MQ, or a combination of any of these working together. Application level security is also known as end-to-end security or message level security. Here are some examples of application level security services:
 When an application puts a message on a queue, the message descriptor contains a user ID associated with the application. However, there is no data present, such as an encrypted password, that can be used to authenticate the user ID. A security service can add this data. When the message is eventually retrieved by the receiving application, another component of the service can authenticate the user ID using the data that has travelled with the message. This is an example of an identification and authentication service.
 A message can be encrypted when it is put on a queue by an application and decrypted when it is retrieved by the receiving application. This is an example of a confidentiality service.
 A message can be checked when it is retrieved by the receiving application. This check determines whether its contents have been deliberately modified since it was first put on a queue by the sending application. This is an example of a data integrity service.
 Planning for Advanced Message Security: Advanced Message Security (AMS) is a component of IBM MQ that provides a high level of protection for sensitive data flowing through the IBM MQ network, while not impacting the end applications.
 Providing your own application level security: You can provide your own application level security services. To help you implement application level security, IBM MQ provides two exits, the API exit and the API-crossing exit.
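To illustrate the confidentiality service described above in the simplest terms, here is a generic sketch that encrypts a message payload before it is handed to a queue and decrypts it on retrieval. It uses the Python cryptography library and a stand-in in-memory queue; it is not the IBM MQ API or AMS itself, only the end-to-end idea.

```python
# Minimal sketch of message-level (end-to-end) confidentiality: the payload is
# encrypted before being put on a queue and decrypted only by the receiver.
# Uses the 'cryptography' package; the queue here is a stand-in, not IBM MQ.
from queue import Queue
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice the key is shared out-of-band or via a key service
cipher = Fernet(key)
queue = Queue()                  # placeholder for a real message queue

# Sending side: encrypt, then put the ciphertext on the queue.
queue.put(cipher.encrypt(b"account=4521; balance=10000"))

# Receiving side: get the message and decrypt it with the same key.
plaintext = cipher.decrypt(queue.get())
print(plaintext.decode())
```

Even if the queue manager or the network between the two applications were compromised, the message contents would remain unreadable without the key.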
  • 26. Azure Firewall Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks. Azure Firewall uses a static public IP address for your virtual network resources allowing outside firewalls to identify traffic originating from your virtual network. The service is fully integrated with Azure Monitor for logging and analytics. According to the Shared Responsibilities for Cloud Computing, while Microsoft is responsible for maintaining the security of the infrastructure on which their cloud runs, users are also responsible for the resources that they use on the cloud. Users are, thus, required to make use of services that ensure the security of the resources on the cloud. There are measures to tackle security challenges in the cloud as well, just like the firewall in your Windows PC that you might have encountered, on multiple occasions, warning you about blocking certain applications, deemed a threat, from accessing the network. Azure Firewall is one such network security service from Microsoft Azure that monitors and takes action for unwanted network activities on the cloud.
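As a conceptual illustration of the kind of network rules Azure Firewall enforces centrally, the following plain-Python sketch evaluates traffic against an ordered allow/deny policy. It is not the Azure SDK or the firewall's actual engine; the rules, addresses and ports are assumptions made for illustration.

```python
# Illustrative sketch only: evaluate traffic against ordered allow/deny network rules,
# the style of policy a cloud firewall applies centrally. Not the Azure API.
from ipaddress import ip_address, ip_network

RULES = [  # hypothetical policy: first match wins, default deny
    {"action": "allow", "src": "10.0.1.0/24", "dst_port": 443},   # app subnet to HTTPS
    {"action": "deny",  "src": "0.0.0.0/0",   "dst_port": 3389},  # block RDP from anywhere
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule, or deny by default."""
    for rule in RULES:
        if ip_address(src_ip) in ip_network(rule["src"]) and dst_port == rule["dst_port"]:
            return rule["action"]
    return "deny"  # default-deny, as a stateful firewall policy typically ends

print(evaluate("10.0.1.25", 443))     # allow
print(evaluate("203.0.113.9", 3389))  # deny
```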
• 27. Since Azure Firewall is a cloud-based service, it can be made highly available and scaled up as and when required. Azure Firewall is also integrated with Azure Monitor so that the latter's logging and analytics capabilities can be used for maintaining strict security. Azure Firewall provides a unified solution for creating and enforcing policies for secure network connections across services and subscriptions in Azure. There is also an Azure Web Application Firewall that is specific to Application Gateway in Azure. While Azure Firewall guards the whole cloud environment against exploitation, the Azure Web Application Firewall works specifically to protect web apps against vulnerabilities.
Features of Azure Firewall
The features of Azure Firewall that make it stand out are:
 High availability: No extra configuration or additional services are required for Azure Firewall. It has very high uptime and is fully managed.
 Availability zones: A firewall can be made available across multiple availability zones, or it can be restricted to particular zones based on your requirements. There is no additional charge for this; however, data transfer rates can change depending on the zones.
 Scalability: The firewall can be scaled to adjust to varying network requirements.
 Traffic filtering rules: Rules can be specified based on IP addresses, ports, etc., for allowing or preventing connections. Azure Firewall can distinguish among packets from different connections and enforce the rules to allow or deny them.
 FQDN tags: Fully qualified domain name (FQDN) tags can be given to trusted sources that need to be allowed through the firewall. Rules can be created based on these tags to allow traffic from the qualified domains to pass through.
• 28.  Service tags: These are labels that indicate a range of IP addresses for Azure Key Vault, Container Registry, and other services. They are Microsoft-managed and cannot be changed. The firewall allows filtering rules based on these.
 Threat intelligence: Microsoft maintains a threat intelligence feed that lists sources and domains deemed malicious. Azure Firewall can filter connections to deny them, or alert users, based on this feed.
 Multiple public IP addresses: Multiple public IP addresses, up to 250, can be added to Azure Firewall. This enables the DNAT and SNAT features in your firewall.
 Azure Monitor logging: Azure Firewall is tightly integrated with Azure Monitor. Hence, all events are logged, and these logs can be archived to storage accounts or streamed to event hubs, etc.
 Web categories: Administrators can allow or deny access to certain websites based on the category to which they belong, such as social media websites, gaming websites, and others.
 Certifications: Payment card industry (PCI), service organization controls (SOC), International Organization for Standardization (ISO), and ICSA Labs certifications are all available for Azure Firewall.
Azure Firewall vs NSG
First of all, you need to know what an NSG is. NSG stands for network security group; it can be used to filter network traffic in the Azure cloud. An NSG contains rules based on IP addresses, ports, etc., which can allow or deny connections to and from Azure resources. Azure Firewall and NSG seem pretty similar, so let us compare them side by side:
 Rule-based filtering: supported by both Azure Firewall and NSG.
 FQDN tags: supported by Azure Firewall; not supported by NSG.
• 29.  Service tags: supported by both Azure Firewall and NSG.
 Threat-intelligence-based filtering: supported by Azure Firewall; not supported by NSG.
 Destination and source network address translation (DNAT and SNAT): supported by Azure Firewall; not supported by NSG.
 Azure Monitor integration: both Azure Firewall and NSG are well integrated with Azure Monitor.
From the comparison, it can be inferred that NSG lacks some features that Azure Firewall has, which makes Azure Firewall the more robust solution for cloud security. Even though NSG lacks a few features, Azure Firewall and NSG are not mutually exclusive; they can complement each other in providing the best protection for your Azure cloud resources.
Azure Firewall Limitations
Even though Azure Firewall is a feature-rich and robust service, it still has some limitations:
 Although it supports threat-intelligence-based filtering, Azure Firewall does not have IPS support, which many organizations require.
 Azure Firewall uses public DNS servers to look up domains, and it cannot be configured to use internal DNS servers.
 Azure Firewall can also be costly for some businesses.
Load Balancer
Cloud load balancing is the process of distributing workloads and computing resources across one or more servers. This kind of distribution ensures maximum throughput in minimum response time. The workload is segregated among two or more servers, hard drives, network interfaces or other computing resources, enabling better resource utilization and system response time. Thus, for a high-traffic website, effective use of cloud load balancing can ensure business continuity. The common objectives of using load balancers are:
 To maintain system stability.
• 30.  To improve system performance.
 To protect against system failures.
Cloud providers like Amazon Web Services (AWS), Microsoft Azure and Google offer cloud load balancing to facilitate easy distribution of workloads. For example, AWS offers Elastic Load Balancing (ELB) technology to distribute traffic among EC2 instances, and most AWS-powered applications have ELBs installed as a key architectural component. Similarly, Azure's Traffic Manager distributes its cloud servers' traffic across multiple data centres.
How does load balancing work?
Here, load refers not only to website traffic but also to the CPU load, network load and memory capacity of each server. A load balancing technique makes sure that each system in the network has the same amount of work at any instant of time, meaning that none of them is excessively overloaded or under-utilized. The load balancer distributes data depending upon how busy each server or node is. In the absence of a load balancer, clients must wait while their requests are processed, which can be slow and frustrating. Information such as the number of jobs waiting in the queue, the CPU processing rate, and the job arrival rate is exchanged between the processors during the load balancing process. Failure to apply load balancers correctly can lead to serious consequences, data loss being one of them. Different companies may use different load balancers and multiple load balancing algorithms, such as static and dynamic load balancing. One of the most commonly used methods is round-robin load balancing. It forwards client requests to each connected server in turn; on reaching the end of the list, the load balancer loops back and repeats it. The major benefit is its ease of implementation. Load balancers check system heartbeats at set intervals to verify whether each node is performing well.
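Since round-robin is described above as one of the most common methods, here is a minimal, illustrative Python sketch of the idea. The server names are placeholders, and a production load balancer would also track the health checks mentioned above.

```python
# Minimal sketch of round-robin load balancing: requests are handed to each
# server in turn, looping back to the start of the list.
from itertools import cycle

servers = ["app-server-1", "app-server-2", "app-server-3"]   # hypothetical back-end pool
rotation = cycle(servers)                                     # endless round-robin iterator

def route(request_id: int) -> str:
    """Pick the next server in the rotation for this request."""
    target = next(rotation)
    return f"request {request_id} -> {target}"

for i in range(1, 7):          # six requests spread evenly over three servers
    print(route(i))
```

A weighted variant would simply repeat higher-capacity servers more often in the rotation, which is the essence of the weighted round robin algorithm described below.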
• 31. Different Types of Load Balancing Algorithms in Cloud Computing:
1. Static Algorithm
Static algorithms are built for systems with very little variation in load. The entire traffic is divided equally between the servers. This algorithm requires in-depth knowledge of server resources, determined at the beginning of the implementation, for better processor performance. However, the decision to shift load does not depend on the current state of the system. One of the major drawbacks of the static load balancing algorithm is that load balancing tasks work only after they have been created; they cannot be shifted to other devices during execution.
2. Dynamic Algorithm
The dynamic algorithm first finds the lightest server in the entire network and gives it priority for load balancing. This requires real-time communication with the network, which can add to the system's traffic. Here, the current state of the system is used to control the load. The defining characteristic of dynamic algorithms is that load transfer decisions are based on the current system state, so processes can move from a highly used machine to an underutilized machine in real time.
3. Round Robin Algorithm
As the name suggests, the round robin load balancing algorithm uses the round-robin method to assign jobs. It randomly selects the first node and then assigns tasks to the other nodes in a round-robin manner. This is one of the easiest methods of load balancing. Processes are assigned to processors circularly, without defining any priority. It gives a fast response when the workload is distributed uniformly among the processes; however, because processes have different load times, some nodes may become heavily loaded while others remain under-utilised.
4. Weighted Round Robin Load Balancing Algorithm
Weighted round robin load balancing algorithms have been developed to address the most challenging issues of round robin algorithms. In this algorithm, each server is assigned a weight, and tasks are distributed according to the weight values. Processors with higher capacity are given higher weights, so higher-capacity servers receive more tasks. When the full load level is reached, the servers receive stable traffic.
5. Opportunistic Load Balancing Algorithm
• 32. The opportunistic load balancing (OLB) algorithm tries to keep every node busy. It never considers the current workload of each system; regardless of the current workload on each node, OLB distributes all unfinished tasks to these nodes. Because OLB does not take into account the execution time of a task on a node, tasks may be processed slowly and bottlenecks can occur even when some nodes are free.
6. Minimum To Minimum Load Balancing Algorithm
Under the minimum-to-minimum (min-min) load balancing algorithm, the expected completion times of the tasks are calculated first, and the task with the overall minimum completion time is selected. That task is scheduled on the machine that can finish it in that minimum time, the machine's workload is updated, and the task is removed from the list. This process continues until the final task has been assigned. This algorithm works best where many small tasks outnumber large tasks.
Load balancing solutions can be categorized into two types -
o Software-based load balancers: Software-based load balancers run on standard hardware (desktop, PC) and standard operating systems.
o Hardware-based load balancers: Hardware-based load balancers are dedicated boxes that contain application-specific integrated circuits (ASICs) optimized for a particular use. ASICs allow network traffic to be forwarded at high speeds and are often used for transport-level load balancing, because hardware-based load balancing is faster than a software solution.
Major Examples of Load Balancers -
o Direct Routing Request Dispatch Technique: This method of request dispatch is similar to the one implemented in IBM's NetDispatcher. A real server and the load balancer share a virtual IP address. The load balancer has an interface configured with the virtual IP address that accepts request packets and routes the packets directly to the selected server.
o Dispatcher-Based Load Balancing Cluster: A dispatcher performs smart load balancing using server availability, workload, capacity and other user-defined parameters to regulate where TCP/IP requests are sent. The dispatcher module of a load balancer can split HTTP requests among different nodes in a cluster. The dispatcher divides the load among multiple servers in a cluster, so services from different nodes act like a single virtual service on only one IP address; consumers connect as if it were a single server, without knowledge of the back-end infrastructure.
• 33. o Linux Virtual Load Balancer: This is an open-source, enhanced load balancing solution used to build highly scalable and highly available network services such as HTTP, POP3, FTP, SMTP, media and caching services, and Voice over Internet Protocol (VoIP). It is a simple and powerful product designed for load balancing and fail-over. The load balancer itself is the primary entry point to the server cluster system. It can execute Internet Protocol Virtual Server (IPVS), which implements transport-layer load balancing in the Linux kernel, also known as layer-4 switching.
Types of Load Balancing
You will need to understand the different types of load balancing for your network. Server load balancing distributes load across a pool of servers such as relational database servers, global server load balancing distributes requests across servers in different geographic locations, and DNS load balancing ensures domain name functionality. Load balancing can also be provided by cloud-based balancers.
Network Load Balancing – Cloud load balancing takes advantage of network-layer information and uses it to decide where network traffic should be sent. This is accomplished through Layer 4 load balancing, which handles TCP/UDP traffic. It is the fastest load balancing solution, but it cannot take the content of a request into account when distributing traffic across servers.
HTTP(S) Load Balancing – HTTP(S) load balancing is the oldest type of load balancing, and it relies on Layer 7, which means it operates at the application layer. It is the most flexible type of load balancing because it lets you make delivery decisions based on information carried in the HTTP request, such as the address (URL).
Internal Load Balancing – It is very similar to network load balancing, but it is used to balance traffic within the internal infrastructure.
Load balancers can be further divided into hardware, software and virtual load balancers.
Hardware Load Balancer – It relies on dedicated physical hardware that distributes network and application traffic. Such a device can handle a large traffic volume, but it comes with a hefty price tag and has limited flexibility.
Software Load Balancer – It can be an open-source or commercial product and must be installed before it can be used. These are more economical than hardware solutions.
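To make the Layer 4 versus Layer 7 distinction concrete, here is a small, hypothetical Python sketch: the Layer 4 path picks a back end from the TCP/UDP five-tuple alone, while the Layer 7 path inspects the HTTP request itself. The back-end addresses and the routing rule are made up for illustration only.

import hashlib

backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]   # hypothetical back-end pool

def l4_pick(src_ip, src_port, dst_ip, dst_port, proto):
    """Layer 4: choose a back end from the TCP/UDP five-tuple only.
    Fast, but cannot look at what the request actually contains."""
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

def l7_pick(http_path):
    """Layer 7: choose a back end by inspecting the HTTP request,
    e.g. routing image requests to a dedicated static-content server."""
    if http_path.startswith("/images/"):
        return "10.0.1.10"        # hypothetical static-content server
    return backends[0]

print(l4_pick("203.0.113.7", 51514, "198.51.100.1", 443, "TCP"))
print(l7_pick("/images/logo.png"))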
• 34. Virtual Load Balancer – It differs from a software load balancer in that it deploys the software of a hardware load-balancing device on a virtual machine.
Advantages of Cloud Load Balancing
a) High-performing applications – Cloud load balancing techniques, unlike their traditional on-premise counterparts, are less expensive and simple to implement. Enterprises can make their client applications work faster and deliver better performance, at potentially lower cost.
b) Increased scalability – Cloud load balancing takes advantage of the cloud's scalability and agility to handle website traffic. With efficient load balancers you can easily absorb increases in user traffic and distribute the load among various servers or network devices. This is especially important for e-commerce websites, which deal with thousands of visitors every second and need effective load balancers to distribute workloads during sales and other promotional offers.
c) Ability to handle sudden traffic spikes – A university site that normally runs smoothly can go down completely when results are declared, because too many requests arrive at the same time. With cloud load balancers there is no need to worry about such traffic surges: however large the load, it can be distributed among different servers to produce maximum throughput with a short response time.
d) Business continuity with complete flexibility – The basic objective of using a load balancer is to protect a website from sudden outages. When the workload is distributed among various servers or network units, even if one node fails the burden can be shifted to another active node. Thus, with increased redundancy, scalability and other features, load balancing easily handles website or application traffic.
Traffic Manager
Microsoft Azure Traffic Manager enables customers to control the distribution of user traffic across multiple service endpoints situated in data centers around the world. Cloud Services, Web Apps, and Azure VMs are among the service endpoints supported by Azure Traffic Manager; non-Azure external endpoints can be used as well. Traffic Manager uses DNS (Domain Name System) to direct client requests to the most appropriate endpoint according to the selected traffic-routing method.
Why do we use Azure Traffic Manager? Depending on the selected routing method, Azure Traffic Manager selects an endpoint.
• 35.  To fulfill the needs of varied applications, it provides a wide range of traffic-routing techniques.
 After an endpoint has been chosen, the client connects directly to the appropriate service endpoint.
 Automatic failover and endpoint health checks are also included.
 It also enables you to build highly robust applications that continue to function even if an entire Azure region goes down.
Features of the Azure Traffic Manager
Azure Traffic Manager delivers network traffic load balancing and management services for cloud-based systems. It is mostly used to:
 Ensure availability and reduce downtime – Traffic Manager supports automated failover for Azure Cloud Services, Azure Websites, and other defined endpoints.
 Upgrade / maintain endpoints without downtime – Traffic Manager allows an endpoint to be disabled, so developers and IT administrators can upgrade and test the endpoint without downtime.
 Combine hybrid applications – Traffic Manager also works with non-Azure endpoints, supporting hybrid cloud and on-premises scenarios such as "migrate-to-cloud", "burst-to-cloud" and "failover-to-cloud".
 Distribute traffic – Traffic may be dispersed over many data centers or Azure destinations using nested profiles.
 Increase the availability of applications – By monitoring your endpoints and providing automated failover when one fails, it guarantees the availability of your critical applications.
 Enhance application performance – Azure makes it possible to run cloud services or websites in data centers all around the world. Traffic is directed to the endpoint with the lowest network latency for the client, which enhances application performance.
Routing Methods of the Azure Traffic Manager
Azure Traffic Manager distributes traffic using one of six traffic-routing methods, which determine which destination is returned in the DNS response.
• 36. There are the following traffic routing strategies available:
 Priority
 When you select the priority routing strategy, you configure a prioritized list of service endpoints; the primary service endpoint has top priority and receives all traffic.
 If the primary service endpoint is unavailable, traffic is forwarded to the endpoint with the next highest priority.
• 37.  Weighted
 Weighted routing is used when you want to distribute traffic evenly, or according to pre-determined weights, across a set of destinations.
 With this traffic-routing method, each endpoint in the Microsoft Azure Traffic Manager profile is given a weight, which is a number between 1 and 1,000.
 Performance
 By sending traffic to the location closest to the user, this traffic-routing method helps applications respond more quickly.
 The 'nearest' endpoint is not necessarily the one that is physically closest.
 Instead, the 'Performance' traffic-routing strategy chooses the nearest destination by measuring network latency.
• 38.  Multivalue
 You can select MultiValue if your Azure Traffic Manager profile includes only IPv4 or IPv6 addresses as destinations.
 When a request is made for this profile, all healthy endpoints are returned.
 Geographic
 In geographic routing, a set of geographic regions is assigned to each endpoint associated with the profile.
 When a region or group of regions is assigned to an endpoint, requests originating from those locations are directed only to that endpoint.
• 39.  Subnet
 Use the Subnet traffic-routing method to associate groups of end-user IP address ranges with a particular endpoint inside an Azure Traffic Manager profile.
 When a request is received, the endpoint returned is the one mapped to the request's originating IP address.
(A short Python sketch of how these routing methods choose an endpoint appears after the benefits list below.)
Benefits of Azure Traffic Manager
 Enhances the availability of critical applications.
 Endpoint monitoring and automated failover provide very high application availability even if an endpoint fails.
 Improves responsiveness for high-performance applications.
 Traffic Manager distributes traffic according to the traffic-routing method best suited to the circumstances.
 Endpoint health is constantly monitored.
 Automatic failover occurs if an endpoint fails.
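As a rough illustration only, the sketch below models how a DNS-based service like Traffic Manager could pick an endpoint under the priority, weighted and performance methods. The profile data (endpoint names, weights, latencies, health states) is invented; real Traffic Manager behaviour is configured in Azure, not implemented in user code like this.

import random

# Hypothetical endpoints of one Traffic Manager-style profile.
endpoints = [
    {"name": "app-westeurope", "priority": 1, "weight": 800,
     "latency_ms": {"EU": 20, "US": 110}, "healthy": True},
    {"name": "app-eastus", "priority": 2, "weight": 200,
     "latency_ms": {"EU": 95, "US": 15}, "healthy": True},
]

def resolve(method, client_region="EU"):
    """Return the endpoint name a DNS query would receive, per routing method."""
    healthy = [e for e in endpoints if e["healthy"]]        # health checks filter first
    if method == "priority":                                # lowest priority value wins
        return min(healthy, key=lambda e: e["priority"])["name"]
    if method == "weighted":                                # pick proportionally to weight
        return random.choices(healthy, weights=[e["weight"] for e in healthy])[0]["name"]
    if method == "performance":                             # lowest latency for this client
        return min(healthy, key=lambda e: e["latency_ms"][client_region])["name"]
    raise ValueError(f"unsupported routing method: {method}")

print(resolve("priority"))
print(resolve("weighted"))
print(resolve("performance", client_region="US"))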
• 40. Network Security Groups and Application Security Groups
Network Security Group
An Azure network security group (NSG) is used to filter network traffic between Azure resources in an Azure virtual network. We can create security rules to allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
Security rules are evaluated and applied based on the five-tuple (source, source port, destination, destination port, and protocol) information. You can't create two security rules with the same priority and direction. A flow record is created for existing connections, and communication is allowed or denied based on the connection state of the flow record; the flow record is what allows a network security group to be stateful.
An Azure network security group is essentially a set of access control rules that can be used to secure a subnet or a virtual network; these rules examine incoming and outgoing traffic to determine whether to accept or reject a packet. The VM-level (network interface) network security group and the subnet-level network security group are the two levels that make up Azure network security.
 Azure Network Security Groups, Microsoft's fully managed solution, help to filter traffic to and from an Azure VNet.
 An Azure NSG consists of any number of security rules, which users can enable or disable.
 These rules are evaluated using five-tuple information.
 The five tuple consists of the source IP address, source port, destination IP address, destination port, and protocol.
 Because NSGs operate at OSI layers 3 and 4, you can quickly associate a Network Security Group with a subnet or a VM network interface.
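The toy Python model below sketches, under simplifying assumptions, how priority-based five-tuple rule matching works and how a subnet-level NSG and a NIC-level NSG are evaluated in sequence for inbound traffic (as the walkthrough on the following slides describes). Real NSGs also support address prefixes, service tags, default rules and stateful flow records, which are omitted here; all rule values, ports and addresses are hypothetical.

from dataclasses import dataclass

@dataclass
class Rule:
    priority: int      # lower number is evaluated first
    direction: str     # "Inbound" or "Outbound"
    access: str        # "Allow" or "Deny"
    protocol: str      # "TCP", "UDP" or "*"
    src: str           # source address, or "*" (this toy model matches exactly or by wildcard)
    dst: str           # destination address, or "*"
    dst_port: str      # destination port, or "*"

def matches(field, value):
    return field == "*" or field == value

def evaluate(rules, direction, proto, src, dst, dst_port):
    """First matching rule by ascending priority decides; no match means deny."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if (rule.direction == direction and matches(rule.protocol, proto)
                and matches(rule.src, src) and matches(rule.dst, dst)
                and matches(rule.dst_port, dst_port)):
            return rule.access
    return "Deny"

# Hypothetical subnet-level and NIC-level NSGs.
nsg_subnet = [Rule(100, "Inbound", "Allow", "TCP", "*", "10.0.1.4", "80"),
              Rule(110, "Inbound", "Allow", "TCP", "*", "10.0.1.4", "443"),
              Rule(65500, "Inbound", "Deny", "*", "*", "*", "*")]   # like DenyAllInbound
nsg_nic    = [Rule(100, "Inbound", "Allow", "TCP", "*", "10.0.1.4", "443"),
              Rule(65500, "Inbound", "Deny", "*", "*", "*", "*")]

def inbound_allowed(proto, src, dst, dst_port):
    # Inbound order: subnet NSG first, then NIC NSG; both must allow the traffic.
    if evaluate(nsg_subnet, "Inbound", proto, src, dst, dst_port) == "Deny":
        return "Deny"
    return evaluate(nsg_nic, "Inbound", proto, src, dst, dst_port)

print(inbound_allowed("TCP", "203.0.113.7", "10.0.1.4", "443"))  # Allow: both NSGs permit 443
print(inbound_allowed("TCP", "203.0.113.7", "10.0.1.4", "80"))   # Deny: the NIC-level NSG blocks 80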
  • 41. Reference the above picture, along with the following text, to understand how Azure processes inbound and outbound rules for network security groups: Inbound traffic For inbound traffic, Azure processes the rules in a network security group associated to a subnet first, if there’s one, and then the rules in a network security group associated to the network interface, if there’s one. This includes intra-subnet traffic as well.  VM1: The security rules in NSG1 are processed, since it’s associated to Subnet1 and VM1 is in Subnet1. Unless you’ve created a rule that allows port 80 inbound, the traffic is denied by the DenyAllInbound default security rule, and never evaluated by NSG2, since NSG2 is associated to the network interface. If NSG1 has a security rule that allows port 80, the traffic is then processed by NSG2. To allow port 80 to the virtual machine, both NSG1 and NSG2 must have a rule that allows port 80 from the internet.  VM2: The rules in NSG1 are processed because VM2 is also in Subnet1. Since VM2 doesn’t have a network security group associated to its network interface, it receives all traffic allowed through NSG1 or is denied all traffic denied by NSG1. Traffic is either allowed or denied to all resources in the same subnet when a network security group is associated to a subnet.
• 42.  VM3: Since there's no network security group associated to Subnet2, traffic is allowed into the subnet and processed by NSG2, because NSG2 is associated to the network interface attached to VM3.
 VM4: Traffic is allowed to VM4, because a network security group isn't associated to Subnet3, or to the network interface in the virtual machine. All network traffic is allowed through a subnet and network interface if they don't have a network security group associated to them.
Outbound traffic
For outbound traffic, Azure processes the rules in a network security group associated to a network interface first, if there's one, and then the rules in a network security group associated to the subnet, if there's one. This includes intra-subnet traffic as well.
 VM1: The security rules in NSG2 are processed. Unless you create a security rule that denies port 80 outbound to the internet, the traffic is allowed by the AllowInternetOutbound default security rule in both NSG1 and NSG2. If NSG2 has a security rule that denies port 80, the traffic is denied, and never evaluated by NSG1. To deny port 80 from the virtual machine, either or both of the network security groups must have a rule that denies port 80 to the internet.
 VM2: All traffic is sent through the network interface to the subnet, since the network interface attached to VM2 doesn't have a network security group associated to it. The rules in NSG1 are processed.
 VM3: If NSG2 has a security rule that denies port 80, the traffic is denied. If NSG2 has a security rule that allows port 80, then port 80 is allowed outbound to the internet, since a network security group isn't associated to Subnet2.
 VM4: All network traffic is allowed from VM4, because a network security group isn't associated to the network interface attached to the virtual machine, or to Subnet3.
Azure Network Security Group – How does it work?
 A great choice for safeguarding virtual networks is Microsoft's Azure Network Security Group (NSG).
  • 43.  Using this application, network administrators may quickly organize, filter, direct, and regulate different network traffic flows.  When building Azure NSG, you can configure various incoming and outgoing rules to permit or disallow particular types of traffic.  If you want to use Azure Network Security Groups, you should build and configure individual rules.  Multiple Azure services’ resources may be included in an Azure virtual network.  The full list is available under Services that may be put into a virtual network.  There can be zero or one network security group configured to each virtual network subnet and network interface in a virtual machine.  As many subnets and network interfaces as you like can be connected to the same network security group. Depending on the circumstances, you can establish whatever rules you like, such as whether the traffic traversing the network is secure and ought to be permitted. Azure Network Security Group Rules  Allow Vnet InBound – This rule allows all hosts within the virtual network (including subnets) to communicate without being blocked.
• 44.  Allow Azure LoadBalancer InBound – This rule permits the Azure load balancer to communicate with your virtual machine and send heartbeats.
 Deny All InBound – This is the deny-all rule, which by default blocks all inbound traffic to the VM and protects it from malicious access from outside the Azure VNet.
Application Security Group
Normally when you deploy a network security group (NSG) it is assigned either to a NIC or to a subnet (preferred). If you deploy that NSG to a subnet, the rules apply to all of the NICs, and therefore all of the virtual machines, in that subnet. This is fine when you are deploying a new system where you can easily place virtual machines into subnets and treat each subnet as its own security zone. But in the real world things aren't always that clean, and you might need a more dynamic or flexible way of applying rules to some of the machines in a subnet.
ASGs are used within an NSG to apply a network security rule to a specific workload or group of VMs: the ASG acts as the "network object" referenced in the rule, and the VMs' network interfaces are added to this object rather than listing explicit IP addresses. This provides the capability to group VMs into associated groups or workloads, simplifying the NSG rule definition process.
Another great use of this is scalability: creating a virtual machine and assigning it to its ASG immediately gives it all the NSG rules already in place for that specific ASG, with zero disruption to your service!
ASG Key Points
 Application Security Groups allow us to define fine-grained network security policies based on workloads, centered on applications, instead of explicit IP addresses.
 ASGs provide the capability of grouping VMs with monikers and securing our applications by filtering traffic.
• 45.  By implementing granular security traffic controls, we can improve isolation of workloads and protect them individually.
 If a breach occurs, this method limits the potential impact of lateral movement across our networks by attackers.
 The security definition is simplified when using ASGs.
 We can define application groups by providing a descriptive moniker that fits our architecture.
 We can use them however we want, i.e. for applications, systems, environments, workload types, tiers or even any kind of role.
 We can define a single collection of rules using ASGs and NSGs; we just have to apply a single NSG to our entire virtual network, on all subnets.
 Defining a single NSG in this way gives us full visibility of all traffic policies and a single place for management, which reduces the administrative burden.
Benefits of using ASGs:
 We can scale at our own pace. While deploying VMs, we can make them members of the appropriate ASGs.
 If a VM runs more than one workload, we can simply assign it multiple ASGs.
 Access is always granted based on workloads.
 We don't have to worry about the security definition ever again.
 Most importantly, we can implement a zero-trust model, meaning we limit access to the application flows that are explicitly permitted.
 ASGs introduce the ability to deploy multiple applications within the same subnet and isolate traffic based on ASGs.
• 46.  With the use of Application Security Groups, we can reduce the number of Network Security Groups in our subscription.
 In some cases this is so helpful that you can use a single NSG for multiple subnets of your virtual network.
Associate Virtual Machines
An application security group is a logical collection of virtual machines (more precisely, of their NICs). You join virtual machines to the application security group, and then use the application security group as a source or destination in NSG rules.
The Networking blade of the virtual machine properties has a button called Configure The Application Security Groups for each NIC in the virtual machine. If you click this button, a pop-up blade appears in which you can select which application security groups (none, one or many) this NIC should join, and then click Save to commit the change.
A virtual machine can be attached to more than one Application Security Group. This helps in cases of multi-application servers.
• 47. The following requirements apply to the creation and use of ASGs:
 All network interfaces used in an ASG must be within the same VNet.
 If ASGs are used in both the source and the destination, they must be within the same VNet.
Creating NSG Rules
You can now open an NSG and create inbound or outbound rules that use the application security group as a source or destination, and thus use the associated virtual machine NICs as
• 48. sources and destinations. The Source and Destination fields in the new rule blade allow you to select any application security group in the same region.
As virtual machines are added, removed or updated, the management overhead required to maintain the NSG may become quite considerable. This is where ASGs come into play to simplify NSG rule creation and the continued maintenance of the rules. Instead of defining IP prefixes, you create an ASG and use it within the NSG rule. The Azure platform takes care of the rest by determining the IPs that are covered within the ASG. As network interfaces of VMs are added to the ASG, the effective network security rules are applied without the need to update the NSG rule itself.
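The short Python sketch below is a conceptual model, not Azure code, of what an NSG rule written against ASGs achieves: the rule references groups of NICs, the platform resolves the members' IP addresses at evaluation time, and adding a NIC to the ASG extends coverage without changing the rule. The ASG names, NICs and addresses are invented for illustration.

# Hypothetical application security groups: each maps the NICs joined to it to their IPs.
asg_web = {"vm-web-1-nic": "10.0.1.4", "vm-web-2-nic": "10.0.1.5"}
asg_db  = {"vm-db-1-nic": "10.0.2.4"}

# One NSG rule written against ASGs instead of IP prefixes.
rule = {"name": "allow-web-to-db", "source_asg": asg_web, "dest_asg": asg_db,
        "protocol": "TCP", "dest_port": 1433, "access": "Allow"}

def rule_permits(rule, src_ip, dst_ip, dst_port):
    """The platform, not the rule author, resolves ASG members to IPs at evaluation time."""
    return (src_ip in rule["source_asg"].values()
            and dst_ip in rule["dest_asg"].values()
            and dst_port == rule["dest_port"]
            and rule["access"] == "Allow")

print(rule_permits(rule, "10.0.1.4", "10.0.2.4", 1433))  # True: web VM to DB VM on port 1433

# Scaling out: joining a new NIC to asg_web makes the existing rule cover it automatically.
asg_web["vm-web-3-nic"] = "10.0.1.6"
print(rule_permits(rule, "10.0.1.6", "10.0.2.4", 1433))  # True, without touching the rule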