Questions & Answers (Demo Version - Limited Content)

ISC2 CISSP Exam
Certified Information Systems Security Professional (CISSP)

Get the full file: https://www.certifiedumps.com/isc2/cissp-dumps.html
Exam Topics Demo Version Breakdown

Topic 1: Exam Pool A - 9
Topic 2: Exam Pool B - 6
Topic 3: Security Architecture and Engineering - 5
Topic 4: Communication and Network Security - 6
Topic 5: Identity and Access Management (IAM) - 4
Topic 6: Security Assessment and Testing - 4
Topic 7: Security Operations - 11
Topic 8: Software Development Security - 6
Topic 9: Exam Set A - 14
Topic 10: Exam Set B - 14
Topic 11: Exam Set C - 12
Topic 12: New Questions B - 30
Topic 13: NEW Questions C - 30

Total Questions (Demo): 150
Version: 42.0

Topic 1, Exam Pool A

Question: 1

All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that:

A. determine the risk of a business interruption occurring
B. determine the technological dependence of the business processes
C. identify the operational impacts of a business interruption
D. identify the financial impacts of a business interruption

Answer: A

Explanation:
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that:
Identify the operational impacts of a business interruption, such as loss of revenue, customer satisfaction, reputation, legal obligations, etc.
Identify the financial impacts of a business interruption, such as direct and indirect costs, fines, penalties, etc.
Determine the technological dependence of the business processes, such as hardware, software, network, data, etc.
Establish the recovery time objective (RTO) and recovery point objective (RPO) for each business process, which indicate the maximum acceptable downtime and data loss, respectively.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, estimates the likelihood and impact of such events, evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.

Question: 2

Which of the following actions will reduce risk to a laptop before traveling to a high risk area?

A. Examine the device for physical tampering
B. Implement more stringent baseline configurations
C. Purge or re-image the hard disk drive
D. Change access codes

Answer: C

Explanation:
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area reduces the risk of data compromise or theft if the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging erases all the data and applications on the laptop, leaving only the operating system and the essential software, which minimizes the exposure of sensitive or confidential information to malicious actors. Purging or re-imaging should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk as effectively. Examining the device for physical tampering will only detect a compromise after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect data that remains stored on the drive.
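The "overwriting" method mentioned in the explanation can be sketched as follows. This is an illustrative sketch only (the file contents and function name are invented), and single-pass overwriting is generally ineffective on SSDs because of wear leveling; real purging should follow media-appropriate guidance such as NIST SP 800-88.

```python
import os
import secrets
import tempfile

def overwrite_and_delete(path, passes=1):
    # Replace the file's contents with random bytes before unlinking it,
    # so the original bytes are not simply left behind on magnetic media.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway temp file standing in for sensitive data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"travel itinerary and credentials")
overwrite_and_delete(path)
print(os.path.exists(path))  # False
```

Note that this only addresses data at the file level; purging a whole drive before travel (the option the question asks about) also removes filesystem metadata, swap, and slack space that per-file deletion misses.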
Changing access codes will only make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from removable media or removing the hard disk drive.

Question: 3

Which of the following represents the GREATEST risk to data confidentiality?

A. Network redundancies are not implemented
B. Security awareness training is not completed
C. Backup tapes are generated unencrypted
D. Users have administrative privileges

Answer: C

Explanation:
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality. Unimplemented network redundancies affect the availability and reliability of the network, but not necessarily the confidentiality of the data. Incomplete security awareness training increases the likelihood of human error or negligence that could compromise the data, but not as directly as unencrypted backup tapes. Administrative privileges grant users more access and control over the system and the data, but do not expose the data as widely as unencrypted backup tapes.
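The "encrypt the tapes, manage the keys separately" advice can be made concrete with a toy sketch. Everything below is invented for illustration, and the keystream construction (SHA-256 in counter mode) is NOT a vetted cipher; real backups should use an audited implementation of something like AES-GCM, with keys held in a key-management system rather than alongside the media.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR the data against a SHA-256 counter-mode keystream.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

backup = b"customer ledger: alice, bob, carol"
key = secrets.token_bytes(32)   # held in a key vault, never shipped with the tape

tape = keystream_xor(key, backup)     # what actually leaves the building
restored = keystream_xor(key, tape)   # only the key holder can recover the data

print(tape != backup, restored == backup)
```

Because the same XOR recovers the plaintext, whoever finds the tape without the key sees only the keystream-masked bytes, which is exactly the property the explanation is after.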
Question: 4

What is the MOST important consideration from a data security perspective when an organization plans to relocate?

A. Ensure the fire prevention and detection systems are sufficient to protect personnel
B. Review the architectural plans to determine how many emergency exits are present
C. Conduct a gap analysis of the new facilities against existing security requirements
D. Revise the Disaster Recovery and Business Continuity (DR/BC) plan

Answer: C

Explanation:
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is good practice, but it is a reactive measure rather than a preventive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations, and it should be updated regularly, not only when relocating.

Question: 5

A company whose Information Technology (IT) services are being delivered from a Tier 4 data center is preparing a companywide Business Continuity Plan (BCP). Which of the following failures should the IT manager be concerned with?

A. Application
B. Storage
C. Power
D. Network
Answer: A

Explanation:
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can experience only about 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.

Question: 6

When assessing an organization's security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?

A. Only when assets are clearly defined
B. Only when standards are defined
C. Only when controls are put in place
D. Only when procedures are defined
Answer: B

Explanation:
When assessing an organization's security policy according to the standards established by ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. They include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary with the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, controls are the measures implemented to reduce security risks and achieve the security objectives, and procedures are the detailed instructions for performing security tasks and activities, but none of these determine the management responsibilities.

Question: 7

Which of the following types of technologies would be the MOST cost-effective method to provide a
reactive control for protecting personnel in public areas?

A. Install mantraps at the building entrances
B. Enclose the personnel entry area with polycarbonate plastic
C. Supply a duress alarm for personnel exposed to the public
D. Hire a guard to protect the public area

Answer: C

Explanation:
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word, and it can alert security personnel, law enforcement, or other responders to the location and nature of the emergency and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.

Question: 8

An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?

A. Development, testing, and deployment
B. Prevention, detection, and remediation
C. People, technology, and operations
D. Certification, accreditation, and monitoring

Answer: C

Explanation:
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. These need to be selected, configured, updated, and monitored according to security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. These need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are functions of security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are outcomes of security evaluation, which describes how security is assessed and verified against criteria and standards.

Question: 9

Intellectual property rights are PRIMARILY concerned with which of the following?

A. Owner's ability to realize financial gain
B. Owner's ability to maintain copyright
C. Right of the owner to enjoy their creation
D. Right of the owner to control delivery method

Answer: A
Explanation:
Intellectual property rights are primarily concerned with the owner's ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner's interests and incentives, and to reward them for their contribution to society and the economy.
The other options are not the primary concern of intellectual property rights, but rather secondary or incidental aspects of them. The owner's ability to maintain copyright is a means of enforcing intellectual property rights, not their end goal. The right of the owner to enjoy their creation is a personal or moral right, not a legal or economic one. The right of the owner to control the delivery method is a specific, technical aspect of intellectual property rights, not a general or fundamental one.

Topic 2, Exam Pool B

Question: 10

Which of the following is MOST important when assigning ownership of an asset to a department?

A. The department should report to the business owner
B. Ownership of the asset should be periodically reviewed
C. Individual accountability should be ensured
D. All members should be trained on their responsibilities

Answer: C

Explanation:
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. It also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department reporting to the business owner is a management issue, not a security issue. Periodically reviewing ownership of the asset is good practice, but it does not prevent misuse or abuse of the asset. Training all members on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of those responsibilities.

Question: 11

Which one of the following affects the classification of data?

A. Assigned security label
B. Multilevel Security (MLS) architecture
C. Minimum query size
D. Passage of time

Answer: D

Explanation:
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements, and it helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static but dynamic, meaning that it can change over time. The passage of time can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified later, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated later, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect changes in the data over time.
The other options are not factors that affect the classification of data, but rather outcomes or components of data classification. An assigned security label is the result of data classification, indicating the level of sensitivity or criticality of the data. A Multilevel Security (MLS) architecture is a system that supports data classification by allowing different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification by limiting the amount of data that can be retrieved or displayed at a time.
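The periodic-review point can be made concrete with a small sketch; the record layout, labels, and dates below are invented for illustration.

```python
import datetime

# Each classified record carries a date after which its label must be
# re-examined, since sensitivity changes with the passage of time.
records = [
    {"id": 1, "label": "confidential", "review_after": datetime.date(2020, 1, 1)},
    {"id": 2, "label": "public",       "review_after": datetime.date(2030, 1, 1)},
]

def due_for_review(records, today):
    # Anything past its review date must be re-assessed and possibly
    # re-labeled, either downward (declassified) or upward.
    return [r["id"] for r in records if today >= r["review_after"]]

print(due_for_review(records, datetime.date(2024, 6, 1)))  # [1]
```

A real program would also record who performed each review, tying the reclassification decision back to an accountable owner.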
Question: 12

Which of the following BEST describes the responsibilities of a data owner?

A. Ensuring quality and validation through periodic audits for ongoing data integrity
B. Maintaining fundamental data availability, including data storage and archiving
C. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
D. Determining the impact the information has on the mission of the organization

Answer: D

Explanation:
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options describe the responsibilities of other roles related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving, is a responsibility of a data custodian, who implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users and maintaining appropriate levels of data security is a responsibility of a data controller, who determines the purposes and means of processing the data.
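One way to picture the owner's "impact on the mission" judgment is the low/moderate/high impact scale of FIPS 199 combined with a high-water-mark aggregation; the function below is an invented illustration of that idea, not a prescribed scheme.

```python
def classify_by_impact(confidentiality, integrity, availability):
    # High-water mark: the overall impact level is the highest impact
    # assessed across the three security objectives.
    order = {"low": 0, "moderate": 1, "high": 2}
    return max((confidentiality, integrity, availability), key=order.__getitem__)

# The data owner assesses the consequence of loss per objective;
# the resulting level then drives the classification label and controls.
print(classify_by_impact("low", "moderate", "low"))  # moderate
print(classify_by_impact("high", "low", "low"))      # high
```

The point of the sketch is that the owner supplies the impact judgments; custodians and stewards then apply whatever controls the resulting level requires.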
Question: 13

An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?

A. Platform as a Service (PaaS)
B. Identity as a Service (IDaaS)
C. Desktop as a Service (DaaS)
D. Software as a Service (SaaS)

Answer: B

Explanation:
Identity as a Service (IDaaS) is the best contract for offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization's resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.

Question: 14

When implementing a data classification program, why is it important to avoid too much granularity?

A. The process will require too many resources
B. It will be difficult to apply to both hardware and software
C. It will be difficult to assign ownership to the data
D. The process will be perceived as having value

Answer: A

Explanation:
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements, and it helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should strike a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather potential challenges or benefits of data classification. Difficulty applying the scheme to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. Difficulty assigning ownership to the data is a challenge of data classification, as it requires identifying who is accountable for each set of data.
Question: 15

In a data classification scheme, the data is owned by the

A. system security managers
B. business managers
C. Information Technology (IT) managers
D. end users

Answer: B

Explanation:
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. They are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather other roles or functions related to data management. System security managers oversee the security of the information systems and networks that store, process, and transmit the data; they are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers manage the IT resources and services that support the business processes and functions that use the data; they are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing support for the users of those services.
www.certifiedumps.com
Explanation:
Questions & Answers PDF
Which security service is served by the process of encryption plaintext with the sender’s private key
and decrypting cipher text with the sender’s public key?
A. Confidentiality
B. Integrity
C. Identification
D. Availability
The security service that is served by the process of encrypting plaintext with the sender’s private
key and decrypting ciphertext with the sender’s public key is identification. Identification is the
process of verifying the identity of a person or entity that claims to be who or what it is.
Identification can be achieved by using public key cryptography and digital signatures, which are
based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext
with the sender’s public key. This process works as follows:
The sender has a pair of public and private keys, and the public key is shared with the receiver in
advance.
The sender encrypts the plaintext message with its private key, which produces a ciphertext that is
also a digital signature of the message.
The sender sends the ciphertext to the receiver, along with the plaintext message or a hash of the
message.
The receiver decrypts the ciphertext with the sender’s public key, which produces the same plaintext
message or hash of the message.
The receiver compares the decrypted message or hash with the original message or hash, and
verifies the identity of the sender if they match.
Page 17
Topic 3, Security Architecture and Engineering
Question: 16
Answer: C
www.certifiedumps.com
Explanation:
Questions & Answers PDF
Which of the following mobile code security models relies only on trust?
A. Code signing
B. Class authentication
C. Sandboxing
D. Type safety
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the
sender’s public key serves identification because it ensures that only the sender can produce a valid
ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s
identity by using the sender’s public key. This process also provides non-repudiation, which means
that the sender cannot deny sending the message or the receiver cannot deny receiving the
message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext
with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality
is the process of ensuring that the message is only readable by the intended parties, and it is
achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the
receiver’s private key. Integrity is the process of ensuring that the message is not modified or
corrupted during transmission, and it is achieved by using hash functions and message
authentication codes. Availability is the process of ensuring that the message is accessible and usable
by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
Question: 17
Answer: A
Which technique can be used to make an encryption scheme more resistant to a known plaintext
attack?
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of
software that can be transferred from one system to another and executed without installation or
compilation. Mobile code can be used for various purposes, such as web applications, applets,
scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code,
unauthorized access, data leakage, etc. Mobile code security models are the techniques that are
used to protect the systems and users from the threats of mobile code. Code signing is a mobile code
security model that relies only on trust, which means that the security of the mobile code depends
on the reputation and credibility of the code provider. Code signing works as follows:
The code provider has a pair of public and private keys, and obtains a digital certificate from a trusted
third party, such as a certificate authority (CA), that binds the public key to the identity of the code
provider.
The code provider signs the mobile code with its private key and attaches the digital certificate to the
mobile code.
The code consumer receives the mobile code and verifies the signature and the certificate with the
public key of the code provider and the CA, respectively.
The code consumer decides whether to trust and execute the mobile code based on the identity and
reputation of the code provider.
Code signing relies only on trust because it does not enforce any security restrictions or controls on
the mobile code, but rather leaves the decision to the code consumer. Code signing also does not
guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of
the code provider. Code signing can be effective if the code consumer knows and trusts the code
provider, and if the code provider follows the security standards and best practices. However, code
signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if
the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other
techniques that limit or isolate the mobile code. Class authentication is a mobile code security model
that verifies the permissions and capabilities of the mobile code based on its class or type, and
allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security
model that executes the mobile code in a separate and restricted environment, and prevents the
mobile code from accessing or affecting the system resources or data. Type safety is a mobile code
security model that checks the validity and consistency of the mobile code, and prevents the mobile
code from performing illegal or unsafe operations.
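The trust-only nature of code signing can be sketched as follows. This is a hypothetical illustration in which the certificate machinery is reduced to a registry of trusted publisher keys and an HMAC; real code signing uses X.509 certificates issued by a CA and asymmetric signatures:

```python
import hashlib, hmac

# Hypothetical registry standing in for CA-issued certificates: the consumer
# already trusts these publishers and knows their verification keys
TRUSTED_PUBLISHERS = {"Acme Corp": b"acme-signing-key"}

def sign_code(code: bytes, publisher: str, key: bytes) -> dict:
    """Publisher attaches its identity and a signature over the code."""
    return {"code": code, "publisher": publisher,
            "signature": hmac.new(key, code, hashlib.sha256).hexdigest()}

def run_if_trusted(package: dict) -> str:
    """Consumer's trust decision: no sandboxing, no restrictions afterwards."""
    key = TRUSTED_PUBLISHERS.get(package["publisher"])
    if key is None:
        return "rejected: unknown publisher"
    expected = hmac.new(key, package["code"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, package["signature"]):
        return "rejected: signature mismatch"
    # Authenticity and integrity are established -- but nothing constrains
    # what the code does once it runs; that is trust, not containment.
    return "executed"

package = sign_code(b"applet-bytes", "Acme Corp", b"acme-signing-key")
print(run_if_trusted(package))  # executed
```

Note that the check establishes who published the code and that it was not altered, but, unlike sandboxing or type safety, it imposes no controls on the code's behavior after execution begins.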
Question: 18
Explanation:
A. Hashing the data before encryption
B. Hashing the data after encryption
C. Compressing the data after encryption
D. Compressing the data before encryption
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
A. Implementation Phase
Compressing the data before encryption is a technique that can be used to make an encryption
scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis
where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key,
and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the
statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess
the key. Compressing the data before encryption can reduce the redundancy and increase the
entropy of the plaintext, making it harder for the attacker to find any correlations or similarities
between the plaintext and the ciphertext. Compressing the data before encryption can also reduce
the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-
ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant
to a known plaintext attack, but rather techniques that can introduce other security issues or
inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way
function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the
original data. Hashing the data after encryption is also not a useful technique, as hashing does not
add any security to the encryption, and the hash can be easily computed by anyone who has access
to the ciphertext. Compressing the data after encryption is not a recommended technique, as
compression algorithms usually work better on uncompressed data, and compressing the ciphertext
can introduce errors or vulnerabilities that can compromise the encryption.
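The compress-then-encrypt ordering can be sketched as follows. The XOR "cipher" here is a deliberately trivial stand-in so the example stays self-contained; it is not a real cipher:

```python
import zlib
from itertools import cycle

def toy_stream_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher -- illustrative only, not a real cipher."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"ATTACK AT DAWN " * 50            # highly redundant, pattern-rich plaintext

compressed = zlib.compress(plaintext)          # step 1: squeeze out the redundancy
ciphertext = toy_stream_encrypt(compressed, b"secretkey")  # step 2: encrypt

# The cipher input is smaller and statistically much flatter than the raw
# plaintext, so an attacker with known plaintext-ciphertext pairs has fewer
# patterns to exploit
print(len(plaintext) > len(compressed))  # True

# Decryption reverses both steps: decrypt, then decompress
recovered = zlib.decompress(toy_stream_encrypt(ciphertext, b"secretkey"))
assert recovered == plaintext
```

The key point is the ordering: compression must happen before encryption, since well-encrypted output looks random and no longer compresses.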
Question: 19
Answer: D
Explanation:
B. Initialization Phase
C. Cancellation Phase
D. Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the
initialization phase. PKI is a system that uses public key cryptography and digital certificates to
provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI
key/certificate life-cycle management is the process of managing the creation, distribution, usage,
storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-
cycle management consists of six phases: pre-certification, initialization, certification, operational,
suspension, and termination. The initialization phase is the second phase, where the key pair and the
certificate request are generated by the end entity or the registration authority (RA). The
initialization phase involves the following steps:
The end entity or the RA generates a key pair, consisting of a public key and a private key, using a
secure and random method.
The end entity or the RA creates a certificate request, which contains the public key and other
identity information of the end entity, such as the name, email, organization, etc.
The end entity or the RA submits the certificate request to the certification authority (CA), which is
the trusted entity that issues and signs the certificates in the PKI system.
The end entity or the RA securely stores the private key and protects it from unauthorized access,
loss, or compromise.
The other options are not the second phase of PKI key/certificate life-cycle management, but rather
other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management,
but rather a phase of PKI system deployment, where the PKI components and policies are installed
and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management,
but rather a possible outcome of the termination phase, where the key pair and the certificate are
permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle
management, but rather a possible outcome of the certification phase, where the CA verifies and
approves the certificate request and issues the certificate to the end entity or the RA.
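The initialization-phase steps above can be sketched as a data flow. The key material here is a placeholder (real systems run RSA or elliptic-curve key generation), and the field names are illustrative:

```python
import hashlib, secrets

def initialization_phase(subject: dict) -> dict:
    """Sketch of the PKI initialization phase for an end entity or RA."""
    # Step 1: generate a key pair with a secure random method
    # (placeholder values -- real systems use RSA or EC keygen)
    private_key = secrets.token_hex(32)
    public_key = "PUB-" + hashlib.sha256(private_key.encode()).hexdigest()
    # Step 2: build the certificate request: public key plus identity information
    certificate_request = {"public_key": public_key, "subject": subject}
    # Step 3: the request is submitted to the CA for the certification phase,
    # while the private key never leaves the end entity
    return {"submit_to_ca": certificate_request, "store_securely": private_key}

result = initialization_phase({"CN": "alice@example.com", "O": "Example Corp"})
print("public_key" in result["submit_to_ca"])  # True
```

Notice that only the public half and the identity information leave the entity; protecting the private key locally is what the last step of the phase is about.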
Question: 20
Answer: B
Explanation:
Which component of the Security Content Automation Protocol (SCAP) specification contains the
data required to estimate the severity of vulnerabilities identified by automated vulnerability
assessments?
A. Common Vulnerabilities and Exposures (CVE)
B. Common Vulnerability Scoring System (CVSS)
C. Asset Reporting Format (ARF)
D. Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the
data required to estimate the severity of vulnerabilities identified by automated vulnerability
assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides
a standardized and objective way to measure and communicate the characteristics and impacts of
vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base
metric group captures the intrinsic and fundamental properties of a vulnerability that are constant
over time and across user environments. The temporal metric group captures the characteristics of a
vulnerability that change over time, such as the availability and effectiveness of exploits, patches,
and workarounds. The environmental metric group captures the characteristics of a vulnerability that
are relevant and unique to a user’s environment, such as the configuration and importance of the
affected system. Each metric group has a set of metrics that are assigned values based on the
vulnerability’s attributes. The values are then combined using a formula to produce a numerical
score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can
also be translated into a qualitative rating that ranges from none to low, medium, high, and critical.
CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and
prioritize their remediation.
The other options are not components of the SCAP specification that contain the data required to
estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather
components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component
that provides a standardized and unique identifier and description for each publicly known
vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different
sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and
Answer: B
Explanation:
What is the purpose of an Internet Protocol (IP) spoofing attack?
A. To send excessive amounts of data to a process, making it unpredictable
B. To intercept network traffic without authorization
C. To disguise the destination address from a target’s IP filtering devices
D. To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is
communicating with a known entity. IP spoofing is a technique that involves creating and sending IP
packets with a forged source IP address, which is usually the IP address of a trusted or authorized
host. IP spoofing can be used for various malicious purposes, such as:
Bypassing IP-based access control lists (ACLs) or firewalls that filter traffic based on the source IP
address.
Launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks by flooding a target
system with spoofed packets, or by reflecting or amplifying the traffic from intermediate systems.
Hijacking or intercepting a TCP session by predicting or guessing the sequence numbers and sending
spoofed packets to the legitimate parties.
extensible format for expressing the information about the assets and their characteristics, such as
configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset
information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is
a component that provides a standardized and expressive language for defining and testing the state
of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL
enables the automation and interoperability of vulnerability assessment and management.
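The translation from a numerical CVSS score to a qualitative rating can be expressed directly. The bands below are the CVSS v3.x qualitative severity rating scale:

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity band."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(9.8))  # Critical
```

A vulnerability scanner that reports a base score of 9.8 would therefore be flagged Critical and prioritized for remediation ahead of, say, a 5.0 Medium finding.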
Question: 21
Answer: D
Topic 4, Communication and Network Security
Explanation:
A. Link layer
B. Physical layer
C. Session layer
D. Application layer
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area
Network (SAN) located?
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System
Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is
transmitted and processed across different layers of a network. The OSI model consists of seven
layers: application, presentation, session, transport, network, data link, and physical. The physical
layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of
raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines
the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc.
The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh,
etc.
Gaining unauthorized access to a system or network by impersonating a trusted or authorized host
and exploiting its privileges or credentials.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity,
because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or
methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a
possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network
traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or
intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a
valid option, as IP spoofing involves forging the source address, not the destination address.
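Why spoofing the source address convinces a naive filter can be shown in a few lines. The packet is modeled as a plain dictionary, and the addresses are documentation examples:

```python
def acl_allows(packet: dict, trusted_sources: set) -> bool:
    # A naive source-address filter: it trusts whatever the IP header claims
    return packet["src"] in trusted_sources

trusted = {"10.0.0.5"}  # an internal, authorized host

# The attacker forges the source field of the packet header
spoofed_packet = {"src": "10.0.0.5", "dst": "10.0.0.20", "payload": b"malicious"}

print(acl_allows(spoofed_packet, trusted))  # True -- the filter believes a known entity sent it
```

This is why defenses such as ingress filtering check whether a packet's claimed source address is plausible for the interface it arrived on, rather than trusting the header alone.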
Question: 22
Answer: B
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for
negotiating and establishing a connection with another node?
A. Transport layer
B. Application layer
C. Network layer
D. Session layer
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and
block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are
connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN
allows multiple servers or clients to share the same storage devices, and it provides high
performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at
the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the
storage devices, and it is accessed by the servers or clients through the physical medium of the
network infrastructure.
Explanation:
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is
responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a
simplified version of the OSI model, and it consists of four layers: application, transport, internet, and
link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing
reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer
uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to
segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection
and correction, flow control, and congestion control. The transport layer also provides connection-
oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between
two nodes before exchanging data, and it maintains the connection until the data transfer is
complete. TCP uses a three-way handshake to negotiate and establish a connection with another
node. The three-way handshake works as follows:
Question: 23
Answer: A
Explanation:
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
A. Layer 2 Tunneling Protocol (L2TP)
B. Link Control Protocol (LCP)
C. Challenge Handshake Authentication Protocol (CHAP)
D. Packet Transfer Protocol (PTP)
The client sends a SYN (synchronize) packet to the server, indicating its initial sequence number and
requesting a connection.
The server responds with a SYN-ACK (synchronize-acknowledge) packet, indicating its initial
sequence number and acknowledging the client’s request.
The client responds with an ACK (acknowledge) packet, acknowledging the server’s response and
completing the connection.
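The three-way handshake above can be simulated with just the sequence-number bookkeeping:

```python
import random

def three_way_handshake():
    """Simulate TCP connection establishment (sequence numbers only)."""
    client_isn = random.randrange(2**32)   # client's initial sequence number
    server_isn = random.randrange(2**32)   # server's initial sequence number
    syn = {"flags": "SYN", "seq": client_isn}
    syn_ack = {"flags": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}
    ack = {"flags": "ACK", "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
# Each side acknowledges the other's initial sequence number plus one
assert syn_ack["ack"] == syn["seq"] + 1
assert ack["ack"] == syn_ack["seq"] + 1
```

The randomness of the initial sequence numbers also matters for security: predictable ISNs are what make the TCP session-hijacking attacks mentioned earlier feasible.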
UDP is a connectionless protocol, which means that it does not establish or maintain a connection
between two nodes, but rather sends data packets independently and without any guarantee of
delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and
establish a connection with another node, but rather relies on the application layer to handle any
connection-related issues.
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats.
PPP is a data link layer protocol that provides a standard method for transporting network layer
packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports
various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a
common frame format. PPP also provides features such as authentication, compression, error
detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing,
configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees
on various options and parameters for the PPP link, such as the maximum transmission unit (MTU),
the authentication method, the compression method, the error detection method, and the packet
format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak,
configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request,
echo-reply, and discard-request.
Question: 24
Answer: B
Which of the following operates at the Network Layer of the Open System Interconnection (OSI)
model?
A. Packet filtering
B. Port services filtering
C. Content filtering
D. Application access control
Explanation:
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The
OSI model is a conceptual framework that describes how data is transmitted and processed across
different layers of a network. The OSI model consists of seven layers: application, presentation,
session, transport, network, data link, and physical. The network layer is the third layer from the
bottom of the OSI model, and it is responsible for routing and forwarding data packets between
different networks or subnets. The network layer uses logical addresses, such as IP addresses, to
identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or
ARP, to perform the routing and forwarding functions.
LCP uses these messages, such as the configure and echo exchanges, to communicate and exchange
information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes.
Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private
networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP
datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine
the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication
Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote
peer before allowing access to the network. CHAP uses a challenge-response mechanism that
involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not
determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP)
is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol
over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and
allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP,
but rather uses it as a payload.
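The LCP negotiation described above can be sketched as a single configure-request exchange. The option names and reply handling are simplified assumptions, not the actual LCP wire format:

```python
def lcp_negotiate(requested: dict, peer_supported: dict):
    """Simplified LCP option negotiation for one configure-request."""
    unknown = {opt for opt in requested if opt not in peer_supported}
    if unknown:
        return "configure-reject", unknown       # peer does not recognize these options
    nak = {opt: peer_supported[opt] for opt, val in requested.items()
           if peer_supported[opt] != val}
    if nak:
        return "configure-nak", nak              # peer proposes acceptable values instead
    return "configure-ack", requested            # all options accepted as offered

reply, detail = lcp_negotiate({"mru": 1500, "auth": "CHAP"},
                              {"mru": 1500, "auth": "CHAP"})
print(reply)  # configure-ack
```

Each side runs this exchange until both have sent and received a configure-ack, at which point the agreed options (MRU, authentication method, and so on) govern the link.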
Question: 25
Answer: A
Explanation:
An input validation and exception handling vulnerability has been discovered on a critical web-based
system. Which of the following is MOST suited to quickly implement a control?
A. Add a new rule to the application layer firewall
B. Block access to the service
C. Install an Intrusion Detection System (IDS)
D. Patch the application source code
Packet filtering is a technique that controls the access to a network or a host by inspecting the
incoming and outgoing data packets and applying a set of rules or policies to allow or deny them.
Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at
the network layer of the OSI model. Packet filtering typically examines the network layer header of
the data packets, such as the source and destination IP addresses, the protocol type, or the
fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can
also examine the transport layer header of the data packets, such as the source and destination port
numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies.
Packet filtering can provide a basic level of security and performance for a network or a host, but it
also has some limitations, such as the inability to inspect the payload or the content of the data
packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance
of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather
at other layers. Port services filtering is a technique that controls the access to a network or a host by
inspecting the transport layer header of the data packets and applying a set of rules or policies to
allow or deny them based on the port numbers or the services. Port services filtering operates at the
transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a
technique that controls the access to a network or a host by inspecting the application layer payload
or the content of the data packets and applying a set of rules or policies to allow or deny them based
on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer
of the OSI model, which is the seventh and the topmost layer. Application access control is a
technique that controls the access to a network or a host by inspecting the application layer identity
or the credentials of the users or the processes and applying a set of rules or policies to allow or deny
them based on the roles, permissions, or other attributes. Application access control operates at the
application layer of the OSI model, which is the seventh and the topmost layer.
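A packet filter's rule evaluation can be sketched as follows. The rule base is hypothetical; it shows a first-match-wins walk over network and transport layer header fields with a default-deny policy:

```python
import ipaddress

# Hypothetical rule base: first match wins, then default deny
RULES = [
    {"action": "deny",  "src": "0.0.0.0/0",       "dport": 23},   # block Telnet everywhere
    {"action": "allow", "src": "198.51.100.0/24", "dport": 443},  # HTTPS from one subnet
]

def filter_packet(packet: dict) -> str:
    """Decide a packet's fate from header fields only -- never the payload."""
    for rule in RULES:
        if (ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(rule["src"])
                and packet["dport"] == rule["dport"]):
            return rule["action"]
    return "deny"  # nothing matched: default deny

print(filter_packet({"src": "198.51.100.7", "dport": 443}))  # allow
```

The example also makes the limitation concrete: the decision never looks at the payload, which is exactly why content filtering and application access control must operate higher in the stack.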
Question: 26
Answer: A
Adding a new rule to the application layer firewall is the most suited to quickly implement a control
for an input validation and exception handling vulnerability on a critical web-based system. An input
validation and exception handling vulnerability is a type of vulnerability that occurs when a
web-based system does not properly check, filter, or sanitize the input data that is received from the
users or other sources, or does not properly handle the errors or exceptions that are generated by
the system. An input validation and exception handling vulnerability can lead to various attacks, such
as:
Injection attacks, such as SQL injection, command injection, or cross-site scripting (XSS), where the
attacker inserts malicious code or commands into the input data that are executed by the system or
the browser, resulting in data theft, data manipulation, or remote code execution.
Buffer overflow attacks, where the attacker sends more input data than the system can handle,
causing the system to overwrite the adjacent memory locations, resulting in data corruption, system
crash, or arbitrary code execution.
Denial-of-service (DoS) attacks, where the attacker sends malformed or invalid input data that cause
the system to generate excessive errors or exceptions, resulting in system overload, resource
exhaustion, or system failure.
An application layer firewall is a device or software that operates at the application layer of the OSI
model and inspects the application layer payload or the content of the data packets. An application
layer firewall can provide various functions, such as:
Filtering the data packets based on the application layer protocols, such as HTTP, FTP, or SMTP, and
the application layer attributes, such as URLs, cookies, or headers.
Blocking or allowing the data packets based on the predefined rules or policies that specify the
criteria for the application layer protocols and attributes.
Logging and auditing the data packets for the application layer protocols and attributes.
Modifying or transforming the data packets for the application layer protocols and attributes.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control
for an input validation and exception handling vulnerability on a critical web-based system, because
it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid
input data that exploit the vulnerability. For example, a new rule can be added to the application
layer firewall to:
Reject or drop the data packets that contain SQL statements, shell commands, or script tags in the
input data, which can prevent or reduce the injection attacks.
Reject or drop the data packets that exceed a certain size or length in the input data, which can
prevent or reduce the buffer overflow attacks.
Reject or drop the data packets that contain malformed or invalid syntax or characters in the input
A manufacturing organization wants to establish a Federated Identity Management (FIM) system
with its 20 different supplier companies. Which of the following is the BEST solution for the
manufacturing organization?
A. Trusted third-party certification
B. Lightweight Directory Access Protocol (LDAP)
C. Security Assertion Markup language (SAML)
D. Cross-certification
data, which can prevent or reduce the DoS attacks.
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring
any changes or patches to the web-based system, which can be time-consuming and risky, especially
for a critical system. Adding a new rule to the application layer firewall can also be done remotely
and centrally, without requiring any physical access or installation on the web-based system, which
can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and
exception handling vulnerability on a critical web-based system, but rather options that have other
limitations or drawbacks. Blocking access to the service is not the most suited option, because it can
cause disruption and unavailability of the service, which can affect the business operations and
customer satisfaction, especially for a critical system. Blocking access to the service can also be a
temporary and incomplete solution, as it does not address the root cause of the vulnerability or
prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the
most suited option, because IDS only monitors and detects the attacks, and does not prevent or
respond to them. IDS can also generate false positives or false negatives, which can affect the
accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks,
which can affect the effectiveness and efficiency of the detection. Patching the application source
code is not the most suited option, because it can take a long time and require a lot of resources and
testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the
application source code can also introduce new errors or vulnerabilities, which can affect the
functionality and security of the system. Patching the application source code can also be difficult or
impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of
the patch.
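A firewall rule of the kind described can be sketched as an input screen in front of the vulnerable application. The patterns and size limit are hypothetical examples; production rule sets are far more extensive:

```python
import re

# Hypothetical block patterns -- real WAF rule sets are much larger
BLOCK_PATTERNS = [
    re.compile(r"<script", re.IGNORECASE),                      # XSS attempt
    re.compile(r"('|--|;)\s*(or|drop|union)", re.IGNORECASE),   # SQL injection markers
]
MAX_INPUT_LEN = 1024  # reject oversized input (buffer overflow style abuse)

def waf_allows(field: str) -> bool:
    """Screen one input field before it ever reaches the vulnerable application."""
    if len(field) > MAX_INPUT_LEN:
        return False
    return not any(p.search(field) for p in BLOCK_PATTERNS)

print(waf_allows("alice@example.com"))  # True
print(waf_allows("' OR 1=1 --"))        # False -- classic SQL injection probe
```

Because the rule sits in front of the application, it can be deployed immediately and centrally, which is precisely the advantage over patching the source code of a critical system.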
Topic 5, Identity and Access Management (IAM)
Question: 27
Explanation:
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization
that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier
companies. FIM is a process that allows the sharing and recognition of identities across different
organizations that have a trust relationship. FIM enables the users of one organization to access the
resources or services of another organization without having to create or maintain multiple accounts
or credentials. FIM can provide several benefits, such as:
Improving the user experience and convenience by reducing the need for multiple logins and
passwords
Enhancing the security and privacy by minimizing the exposure and duplication of sensitive
information
Increasing the efficiency and productivity by streamlining the authentication and authorization
processes
Reducing the cost and complexity by simplifying the identity management and administration
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and
authorization information between different parties. SAML uses XML-based messages, called
assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML
defines three roles for the parties involved in FIM:
Identity provider (IdP): the party that authenticates the user and issues the SAML assertion
Service provider (SP): the party that provides the resource or service that the user wants to access
User or principal: the party that requests access to the resource or service
SAML works as follows:
The user requests access to a resource or service from the SP
The SP redirects the user to the IdP for authentication
The IdP authenticates the user and generates a SAML assertion that contains the user’s identity,
attributes, and entitlements
The IdP sends the SAML assertion to the SP
The SP validates the SAML assertion and grants or denies access to the user based on the
information in the assertion
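The IdP/SP exchange above can be sketched end to end. This is a hypothetical JSON stand-in with a hash-based check so the example stays self-contained; real SAML assertions are signed XML verified against the IdP's certificate:

```python
import base64, hashlib, json

IDP_KEY = "idp-signing-key"   # hypothetical; a real IdP signs with an XML-DSig private key

def idp_issue_assertion(subject: str, attributes: dict) -> str:
    """IdP authenticates the user and issues a 'signed' assertion (JSON stand-in)."""
    body = json.dumps({"issuer": "https://idp.example.com",
                       "subject": subject, "attributes": attributes})
    signature = hashlib.sha256((IDP_KEY + body).encode()).hexdigest()
    return base64.b64encode(json.dumps({"body": body, "sig": signature}).encode()).decode()

def sp_validate(token: str):
    """SP checks the signature before trusting the identity information."""
    wrapper = json.loads(base64.b64decode(token))
    expected = hashlib.sha256((IDP_KEY + wrapper["body"]).encode()).hexdigest()
    return json.loads(wrapper["body"]) if wrapper["sig"] == expected else None

token = idp_issue_assertion("alice", {"role": "supplier-buyer"})
print(sp_validate(token)["subject"])  # alice
```

For the manufacturing organization, each of the 20 suppliers would act as a service provider validating assertions from the manufacturer's identity provider, so users log in once rather than maintaining 20 sets of credentials.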
Answer: C
Explanation:
Derived credential is the best description of an access control method utilizing cryptographic keys
derived from a smart card private key that is embedded within mobile devices. A smart card is a
device that contains a microchip that stores a private key and a digital certificate that are used for
Which of the following BEST describes an access control method utilizing cryptographic keys derived
from a smart card private key that is embedded within mobile devices?
A. Derived credential
B. Temporary security credential
C. Mobile device credentialing service
D. Digest authentication
SAML is the best solution for the manufacturing organization that wants to establish a FIM system
with its 20 different supplier companies, because it can enable the seamless and secure access to the
resources or services across the different organizations, without requiring the users to create or
maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility
between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to
establish a FIM system with its 20 different supplier companies, but rather solutions that have other
limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such
as a certificate authority (CA), that issues and verifies digital certificates that contain the public key
and identity information of a user or an entity. Trusted third-party certification can provide
authentication and encryption for the communication between different parties, but it does not
provide authorization or entitlement information for the access to the resources or services.
Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of
directory services, such as Active Directory, that store the identity and attribute information of users
and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and
attribute information, but it does not provide a mechanism to exchange or federate the information
across different organizations. Cross-certification is a process that involves two or more CAs that
establish a trust relationship and recognize each other’s certificates. Cross-certification can extend
the trust and validity of the certificates across different domains or organizations, but it does not
provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
Question: 28
Answer: A
The user initiates a request to generate a derived credential on the mobile device
The computer or the terminal verifies the smart card certificate with a trusted CA, and generates a
derived credential that contains a cryptographic key and a certificate that are derived from the smart
card private key and certificate
The computer or the terminal transfers the derived credential to the mobile device, and stores it in a
secure element or a trusted platform module on the device
The user disconnects the mobile device from the computer or the terminal, and removes the smart
card from the reader
The user can use the derived credential on the mobile device to authenticate and encrypt the
communication with other parties, without requiring the smart card or the PIN
A derived credential can provide a secure and convenient way to use a mobile device as an
alternative to a smart card for authentication and encryption, as it implements a two-factor
authentication method that combines something the user has (the mobile device) and something
the user is (the biometric feature). A derived credential can also comply with the standards and
policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common
Access Card (CAC) programs.
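The derivation step can be sketched with a standard key-derivation function. Note the hedge: a real PIV derived credential (NIST SP 800-157) is a newly issued key pair and certificate bound to the device, not a hash of the card's private key; the secret value and device identifier below are invented purely to illustrate deterministic, device-bound derivation.

```python
import hashlib

# Stand-in for secret material unlocked from the smart card with the PIN.
card_secret = b"\x13\x37" * 16

# Salting with a device identifier binds the derived key to one mobile device.
device_id = b"mobile-device-001"

# PBKDF2-HMAC-SHA256: 100,000 iterations, 32-byte derived key (SHA-256 digest size).
derived_key = hashlib.pbkdf2_hmac("sha256", card_secret, device_id, 100_000)
```

Because the derivation is deterministic, the same card material and device identifier always yield the same key, while a different device identifier yields an unrelated key.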
The other options are not the best descriptions of an access control method utilizing cryptographic
keys derived from a smart card private key that is embedded within mobile devices, but rather
descriptions of other methods or concepts. Temporary security credential is a method that involves
issuing a short-lived credential, such as a token or a password, that can be used for a limited time or
a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant
access to the users or entities, but it does not involve deriving a cryptographic key from a smart card
authentication and encryption. A smart card is typically inserted into a reader that is attached to a
computer or a terminal, and the user enters a personal identification number (PIN) to unlock the
smart card and access the private key and the certificate. A smart card can provide a high level of
security and convenience for the user, as it implements a two-factor authentication method that
combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as
smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a
derived credential is a solution that allows the user to use a mobile device as an alternative to a
smart card for authentication and encryption. A derived credential is a cryptographic key and a
certificate that are derived from the smart card private key and certificate, and that are stored on the
mobile device. A derived credential works as follows:
The user inserts the smart card into a reader that is connected to a computer or a terminal, and
enters the PIN to unlock the smart card
The user connects the mobile device to the computer or the terminal via a cable, Bluetooth, or Wi-Fi
Users require access rights that allow them to view the average salary of groups of employees.
Which control would prevent the users from obtaining an individual employee’s salary?
A. Limit access to predefined queries
B. Segregate the database into a small number of partitions each with a separate security level
C. Implement Role Based Access Control (RBAC)
D. Reduce the number of people who have access to the system for statistical purposes
Explanation:
Limiting access to predefined queries is the control that would prevent the users from obtaining an
individual employee’s salary, if they only require access rights that allow them to view the average
salary of groups of employees. A query is a request for information from a database, which can be
expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can
specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and
aggregating the data from the database. A predefined query is a query that has been created and
stored in advance by the database administrator or the data owner, and that can be executed by the
authorized users without any modification. A predefined query can provide several benefits, such as:
Improving the performance and efficiency of the database by reducing the processing time and
resources required for executing the queries
Enhancing the security and confidentiality of the database by restricting the access and exposure of
the sensitive data to the authorized users and purposes
private key. Mobile device credentialing service is a concept that involves providing a service that can
issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords.
Mobile device credentialing service can provide a centralized and standardized way to control the
access of mobile devices, but it does not involve deriving a cryptographic key from a smart card
private key. Digest authentication is a method that involves using a hash function, such as MD5, to
generate a digest or a fingerprint of the user’s credentials, such as the username and password, and
sending it to the server for verification. Digest authentication can provide a more secure way to
authenticate the user than the basic authentication, which sends the credentials in plain text, but it
does not involve deriving a cryptographic key from a smart card private key.
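The digest authentication idea can be sketched in a few lines. This is a simplified version: real HTTP Digest authentication (RFC 7616) also folds the realm, HTTP method, and URI into the hash, and modern profiles prefer SHA-256 over MD5.

```python
import hashlib

def digest_response(username, password, nonce):
    """Hash the credentials together with a server-issued nonce so the
    cleartext password never crosses the wire (simplified Digest scheme)."""
    credentials = f"{username}:{password}:{nonce}"
    return hashlib.md5(credentials.encode()).hexdigest()

# The server issues a fresh nonce per challenge, so a captured response
# cannot simply be replayed against a later challenge.
response_1 = digest_response("alice", "s3cret", nonce="abc123")
response_2 = digest_response("alice", "s3cret", nonce="zzz999")
```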
Question: 29
Answer: A
What is the BEST approach for controlling access to highly sensitive information when employees
have the same level of security clearance?
A. Audit logs
B. Role-Based Access Control (RBAC)
Increasing the accuracy and reliability of the database by preventing the errors or inconsistencies
that might occur due to the user input or modification of the queries
Reducing the cost and complexity of the database by simplifying the query design and management
Limiting access to predefined queries is the control that would prevent the users from obtaining an
individual employee’s salary, if they only require access rights that allow them to view the average
salary of groups of employees, because it can ensure that the users can only access the data that is
relevant and necessary for their tasks, and that they cannot access or manipulate the data that is
beyond their scope or authority. For example, a predefined query can be created and stored that
calculates and displays the average salary of groups of employees based on certain criteria, such as
department, position, or experience. The users who need to view this information can execute this
predefined query, but they cannot modify it or create their own queries that might reveal the
individual employee’s salary or other sensitive data.
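A minimal sketch of this control, using Python's built-in sqlite3 module; the table, columns, and data are invented for illustration. Users are only ever handed the stored aggregate query, never the ability to write their own SELECT against the salary column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ann", "IT", 70000), ("Bob", "IT", 90000), ("Eve", "HR", 60000)],
)

# The one predefined query users may execute: group averages only,
# never individual rows, and the user cannot modify the SQL text.
PREDEFINED_QUERY = "SELECT department, AVG(salary) FROM employees GROUP BY department"

def run_predefined_query():
    return dict(conn.execute(PREDEFINED_QUERY).fetchall())

averages = run_predefined_query()
```

A production version would additionally restrict the database account so it can only execute this stored query (or a stored procedure/view), since the restriction is meaningless if the user can still submit arbitrary SQL.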
The other options are not the controls that would prevent the users from obtaining an individual
employee’s salary, if they only require access rights that allow them to view the average salary of
groups of employees, but rather controls that have other purposes or effects. Segregating the
database into a small number of partitions each with a separate security level is a control that would
improve the performance and security of the database by dividing it into smaller and manageable
segments that can be accessed and processed independently and concurrently. However, this control
would not prevent the users from obtaining an individual employee’s salary, if they have access to
the partition that contains the salary data, and if they can create or modify their own queries.
Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and
permissions of the users based on their roles or functions within the organization, rather than their
identities or attributes. However, this control would not prevent the users from obtaining an
individual employee’s salary, if their roles or functions require them to access the salary data, and if
they can create or modify their own queries. Reducing the number of people who have access to the
system for statistical purposes is a control that would reduce the risk and impact of unauthorized
access or disclosure of the sensitive data by minimizing the exposure and distribution of the data.
However, this control would not prevent the users from obtaining an individual employee’s salary, if
they are among the people who have access to the system, and if they can create or modify their
own queries.
Question: 30
C. Two-factor authentication
D. Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive
information when employees have the same level of security clearance. The principle of least
privilege is a security concept that states that every user or process should have the minimum
amount of access rights and permissions that are necessary to perform their tasks or functions, and
nothing more. The principle of least privilege can provide several benefits, such as:
Improving the security and confidentiality of the information by limiting the access and exposure of
the sensitive data to the authorized users and purposes
Reducing the risk and impact of unauthorized access or disclosure of the information by minimizing
the attack surface and the potential damage
Increasing the accountability and auditability of the information by tracking and logging the access
and usage of the sensitive data
Enhancing the performance and efficiency of the system by reducing the complexity and overhead of
the access control mechanisms
Applying the principle of least privilege is the best approach for controlling access to highly sensitive
information when employees have the same level of security clearance, because it can ensure that
the employees can only access the information that is relevant and necessary for their tasks or
functions, and that they cannot access or manipulate the information that is beyond their scope or
authority. For example, if the highly sensitive information is related to a specific project or
department, then only the employees who are involved in that project or department should have
access to that information, and not the employees who have the same level of security clearance but
are not involved in that project or department.
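The project-scoped restriction described above can be sketched as a need-to-know access list; the document names and user names are hypothetical.

```python
# Hypothetical need-to-know lists: clearance alone is not sufficient,
# the employee must also be on the document's project team.
access_list = {
    "project-apollo-specs": {"ann", "bob"},  # only the Apollo team
    "project-zeus-budget": {"eve"},          # only the Zeus team
}

def can_read(user, document):
    """Grant access only if the user appears on the document's need-to-know list."""
    return user in access_list.get(document, set())
```

Here "ann" and "eve" may hold the same clearance level, yet "eve" is still denied the Apollo document because least privilege scopes access to her own project.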
The other options are not the best approaches for controlling access to highly sensitive information
when employees have the same level of security clearance, but rather approaches that have other
purposes or effects. Audit logs are records that capture and store the information about the events
and activities that occur within a system or a network, such as the access and usage of the sensitive
data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and
analysis of the system or network behavior, and facilitating the investigation and response of the
incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive
information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is
a method that enforces the access rights and permissions of the users based on their roles or
Answer: D
Explanation:
Operating System (OS) baselines are of greatest assistance to auditors when reviewing
system configurations. OS baselines are standard or reference configurations that
define the desired and secure state of an OS, including the settings, parameters,
patches, and updates. OS baselines can provide several benefits, such as:
Improving the security and compliance of the OS by applying the best practices and
recommendations from the vendors, authorities, or frameworks
Enhancing the performance and efficiency of the OS by optimizing the resources and
functions
Increasing the consistency and uniformity of the OS by reducing the variations and
deviations
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
A. Change management processes
B. User administration procedures
C. Operating System (OS) baselines
D. System backup documentation
functions within the organization, rather than their identities or attributes. RBAC can provide a granular
and dynamic layer of security by defining and assigning the roles and permissions according to the
organizational structure and policies. However, RBAC cannot control the access to highly sensitive
information when employees have the same level of security clearance and the same role or function
within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a
technique that verifies the identity of the users by requiring them to provide two pieces of evidence or
factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card),
or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive
layer of security by preventing unauthorized access to the system or network by the users who do not have
both factors. However, two-factor authentication cannot control the access to highly sensitive information
when employees have the same level of security clearance and possess the same two factors, but rather relies on other
criteria or mechanisms.
Topic 6, Security Assessment and Testing
Question: 31
Answer: C
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong
isolation. What MUST an administrator review to audit a user’s access to data files?
A. Host VM monitor audit logs
B. Guest OS access controls
C. Host VM access controls
D. Guest OS audit logs
Facilitating the monitoring and auditing of the OS by providing a baseline for comparison and
measurement
OS baselines are of greatest assistance to auditors when reviewing system configurations, because
they can enable the auditors to evaluate and verify the current and actual state of the OS against the
desired and secure state of the OS. OS baselines can also help the auditors to identify and report any
gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or
preventive actions.
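The comparison an auditor performs can be sketched as a diff between the current configuration and the baseline; the setting names and values are invented for illustration.

```python
# Hypothetical OS baseline: the desired, secure values for a few settings.
baseline = {
    "password_min_length": 14,
    "ssh_root_login": "no",
    "auto_updates": "on",
}

def audit_config(current):
    """Return every setting whose current value deviates from the baseline,
    as {setting: (current_value, baseline_value)}."""
    return {
        key: (current.get(key), wanted)
        for key, wanted in baseline.items()
        if current.get(key) != wanted
    }

deviations = audit_config(
    {"password_min_length": 8, "ssh_root_login": "no", "auto_updates": "on"}
)
```

An empty result means the system matches its baseline; anything else is a finding to report, with the deviating value alongside the expected one.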
The other options are not of greatest assistance to auditors when reviewing system configurations,
but rather of assistance for other purposes or aspects. Change management processes are processes
that ensure that any changes to the system configurations are planned, approved, implemented, and
documented in a controlled and consistent manner. Change management processes can improve the
security and reliability of the system configurations by preventing or reducing the errors, conflicts, or
disruptions that might occur due to the changes. However, change management processes are not of
greatest assistance to auditors when reviewing system configurations, because they do not define
the desired and secure state of the system configurations, but rather the procedures and controls for
managing the changes. User administration procedures are procedures that define the roles,
responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and
access rights. User administration procedures can enhance the security and accountability of the user
accounts and access rights by enforcing the principles of least privilege, separation of duties, and
need to know. However, user administration procedures are not of greatest assistance to auditors
when reviewing system configurations, because they do not define the desired and secure state of
the system configurations, but rather the rules and tasks for administering the users. System backup
documentation is documentation that records the information and details about the system backup
processes, such as the backup frequency, type, location, retention, and recovery. System backup
documentation can increase the availability and resilience of the system by ensuring that the system
data and configurations can be restored in case of a loss or damage. However, system backup
documentation is not of greatest assistance to auditors when reviewing system configurations,
because it does not define the desired and secure state of the system configurations, but rather the
backup and recovery of the system configurations.
Question: 32
Explanation:
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a
VM environment that has five guest OS and provides strong isolation. A VM environment is a system
that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS
and applications. A VM environment can provide several benefits, such as:
Improving the utilization and efficiency of the physical resources by sharing them among multiple
VMs
Enhancing the security and isolation of the VMs by preventing or limiting the interference or
communication between them
Increasing the flexibility and scalability of the VMs by allowing them to be created, modified,
deleted, or migrated easily and quickly
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user's access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user's actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user's access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine; these logs capture hypervisor-level events rather than a user's file access inside a guest OS.
Answer: D
Which of the following is a PRIMARY benefit of using a formalized security testing report format and
structure?
A. Executive audiences will understand the outcomes of testing and most appropriate next steps for
corrective actions to be taken
B. Technical teams will understand the testing objectives, testing strategies applied, and business risk
associated with each vulnerability
C. Management teams will understand the testing objectives and reputational risk to the
organization
D. Technical and management teams will better understand the testing objectives, results of each
test phase, and potential impact levels
Explanation:
Technical and management teams will better understand the testing objectives, results of each test
phase, and potential impact levels is the primary benefit of using a formalized security testing report
format and structure. Security testing is a process that involves evaluating and verifying the security
posture, vulnerabilities, and threats of a system or a network, using various methods and techniques,
such as vulnerability assessment, penetration testing, code review, and compliance checks. Security
testing can provide several benefits, such as:
Improving the security and risk management of the system or network by identifying and addressing
the security weaknesses and gaps
Enhancing the security and decision making of the system or network by providing the evidence and
information for the security analysis, evaluation, and reporting
Increasing the security and improvement of the system or network by providing the feedback and
input for the security response, remediation, and optimization
Guest OS access controls are not what an administrator must review to audit a user's access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user's access to data files, but rather what an administrator must configure and implement to protect the VMs.
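The review itself can be sketched as filtering a guest OS audit log for one user's data-file events; the log line format here is invented for illustration, since real formats vary by OS.

```python
# Toy guest OS audit log lines (format invented for illustration).
guest_os_audit_log = [
    "2024-05-01T09:14 user=alice action=open  file=/data/payroll.xlsx",
    "2024-05-01T09:20 user=bob   action=open  file=/data/readme.txt",
    "2024-05-01T10:02 user=alice action=write file=/data/payroll.xlsx",
]

def accesses_by(user, log):
    """Pull one user's data-file events out of the guest OS audit log."""
    return [line for line in log if f"user={user}" in line]

alice_events = accesses_by("alice", guest_os_audit_log)
```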
Question: 33
Answer: D
A security testing report is a document that summarizes and communicates the findings and
recommendations of the security testing process to the relevant stakeholders, such as the technical
and management teams. A security testing report can have various formats and structures,
depending on the scope, purpose, and audience of the report. However, a formalized security testing
report format and structure is one that follows a standard and consistent template, such as the one
proposed by the National Institute of Standards and Technology (NIST) in the Special Publication 800-
115, Technical Guide to Information Security Testing and Assessment. A formalized security testing
report format and structure can have several components, such as:
Executive summary: a brief overview of the security testing objectives, scope, methodology, results,
and conclusions
Introduction: a detailed description of the security testing background, purpose, scope, assumptions,
limitations, and constraints
Methodology: a detailed explanation of the security testing approach, techniques, tools, and
procedures
Results: a detailed presentation of the security testing findings, such as the vulnerabilities, threats,
risks, and impact levels, organized by test phases or categories
Recommendations: a detailed proposal of the security testing suggestions, such as the remediation,
mitigation, or prevention strategies, prioritized by impact levels or risk ratings
Conclusion: a brief summary of the security testing outcomes, implications, and future steps
Technical and management teams will better understand the testing objectives, results of each test
phase, and potential impact levels is the primary benefit of using a formalized security testing report
format and structure, because it can ensure that the security testing report is clear, comprehensive,
and consistent, and that it provides the relevant and useful information for the technical and
management teams to make informed and effective decisions and actions regarding the system or
network security.
The other options are not the primary benefits of using a formalized security testing report format
and structure, but rather secondary or specific benefits for different audiences or purposes.
Executive audiences will understand the outcomes of testing and most appropriate next steps for
corrective actions to be taken is a benefit of using a formalized security testing report format and
structure, but it is not the primary benefit, because it is more relevant for the executive summary
component of the report, which is a brief and high-level overview of the report, rather than the
entire report. Technical teams will understand the testing objectives, testing strategies applied, and
business risk associated with each vulnerability is a benefit of using a formalized security testing
report format and structure, but it is not the primary benefit, because it is more relevant for the
methodology and results components of the report, which are more technical and detailed parts of
the report, rather than the entire report. Management teams will understand the testing objectives
and reputational risk to the organization is a benefit of using a formalized security testing report
format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.

Which of the following could cause a Denial of Service (DoS) against an authentication system?
A. Encryption of audit logs
B. No archiving of audit logs
C. Hashing of audit logs
D. Remote access audit logs
Explanation:
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a flood of remote access attempts that generates excessive audit log entries, exhausting the storage and processing capacity of the authentication system.
Question: 34
Answer: D
An organization is found lacking the ability to properly establish performance indicators for its Web
hosting solution during an audit. What would be the MOST probable cause?
A. Absence of a Business Intelligence (BI) solution
B. Inadequate cost modeling
C. Improper deployment of the Service-Oriented Architecture (SOA)
D. Insufficient Service Level Agreement (SLA)
Explanation:
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to
lack the ability to properly establish performance indicators for its Web hosting solution during an
audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for
The other options could not cause a DoS against an authentication system, but are rather factors that could improve or protect the authentication system. Encryption of audit logs is
a technique that involves using a cryptographic algorithm and a key to transform the audit logs into
an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties.
Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing
unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption
of audit logs could not cause a DoS against an authentication system, because it does not affect the
availability or performance of the authentication system, but rather the integrity or privacy of the
audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit
logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving
of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or
damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving
of audit logs could not cause a DoS against an authentication system, because it does not affect the
availability or performance of the authentication system, but rather the availability or preservation of
the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5
or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the
audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying
the authenticity or consistency of the audit logs, and detecting any modification or tampering of the
audit logs. However, hashing of audit logs could not cause a DoS against an authentication system,
because it does not affect the availability or performance of the authentication system, but rather the
integrity or verification of the audit logs.
Topic 7, Security Operations
Question: 35
Answer: D
www.certifiedumps.com
Questions & Answers PDF Page 44
hosting and maintaining a website or a web application on the internet. A Web hosting solution can
offer various benefits, such as:
Improving the availability and accessibility of the website or web application by ensuring that it is
online and reachable at all times
Enhancing the performance and scalability of the website or web application by optimizing the
speed, load, and capacity of the web server
Increasing the security and reliability of the website or web application by providing the backup,
recovery, and protection of the web data and content
Reducing the cost and complexity of the website or web application by outsourcing the web hosting
and management to a third-party provider
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations,
responsibilities, and obligations of the parties involved in a service, such as the service provider and
the service consumer. An SLA can include various components, such as:
Service description: a detailed explanation of the scope, purpose, and features of the service
Service level objectives: a set of measurable and quantifiable goals or targets for the service quality,
performance, and availability
Service level indicators: a set of metrics or parameters that are used to monitor and evaluate the
service level objectives
Service level reporting: a process that involves collecting, analyzing, and communicating the service
level indicators and objectives
Service level penalties: a set of consequences or actions that are applied when the service level
objectives are not met or violated
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly
establish performance indicators for its Web hosting solution during an audit, because it could mean
that the SLA does not include or specify the appropriate service level indicators or objectives for the
Web hosting solution, or that the SLA does not provide or enforce the adequate service level
reporting or penalties for the Web hosting solution. This could affect the ability of the organization to
measure and assess the Web hosting solution quality, performance, and availability, and to identify
and address any issues or risks in the Web hosting solution.
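The link between service level indicators and objectives described above can be sketched in a few lines of code; the 99.9% availability target, the sample outage figures, and the function name below are illustrative assumptions, not values from any particular SLA:

```python
# Minimal sketch of checking a service level indicator (SLI) against a
# service level objective (SLO) for a hypothetical web hosting SLA.

def availability_sli(total_minutes: int, downtime_minutes: int) -> float:
    """Return availability as a percentage over the measurement window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# One 30-day month: 30 * 24 * 60 = 43200 minutes, with 50 minutes of outage.
sli = availability_sli(43200, 50)
slo_target = 99.9  # percent, as it might be written into the SLA

print(f"Measured availability: {sli:.3f}%")
print("SLO met" if sli >= slo_target else "SLO violated - penalties may apply")
```

Without such an indicator and target written into the SLA, an auditor has nothing concrete to measure the hosting service against, which is exactly the gap the explanation describes.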
The other options are not the most probable causes for an organization to lack the ability to properly
establish performance indicators for its Web hosting solution during an audit, but rather the factors
that could affect or improve the Web hosting solution in other ways. Absence of a Business
Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and
utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or
conversion. A BI solution is a system that involves the collection, integration, processing, and
presentation of the data and information from various sources, such as the Web hosting solution, to
support the decision making and planning of the organization. However, absence of a BI solution is
not the most probable cause for an organization to lack the ability to properly establish performance
indicators for its Web hosting solution during an audit, because it does not affect the definition or
specification of the performance indicators for the Web hosting solution, but rather the analysis or
usage of the performance indicators for the Web hosting solution. Inadequate cost modeling is a
factor that could affect the ability of the organization to estimate and optimize the cost and value of
the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment.
A cost model is a tool or a method that helps the organization to calculate and compare the cost and
value of the Web hosting solution, and to identify and implement the best or most efficient Web
hosting solution. However, inadequate cost modeling is not the most probable cause for an
organization to lack the ability to properly establish performance indicators for its Web hosting
solution during an audit, because it does not affect the definition or specification of the performance
indicators for the Web hosting solution, but rather the estimation or optimization of the cost and
value of the Web hosting solution. Improper deployment of the Service-Oriented Architecture (SOA)
is a factor that could affect the ability of the organization to design and develop the Web hosting
solution, such as the web services, components, or interfaces. A SOA is a software architecture that
involves the modularization, standardization, and integration of the software components or services
that provide the functionality or logic of the Web hosting solution. A SOA can offer various benefits,
such as:
Improving the flexibility and scalability of the Web hosting solution by allowing the addition,
modification, or removal of the software components or services without affecting the whole Web
hosting solution
Enhancing the interoperability and compatibility of the Web hosting solution by enabling the
communication and interaction of the software components or services across different platforms
and technologies
Increasing the reusability and maintainability of the Web hosting solution by reducing the duplication
and complexity of the software components or services
However, improper deployment of the SOA is not the most probable cause for an organization to lack
the ability to properly establish performance indicators for its Web hosting solution during an audit,
because it does not affect the definition or specification of the performance indicators for the Web
hosting solution, but rather the design or development of the Web hosting solution.

Question: 36

Which of the following types of business continuity tests includes assessment of resilience to
internal and external risks without endangering live operations?
A. Walkthrough
B. Simulation
C. Parallel
D. White box
Explanation:
Simulation is the type of business continuity test that includes assessment of resilience to internal
and external risks without endangering live operations. Business continuity is the ability of an
organization to maintain or resume its critical functions and operations in the event of a disruption or
disaster. Business continuity testing is the process of evaluating and validating the effectiveness and
readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various
methods and scenarios. Business continuity testing can provide several benefits, such as:
Improving the confidence and competence of the organization and its staff in handling a disruption
or disaster
Enhancing the performance and efficiency of the organization and its systems in recovering from a
disruption or disaster
Increasing the compliance and alignment of the organization and its plans with the internal or
external requirements and standards
Facilitating the monitoring and improvement of the organization and its plans by identifying and
addressing any gaps, issues, or risks
There are different types of business continuity tests, depending on the scope, purpose, and
complexity of the test. Some of the common types are:
Walkthrough: a type of business continuity test that involves reviewing and discussing the BCP and
DRP with the relevant stakeholders, such as the business continuity team, the management, and the
staff. A walkthrough can provide a basic and qualitative assessment of the BCP and DRP, and can help
to familiarize and educate the stakeholders with the plans and their roles and responsibilities.
Simulation: a type of business continuity test that involves performing and practicing the BCP and
DRP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a
power outage, or a cyberattack. A simulation can provide a realistic and quantitative assessment of
the BCP and DRP, and can help to test and train the stakeholders with the plans and their actions and
reactions.
Parallel: a type of business continuity test that involves activating and operating the alternate site or
system, while maintaining the normal operations at the primary site or system. A parallel test can
provide a comprehensive and comparative assessment of the BCP and DRP, and can help to verify
and validate the functionality and compatibility of the alternate site or system.
Full interruption: a type of business continuity test that involves shutting down and transferring the
normal operations from the primary site or system to the alternate site or system. A full interruption
test can provide a conclusive and definitive assessment of the BCP and DRP, and can help to evaluate
and measure the impact and effectiveness of the plans.
Simulation is the type of business continuity test that includes assessment of resilience to internal
and external risks without endangering live operations, because it can simulate various types of
risks, such as natural, human, or technical, and assess how the organization and its systems can cope
and recover from them, without actually causing any harm or disruption to the live operations.
Simulation can also help to identify and mitigate any potential risks that might affect the live
operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not the types of business continuity tests that include assessment of resilience
to internal and external risks without endangering live operations, but rather types that have other
objectives or effects. Walkthrough is a type of business continuity test that does not include
assessment of resilience to internal and external risks, but rather a review and discussion of the BCP
and DRP, without any actual testing or practice. Parallel is a type of business continuity test that does
not endanger live operations, but rather maintains them, while activating and operating the
alternate site or system. Full interruption is a type of business continuity test that does endanger live
operations, by shutting them down and transferring them to the alternate site or system.

Answer: B

Question: 37

What is the PRIMARY reason for implementing change management?

A. Certify and approve releases to the environment
B. Provide version rollbacks for system changes
C. Ensure that all applications are approved
D. Ensure accountability for changes to the environment

Answer: D

Explanation:
Ensuring accountability for changes to the environment is the primary reason for implementing
change management. Change management is a process that ensures that any changes to the system
or network environment, such as the hardware, software, configuration, or documentation, are
planned, approved, implemented, and documented in a controlled and consistent manner. Change
management can provide several benefits, such as:
Improving the security and reliability of the system or network environment by preventing or
reducing the errors, conflicts, or disruptions that might occur due to the changes
Enhancing the performance and efficiency of the system or network environment by optimizing the
resources and functions
Increasing the compliance and alignment of the system or network environment with the internal or
external requirements and standards
Facilitating the monitoring and improvement of the system or network environment by tracking and
logging the changes and their outcomes
Ensuring accountability for changes to the environment is the primary reason for implementing
change management, because it can ensure that the changes are authorized, justified, and traceable,
and that the parties involved in the changes are responsible and accountable for their actions and
results. Accountability can also help to deter or detect any unauthorized or malicious changes that
might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather
secondary or specific reasons for different aspects or phases of change management. Certifying and
approving releases to the environment is a reason for implementing change management, but it is
more relevant for the approval phase of change management, which is the phase that involves
reviewing and validating the changes and their impacts, and granting or denying the permission to
proceed with the changes. Providing version rollbacks for system changes is a reason for
implementing change management, but it is more relevant for the implementation phase of change
management, which is the phase that involves executing and monitoring the changes and their
effects, and providing the backup and recovery options for the changes. Ensuring that all applications
are approved is a reason for implementing change management, but it is more relevant for the
application changes, which are the changes that affect the software components or services that
provide the functionality or logic of the system or network environment.

Question: 38

Which of the following is a PRIMARY advantage of using a third-party identity service?

A. Consolidation of multiple providers
B. Directory synchronization
C. Web based logon
D. Automated account management

Answer: A

Explanation:
Consolidation of multiple providers is the primary advantage of using a third-party identity service.
A third-party identity service is a service that provides identity and access management (IAM)
functions, such as authentication, authorization, and federation, for multiple applications or
systems, using a single identity provider (IdP). A third-party identity service can offer various
benefits, such as:
Improving the user experience and convenience by allowing the users to access multiple
applications or systems with a single sign-on (SSO) or a federated identity
Enhancing the security and compliance by applying the consistent and standardized IAM policies
and controls across multiple applications or systems
Increasing the scalability and flexibility by enabling the integration and interoperability of multiple
applications or systems with different platforms and technologies
Reducing the cost and complexity by outsourcing the IAM functions to a third-party provider, and
avoiding the duplication and maintenance of multiple IAM systems
Consolidation of multiple providers is the primary advantage of using a third-party identity service,
because it can simplify and streamline the IAM architecture and processes, by reducing the number
of IdPs and IAM systems that are involved in managing the identities and access for multiple
applications or systems. Consolidation of multiple providers can also help to avoid the issues or
risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency,
redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or
disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather
secondary or specific advantages for different aspects or scenarios of using a third-party identity
service. Directory synchronization is an advantage of using a third-party identity service, but it is
more relevant for the scenario where the organization has an existing directory service, such as
LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to
synchronize them with the third-party identity service.
Question: 39

With what frequency should monitoring of a control occur when implementing Information Security
Continuous Monitoring (ISCM) solutions?

A. Continuously without exception for all security controls
B. Before and after each change of the control
C. At a rate concurrent with the volatility of the security control
D. Only during system implementation and decommissioning

Answer: C

Explanation:
Monitoring of a control should occur at a rate concurrent with the volatility of the security control
when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process
that involves maintaining the ongoing awareness of the security status, events, and activities of a
system or network, by collecting, analyzing, and reporting the security data and information, using
various methods and tools. ISCM can provide several benefits, such as:
Improving the security and risk management of the system or network by identifying and addressing
the security weaknesses and gaps
Enhancing the security and decision making of the system or network by providing the evidence and
information for the security analysis, evaluation, and reporting
Increasing the security and improvement of the system or network by providing the feedback and
input for the security response, remediation, and optimization
Facilitating the compliance and alignment of the system or network with the internal or external
requirements and standards
A security control is a measure or mechanism that is implemented to protect the system or network
from the security threats or risks, by preventing, detecting, or correcting the security incidents or
impacts. A security control can have various types, such as administrative, technical, or physical, and
various attributes, such as preventive, detective, or corrective. A security control can also have
different levels of volatility, which is the degree or frequency of change or variation of the security
control, due to various factors, such as the security requirements, the threat landscape, or the
system or network environment. Monitoring of a control should occur at a rate concurrent with the
volatility of the security control, so that the monitoring keeps pace with changes to the control and
can promptly detect and report any issues or risks that might affect the security control. Monitoring
of a control at a rate concurrent with the volatility of the security control can also help to optimize
the ISCM resources and efforts, by allocating them according to the priority and urgency of the
security control.
The other options are not the correct frequencies for monitoring of a control when implementing
ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or
inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an
incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not
feasible or necessary to monitor all security controls at the same and constant rate, regardless of
their volatility or importance. Continuously monitoring all security controls without exception might
cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might
overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before
and after each change of the control is an incorrect frequency for monitoring of a control when
implementing ISCM solutions, because it is not sufficient or timely to monitor the security control
only when there is a change of the security control, and not during the normal operation of the
security control. Monitoring the security control only before and after each change might cause the
ISCM solutions to miss or ignore the security status, events, and activities that occur between the
changes of the security control, and might delay or hinder the ISCM solutions from detecting and
responding to the security issues or incidents that affect the security control. Only during system
implementation and decommissioning is an incorrect frequency for monitoring of a control when
implementing ISCM solutions, because it is not appropriate or effective to monitor the security
control only during the initial or final stages of the system or network lifecycle, and not during the
operational or maintenance stages of the system or network lifecycle. Monitoring the security
control only during system implementation and decommissioning might cause the ISCM solutions to
neglect or overlook the security status, events, and activities that occur during the regular or ongoing
operation of the system or network, and might prevent or limit the ISCM solutions from improving
and optimizing the security control.

Question: 40

What should be the FIRST action to protect the chain of evidence when a desktop computer is
involved?

A. Take the computer to a forensic lab
B. Make a copy of the hard drive
C. Start documenting
D. Turn off the computer

Answer: B

Explanation:
Making a copy of the hard drive should be the first action to protect the chain of evidence when a
desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that
documents and preserves the integrity and authenticity of the evidence collected from a crime
scene, such as a desktop computer. A chain of evidence should include information such as:
The identity and role of the person who collected, handled, or transferred the evidence
The date and time of the collection, handling, or transfer of the evidence
The location and condition of the evidence
The method and tool used to collect, handle, or transfer the evidence
The signature or seal of the person who collected, handled, or transferred the evidence
Making a copy of the hard drive should be the first action to protect the chain of evidence when a
desktop computer is involved, because it can ensure that the original hard drive is not altered,
damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and
admissible source of evidence. Making a copy of the hard drive should also involve using a write
blocker, which is a device or a software that prevents any modification or deletion of the data on the
hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the
integrity and consistency of the data on the hard drive.
The other options are not the first actions to protect the chain of evidence when a desktop computer
is involved, but rather actions that should be done after or along with making a copy of the hard
drive. Taking the computer to a forensic lab is an action that should be done after making a copy of
the hard drive, because it can ensure that the computer is transported and stored in a secure and
controlled environment, and that the forensic analysis is conducted by qualified and authorized
personnel. Starting documenting is an action that should be done along with making a copy of the
hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout
the forensic process, and that the evidence can be traced and verified. Turning off the computer is an
action that should be done after making a copy of the hard drive, because it can ensure that the
computer is powered down and disconnected from any network or device, and that the computer is
protected from any further damage or tampering.

Question: 41

What is the MOST important step during forensic analysis when trying to learn the purpose of an
unknown application?

A. Disable all unnecessary services
B. Ensure chain of custody
C. Prepare another backup of the system
D. Isolate the system from the network
Explanation:
Isolating the system from the network is the most important step during forensic analysis when
trying to learn the purpose of an unknown application. An unknown application is an application that
is not recognized or authorized by the system or network administrator, and that may have been
installed or executed without the user’s knowledge or consent. An unknown application may have
various purposes, such as:
Providing a legitimate or useful function or service for the user, such as a utility or a tool
Providing an illegitimate or malicious function or service for the attacker, such as a malware or a
backdoor
Providing a neutral or benign function or service for the developer, such as a trial or a demo
Forensic analysis is a process that involves examining and investigating the system or network for any
evidence or traces of the unknown application, such as its origin, nature, behavior, and impact.
Forensic analysis can provide several benefits, such as:
Identifying and classifying the unknown application as legitimate, malicious, or neutral
Determining and assessing the purpose and function of the unknown application
Detecting and resolving any issues or risks caused by the unknown application
Preventing and mitigating any future incidents or attacks involving the unknown application
Isolating the system from the network is the most important step during forensic analysis when
trying to learn the purpose of an unknown application, because it can ensure that the system is
isolated and protected from any external or internal influences or interferences, and that the forensic
analysis is conducted in a safe and controlled environment. Isolating the system from the network
can also help to:
Prevent the unknown application from communicating or connecting with any other system or
network, and potentially spreading or escalating the attack
Prevent the unknown application from receiving or sending any commands or data, and potentially
altering or deleting the evidence
Prevent the unknown application from detecting or evading the forensic analysis, and potentially
hiding or destroying itself
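The hash-verification step that supports the chain of evidence during this kind of forensic work can be sketched briefly; the image file name is hypothetical (a stand-in for a drive image acquired through a write blocker), and SHA-256 is used here in place of the older MD5/SHA examples mentioned above:

```python
# Sketch of verifying a forensic disk image by hash, assuming the image was
# already acquired through a write blocker. The "image" here is a small
# temporary file standing in for a real evidence file.
import hashlib
import os
import tempfile

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large disk images do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the acquired evidence image.
with tempfile.NamedTemporaryFile(delete=False, suffix=".img") as f:
    f.write(b"raw sectors of the seized drive")
    image_path = f.name

# Record the hash at acquisition time, then recompute it before analysis:
# any difference proves the copy was altered after it was collected.
acquisition_hash = sha256_of_file(image_path)
verification_hash = sha256_of_file(image_path)
assert acquisition_hash == verification_hash
os.unlink(image_path)
```

Recording the acquisition-time hash in the custody documentation, and recomputing it at each transfer, is what lets later analysts demonstrate the evidence was not modified or tampered with.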
The other options are not the most important steps during forensic analysis when trying to learn the
purpose of an unknown application, but rather steps that should be done after or along with
isolating the system from the network. Disabling all unnecessary services is a step that should be
done after isolating the system from the network, because it can ensure that the system is optimized
and simplified for the forensic analysis, and that the system resources and functions are not
consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step
that should be done along with isolating the system from the network, because it can ensure that the
integrity and authenticity of the evidence are maintained and documented throughout the forensic
process, and that the evidence can be traced and verified. Preparing another backup of the system is
a step that should be done after isolating the system from the network, because it can ensure that
the system data and configuration are preserved and replicated for the forensic analysis, and that the
system can be restored and recovered in case of any damage or loss.

Answer: D

Question: 42

A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?

A. Guaranteed recovery of all business functions
B. Minimization of the need for decision making during a crisis
C. Insurance against litigation following a disaster
D. Protection from loss of organization resources

Answer: B

Explanation:
Minimization of the need for decision making during a crisis is the main benefit that a Business
Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies,
procedures, and resources that enable an organization to continue or resume its critical functions
and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such
as:
Improving the resilience and preparedness of the organization and its staff in handling a disruption or
disaster
Enhancing the performance and efficiency of the organization and its systems in recovering from a
disruption or disaster
Increasing the compliance and alignment of the organization and its plans with the internal or
external requirements and standards
Facilitating the monitoring and improvement of the organization and its plans by identifying and
addressing any gaps, issues, or risks
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will
provide, because it can ensure that the organization and its staff have clear and consistent guidance
and direction on how to respond and act during a disruption or disaster, and avoid any confusion,
uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to
reduce the stress and pressure on the organization and its staff during a crisis, and increase their
confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect
expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a
benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business
functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged.
A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may
have to suspend or terminate the less critical or non-essential business functions. Insurance against
litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a
guarantee or protection that the organization will not face any legal or regulatory consequences or
liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the
organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or
regulatory risks, and may have to comply with or report to the relevant authorities or parties.
Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it
is not a prevention or avoidance of any damage or destruction of the organization’s assets or
resources during a disruption or disaster, especially if the disruption or disaster is physical or natural.
A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have
to incur some costs or losses.

Question: 43

When is a Business Continuity Plan (BCP) considered to be valid?

A. When it has been validated by the Business Continuity (BC) manager
B. When it has been validated by the board of directors
C. When it has been validated by all threat scenarios
D. When it has been validated by realistic exercises
Explanation:
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic
exercises. A BCP is a part of a BCP/DRP that focuses on ensuring the continuous operation of the
organization’s critical business functions and processes during and after a disruption or disaster. A
BCP should include various components, such as:
Business impact analysis: a process that identifies and prioritizes the critical business functions and
processes, and assesses the potential impacts and risks of a disruption or disaster on them
Recovery strategies: a process that defines and selects the appropriate methods and resources to
recover the critical business functions and processes, such as alternate sites, backup systems, or
recovery teams
BCP document: a document that outlines and details the scope, purpose, and features of the BCP,
such as the roles and responsibilities, the recovery procedures, and the contact information
Testing, training, and exercises: a process that evaluates and validates the effectiveness and
readiness of the BCP, and educates and trains the relevant stakeholders, such as the staff, the
management, and the customers, on the BCP and their roles and responsibilities
Maintenance and review: a process that monitors and updates the BCP, and addresses any changes
or issues that might affect the BCP, such as the business requirements, the threat landscape, or the
feedback and lessons learned
A BCP is considered to be valid when it has been validated by realistic exercises, because it can
ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and
objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercises that
involve performing and practicing the BCP with the relevant stakeholders, using simulated or
hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can
provide several benefits, such as:
Improving the confidence and competence of the organization and its staff in handling a disruption
or disaster
Enhancing the performance and efficiency of the organization and its systems in recovering from a
disruption or disaster
Increasing the compliance and alignment of the organization and its plans with the internal or
external requirements and standards
Facilitating the monitoring and improvement of the organization and its plans by identifying and
addressing any gaps, issues, or risks
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties
that are involved in developing or approving a BCP. When it has been validated by the Business
Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is
involved in developing a BCP. The BC manager is the person who is responsible for overseeing and
coordinating the BCP activities and processes, such as the business impact analysis, the recovery
strategies, the BCP document, the testing, training, and exercises, and the maintenance and review.
The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes,
and ensuring that they meet the BCP standards and objectives. However, the validation by the BC
manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in
a realistic scenario.

When it has been validated by the board of directors is not a criterion for
considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of
directors is the group of people who are elected by the shareholders to represent their interests and
to oversee the strategic direction and governance of the organization. The board of directors can
approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the
necessary resources and funds for the BCP. However, the approval by the board of directors is not
enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic
scenario.

When it has been validated by all threat scenarios is not a criterion for considering a BCP to
be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a
description or a simulation of a possible or potential disruption or disaster that might affect the
organization’s critical business functions and processes, such as a natural hazard, a human error, or a
technical failure. A threat scenario can be used to test and validate the BCP by measuring and
evaluating the BCP’s performance and effectiveness in responding and recovering from the
disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat
scenarios, as there are too many or unknown threat scenarios that might occur, and some threat
scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated
by the most likely or relevant threat scenarios, and not by all threat scenarios.

Answer: D

Question: 44

Recovery strategies of a Disaster Recovery planning (DRP) MUST be aligned with which of the
following?

A. Hardware and software compatibility issues
B. Applications’ criticality and downtime tolerance
C. Budget constraints and requirements
D. Cost/benefit analysis and business objectives
Explanation:
Recovery strategies of a Disaster Recovery planning (DRP) must be aligned with the cost/benefit
analysis and business objectives. A DRP is a part of a BCP/DRP that focuses on restoring the normal
operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP
should include various components, such as:
Risk assessment: a process that identifies and evaluates the potential threats and vulnerabilities that
might affect the IT systems and infrastructure, and estimates the likelihood and impact of a
disruption or disaster
Recovery objectives: a process that defines and quantifies the acceptable levels of recovery for the IT
systems and infrastructure, such as the recovery point objective (RPO), which is the maximum
amount of data loss that can be tolerated, and the recovery time objective (RTO), which is the
maximum amount of downtime that can be tolerated
Recovery strategies: a process that selects and implements the appropriate methods and resources
to recover the IT systems and infrastructure, such as backup, replication, redundancy, or failover
DRP document: a document that outlines and details the scope, purpose, and features of the DRP,
such as the roles and responsibilities, the recovery procedures, and the contact information
Testing, training, and exercises: a process that evaluates and validates the effectiveness and
readiness of the DRP, and educates and trains the relevant stakeholders, such as the IT staff, the
management, and the users, on the DRP and their roles and responsibilities
Maintenance and review: a process that monitors and updates the DRP, and addresses any changes
or issues that might affect the DRP, such as the IT requirements, the threat landscape, or the
feedback and lessons learned
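The recovery objectives above can be checked mechanically. The sketch below is illustrative only and not part of the exam material; all function names and the sample figures are hypothetical. It applies the two definitions as given: worst-case data loss equals the backup interval (compared against the RPO), and worst-case downtime equals the estimated restore time (compared against the RTO).

```python
# Illustrative sketch: testing a candidate recovery strategy against
# RPO/RTO targets. Names and numbers are hypothetical.

def meets_objectives(backup_interval_hrs: float,
                     estimated_restore_hrs: float,
                     rpo_hrs: float,
                     rto_hrs: float) -> dict:
    """Worst-case data loss equals the backup interval; worst-case
    downtime equals the estimated restore time."""
    return {
        "rpo_met": backup_interval_hrs <= rpo_hrs,    # data-loss window within tolerance?
        "rto_met": estimated_restore_hrs <= rto_hrs,  # downtime within tolerance?
    }

# Nightly backups (24 h interval) against a 4-hour RPO fail, while an
# 8-hour restore against a 12-hour RTO passes.
print(meets_objectives(24, 8, rpo_hrs=4, rto_hrs=12))
# -> {'rpo_met': False, 'rto_met': True}
```

A strategy that fails either check would need a tighter backup schedule or a faster recovery method before it could be selected.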
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives,
because it can ensure that the DRP is feasible and suitable, and that it can achieve the desired
outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a
technique that compares the costs and benefits of different recovery strategies, and determines the
optimal one that provides the best value for money. A business objective is a goal or a target that the
organization wants to achieve through its IT systems and infrastructure, such as increasing the
productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the
cost/benefit analysis and business objectives can help to:
Optimize the use and allocation of the IT resources and funds for the recovery
Minimize the negative impacts and risks of a disruption or disaster on the IT systems and
infrastructure
Maximize the positive outcomes and benefits of the recovery for the IT systems and infrastructure
Support and enable the achievement of the organizational goals and targets through the IT systems
and infrastructure
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but
rather factors that should be considered or addressed when developing or implementing the
recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be
considered when developing the recovery strategies of a DRP, because they can affect the
functionality and interoperability of the IT systems and infrastructure, and may require additional
resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are
factors that should be addressed when implementing the recovery strategies of a DRP, because they
can determine the priority and urgency of the recovery for different applications, and may require
different levels of recovery objectives and resources. Budget constraints and requirements are
factors that should be considered when developing the recovery strategies of a DRP, because they
can limit the availability and affordability of the IT resources and funds for the recovery, and may
require trade-offs or compromises to balance them.

Answer: D

Question: 45

A continuous information security-monitoring program can BEST reduce risk through which of the
following?

A. Collecting security events and correlating them to identify anomalies
B. Facilitating system-wide visibility into the activities of critical user accounts
C. Encompassing people, process, and technology
D. Logging both scheduled and unscheduled system changes

Answer: C

Explanation:

A continuous information security monitoring program can best reduce risk through encompassing
people, process, and technology. A continuous information security monitoring program is a process
that involves maintaining the ongoing awareness of the security status, events, and activities of a
system or network, by collecting, analyzing, and reporting the security data and information, using
various methods and tools. A continuous information security monitoring program can provide
several benefits, such as:
Improving the security and risk management of the system or network by identifying and addressing
the security weaknesses and gaps
Enhancing the security and decision making of the system or network by providing the evidence and
information for the security analysis, evaluation, and reporting
Increasing the security and improvement of the system or network by providing the feedback and
input for the security response, remediation, and optimization
Facilitating the compliance and alignment of the system or network with the internal or external
requirements and standards
A continuous information security monitoring program can best reduce risk through encompassing
people, process, and technology, because it can ensure that the continuous information security
monitoring program is holistic and comprehensive, and that it covers all the aspects and elements of
the system or network security. People, process, and technology are the three pillars of a continuous
information security monitoring program, and they represent the following:
People: the human resources that are involved in the continuous information security monitoring
program, such as the security analysts, the system administrators, the management, and the users.
People are responsible for defining the security objectives and requirements, implementing and
operating the security tools and controls, and monitoring and responding to the security events and
incidents.
Process: the procedures and policies that are followed in the continuous information security
monitoring program, such as the security standards and guidelines, the security roles and
responsibilities, the security workflows and tasks, and the security metrics and indicators. Process is
responsible for establishing and maintaining the security governance and compliance, ensuring the
security consistency and efficiency, and measuring and evaluating the security performance and
effectiveness.
Technology: the tools and systems that are used in the continuous information security monitoring
program, such as the security sensors and agents, the security loggers and collectors, the security
analyzers and correlators, and the security dashboards and reports. Technology is responsible for
supporting and enabling the security functions and capabilities, providing the security visibility and
awareness, and delivering the security data and information.
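The three pillars can be expressed as a simple completeness check. The sketch below is hypothetical and not from the exam material; the pillar entries are placeholders. It flags any pillar that a monitoring-program definition leaves empty, reflecting the point that all three must be covered.

```python
# Hypothetical sketch: verifying that a monitoring program definition
# covers all three pillars (people, process, technology).

REQUIRED_PILLARS = {"people", "process", "technology"}

def missing_pillars(program: dict) -> set:
    """Return the pillars that have no entries in the program definition."""
    return {p for p in REQUIRED_PILLARS if not program.get(p)}

program = {
    "people": ["security analysts", "system administrators"],
    "process": ["security standards", "security metrics"],
    "technology": [],  # no sensors or collectors defined yet
}
print(missing_pillars(program))
# -> {'technology'}
```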
The other options are not the best ways to reduce risk through a continuous information security
monitoring program, but rather specific or partial ways that can contribute to the risk reduction.
Collecting security events and correlating them to identify anomalies is a specific way to reduce risk
through a continuous information security monitoring program, but it is not the best way, because it
only focuses on one aspect of the security data and information, and it does not address the other
aspects, such as the security objectives and requirements, the security controls and measures, and
the security feedback and improvement. Facilitating system-wide visibility into the activities of
critical user accounts is a partial way to reduce risk through a continuous information security
monitoring program, but it is not the best way, because it only covers one element of the system or
network security, and it does not cover the other elements, such as the security threats and
vulnerabilities, the security incidents and impacts, and the security response and remediation.
Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a
continuous information security monitoring program, but it is not the best way, because it only
focuses on one type of the security events and activities, and it does not focus on the other types,
such as the security alerts and notifications, the security analysis and correlation, and the security
reporting and documentation.
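The event correlation described under option A can be illustrated with a minimal sketch. This is not part of the exam material; the event format, window, and threshold are all assumed for illustration. It flags a source as anomalous when it produces repeated login failures within a sliding time window, which is the kind of correlation a monitoring tool performs.

```python
# Minimal, hypothetical sketch of security-event correlation: flag a source
# as anomalous if it produces THRESHOLD or more failed logins within
# WINDOW seconds. Window and threshold values are assumptions.
from collections import defaultdict

WINDOW = 60      # seconds (assumed)
THRESHOLD = 3    # failed attempts (assumed)

def find_anomalies(events):
    """events: iterable of (timestamp_sec, source_ip, event_type) tuples."""
    failures = defaultdict(list)
    anomalies = set()
    for ts, ip, kind in sorted(events):
        if kind != "login_failure":
            continue
        failures[ip].append(ts)
        # keep only the failures inside the sliding window
        failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW]
        if len(failures[ip]) >= THRESHOLD:
            anomalies.add(ip)
    return anomalies

events = [
    (0, "10.0.0.5", "login_failure"),
    (10, "10.0.0.5", "login_failure"),
    (20, "10.0.0.5", "login_failure"),  # third failure within 60 s
    (30, "10.0.0.9", "login_failure"),  # single failure, not flagged
]
print(find_anomalies(events))
# -> {'10.0.0.5'}
```

As the explanation notes, correlation alone covers only the data-analysis aspect; a full program also needs the people and process elements around it.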
Topic 8, Software Development Security

Question: 46

A Java program is being developed to read a file from computer A and write it to computer B, using a
third computer C. The program is not working as expected. What is the MOST probable security
feature of Java preventing the program from operating as intended?

A. Least privilege
B. Privilege escalation
C. Defense in depth
D. Privilege bracketing

Answer: A

Explanation:

The most probable security feature of Java preventing the program from operating as intended is
least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a
program) should only have the minimum amount of access or permissions that are necessary to
perform its function or task. Least privilege can help to reduce the attack surface and the potential
damage of a system or network, by limiting the exposure and impact of a subject in case of a
compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several
components, such as:
The Java Virtual Machine (JVM): a software layer that executes the Java bytecode and provides an
abstraction from the underlying hardware and operating system. The JVM enforces the security rules
and restrictions on the Java programs, such as the memory protection, the bytecode verification, and
the exception handling.
The Java Security Manager: a class that defines and controls the security policy and permissions for
the Java programs. The Java Security Manager can be configured and customized by the system
administrator or the user, and can grant or deny the access or actions of the Java programs, such as
the file I/O, the network communication, or the system properties.
The Java Security Policy: a file that specifies the security permissions for the Java programs, based on
the code source and the code signer. The Java Security Policy can be defined and modified by the
system administrator or the user, and can assign different levels of permissions to different Java
programs, such as the trusted or the untrusted ones.
The Java Security Sandbox: a mechanism that isolates and restricts the Java programs that are
downloaded or executed from untrusted sources, such as the web or the network. The Java Security
Sandbox applies the default or the minimal security permissions to the untrusted Java programs, and
prevents them from accessing or modifying the local resources or data, such as the files, the
databases, or the registry.
In this question, the Java program is being developed to read a file from computer A and write it to
computer B, using a third computer C. This means that the Java program needs to have the
permissions to perform the file I/O and the network communication operations, which are
considered as sensitive or risky actions by the Java security model. However, if the Java program is
running on computer C with the default or the minimal security permissions, such as in the Java
Security Sandbox, then it will not be able to perform these operations, and the program will not work
as expected. Therefore, the most probable security feature of Java preventing the program from
operating as intended is least privilege, which limits the access or permissions of the Java program
based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as
intended, but rather concepts or techniques that are related to security in general or in other
contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized
access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a
system or network. Privilege escalation can help an attacker to perform malicious actions or to
access sensitive resources or data, by bypassing the security controls or restrictions. Defense in
depth is a concept that states that a system or network should have multiple layers or levels of
security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can
help to protect a system or network from various threats and risks, by using different types of
security measures and controls, such as the physical, the technical, or the administrative ones.
Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or
permissions, to perform a specific function or task, and then return to its original or normal level.
Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time
and scope of its higher or lower access or permissions.

Question: 47

Which of the following is the PRIMARY risk with using open source software in a commercial
software construction?

A. Lack of software documentation
B. License agreements requiring release of modified code
C. Expiration of the license agreement
D. Costs associated with support of the software

Explanation:
The primary risk with using open source software in a commercial software construction is license
agreements requiring release of modified code. Open source software is software that uses publicly
available source code, which can be seen, modified, and distributed by anyone. Open source
software has some advantages, such as being affordable and flexible, but it also has some
disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction
is the license agreements that govern the use and distribution of the open source software. License
agreements are legal contracts that specify the rights and obligations of the parties involved in the
software, such as the original authors, the developers, and the users. License agreements can vary in
terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are:
Permissive licenses: license agreements that allow the developers and users to freely use, modify,
and distribute the open source software, with minimal or no restrictions. Examples of permissive
licenses are the MIT License, the Apache License, or the BSD License.
Copyleft licenses: license agreements that require the developers and users to share and distribute
the open source software and any modifications or derivatives of it, under the same or compatible
license terms and conditions. Examples of copyleft licenses are the GNU General Public License
(GPL), the GNU Lesser General Public License (LGPL), or the Mozilla Public License (MPL).
Mixed licenses: license agreements that combine the elements of permissive and copyleft licenses,
and may apply different license terms and conditions to different parts or components of the open
source software. Examples of mixed licenses are the Eclipse Public License (EPL), the Common
Development and Distribution License (CDDL), or the GNU Affero General Public License (AGPL).
The primary risk with using open source software in a commercial software construction is license
agreements requiring release of modified code, which are usually associated with copyleft licenses.
This means that if a commercial software construction uses or incorporates open source software
that is licensed under a copyleft license, then it must also release its own source code and any
modifications or derivatives of it, under the same or compatible copyleft license. This can pose a
significant risk for the commercial software construction, as it may lose its competitive advantage,
intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or
distribute it.
The other options are not the primary risks with using open source software in a commercial
software construction, but rather secondary or minor risks that may or may not apply to the open
source software. Lack of software documentation is a secondary risk with using open source software
in a commercial software construction, as it may affect the quality, usability, or maintainability of the
open source software, but it does not necessarily affect the rights or obligations of the commercial
software construction. Expiration of the license agreement is a minor risk with using open source
software in a commercial software construction, as it may affect the availability or continuity of the
open source software, but it is unlikely to happen, as most open source software licenses are
perpetual or indefinite. Costs associated with support of the software is a secondary risk with using
open source software in a commercial software construction, as it may affect the reliability, security,
or performance of the open source software, but it can be mitigated or avoided by choosing the open
source software that has adequate or alternative support options.

Answer: B

Question: 48

When in the Software Development Life Cycle (SDLC) MUST software security functional
requirements be defined?

A. After the system preliminary design has been developed and the data security categorization has
been performed
B. After the vulnerability analysis has been performed and before the system detailed design begins
C. After the system preliminary design has been developed and before the data security
categorization begins
D. After the business functional analysis and the data security categorization have been performed

Answer: D

Explanation:

Software security functional requirements must be defined after the business functional analysis and
the data security categorization have been performed in the Software Development Life Cycle
(SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying,
operating, and maintaining a system, using various models and methodologies, such as waterfall,
spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives
and activities, such as:
System initiation: This phase involves defining the scope, purpose, and objectives of the system,
identifying the stakeholders and their needs and expectations, and establishing the project plan and
budget.
System acquisition and development: This phase involves designing the architecture and
components of the system, selecting and procuring the hardware and software resources, developing
and coding the system functionality and features, and integrating and testing the system modules
and interfaces.
System implementation: This phase involves deploying and installing the system to the production
environment, migrating and converting the data and applications from the legacy system, training
and educating the users and staff on the system operation and maintenance, and evaluating and
validating the system performance and effectiveness.
System operations and maintenance: This phase involves operating and monitoring the system
functionality and availability, maintaining and updating the system hardware and software, resolving
and troubleshooting any issues or problems, and enhancing and optimizing the system features and
capabilities.
Software security functional requirements are the specific and measurable security features and
capabilities that the system must provide to meet the security objectives and requirements. Software
security functional requirements are derived from the business functional analysis and the data
security categorization, which are two tasks that are performed in the system initiation phase of the
SDLC. The business functional analysis is the process of identifying and documenting the business
functions and processes that the system must support and enable, such as the inputs, outputs,
workflows, and tasks. The data security categorization is the process of determining the security
level and impact of the system and its data, based on the confidentiality, integrity, and availability
criteria, and applying the appropriate security controls and measures. Software security functional
requirements must be defined after the business functional analysis and the data security
categorization have been performed, because they can ensure that the system design and
development are consistent and compliant with the security objectives and requirements, and that
the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional
requirements must be defined, but rather phases that involve other tasks or activities related to the
system design and development. After the system preliminary design has been developed
and before the data security categorization begins is not the phase when the software security
functional requirements must be defined, but rather the phase when the system architecture and
components are designed, based on the system scope and objectives, and the data security
categorization is initiated and planned.

Answer: D

Question: 49

Which of the following is the BEST method to prevent malware from being introduced into a
production environment?

A. Purchase software from a limited list of retailers
B. Verify the hash key or certificate key of all updates
C. Do not permit programs, patches, or updates from the Internet
D. Test all new software in a segregated environment

Answer: D

Explanation:

Testing all new software in a segregated environment is the best method to prevent malware from
being introduced into a production environment. Malware is any malicious software that can harm or
compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be
introduced into a production environment through various sources, such as software downloads,
updates, patches, or installations. Testing all new software in a segregated environment involves
verifying and validating the functionality and security of the software before deploying it to the
production environment, using a separate system or network that is isolated and protected from the
production environment. Testing all new software in a segregated environment can provide several
benefits, such as:
Preventing the infection or propagation of malware to the production environment
Detecting and resolving any issues or risks caused by the software
Ensuring the compatibility and interoperability of the software with the production environment
Supporting and enabling the quality assurance and improvement of the software
The other options are not the best methods to prevent malware from being introduced into a
production environment, but rather methods that can reduce or mitigate the risk of malware, but not
eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk
of malware from being introduced into a production environment, but not prevent it. This method
involves obtaining software only from trusted and reputable sources, such as official vendors or
distributors, that can provide some assurance of the quality and security of the software. However,
this method does not guarantee that the software is free of malware, as it may still contain hidden or
embedded malware, or it may be tampered with or compromised during the delivery or installation
process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk
of malware from being introduced into a production environment, but not prevent it. This method
involves checking the authenticity and integrity of the software updates, patches, or installations, by
comparing the hash key or certificate key of the software with the expected or published value, using
cryptographic techniques and tools. However, this method does not guarantee that the software is
free of malware, as it may still contain malware that is not detected or altered by the hash key or
certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can
intercept or modify the software or the key. Not permitting programs, patches, or updates from the
Internet is a method that can reduce the risk of malware from being introduced into a production
environment, but not prevent it. This method involves restricting or blocking the access or download
of software from the Internet, which is a common and convenient source of malware, by applying
and enforcing the appropriate security policies and controls, such as firewall rules, antivirus
software, or web filters. However, this method does not guarantee that the software is free of
malware, as it may still be obtained or infected from other sources, such as removable media, email
attachments, or network shares.

Question: 50

The configuration management and control task of the certification and accreditation process is
incorporated in which phase of the System Development Life Cycle (SDLC)?

A. System acquisition and development
B. System operations and maintenance
C. System initiation
D. System implementation

Answer: A

Explanation:

The configuration management and control task of the certification and accreditation process is
incorporated in the system acquisition and development phase of the System Development Life
Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying,
operating, and maintaining a system, using various models and methodologies, such as waterfall,
spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives
and activities, such as:
System initiation: This phase involves defining the scope, purpose, and objectives of the system,
identifying the stakeholders and their needs and expectations, and establishing the project plan and
budget.
System acquisition and development: This phase involves designing the architecture and
components of the system, selecting and procuring the hardware and software resources, developing
and coding the system functionality and features, and integrating and testing the system modules
and interfaces.
System implementation: This phase involves deploying and installing the system to the production
environment, migrating and converting the data and applications from the legacy system, training
and educating the users and staff on the system operation and maintenance, and evaluating and
validating the system performance and effectiveness.
System operations and maintenance: This phase involves operating and monitoring the system
functionality and availability, maintaining and updating the system hardware and software, resolving
and troubleshooting any issues or problems, and enhancing and optimizing the system features and
capabilities.
The certification and accreditation process is a process that involves assessing and verifying the
security and compliance of a system, and authorizing and approving the system operation and
maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The
certification and accreditation process can be divided into several tasks, each with its own objectives
and activities, such as:
Security categorization: This task involves determining the security level and impact of the system
and its data, based on the confidentiality, integrity, and availability criteria, and applying the
appropriate security controls and measures.
Security planning: This task involves defining the security objectives and requirements of the system,
identifying the roles and responsibilities of the security stakeholders, and developing and
documenting the security plan and policy.
Security implementation: This task involves implementing and enforcing the security controls and
measures for the system, according to the security plan and policy, and ensuring the security
functionality and compatibility of the system.
Security assessment: This task involves evaluating and testing the security effectiveness and
compliance of the system, using various techniques and tools, such as audits, reviews, scans, or
penetration tests, and identifying and reporting any security weaknesses or gaps.
Security authorization: This task involves reviewing and approving the security assessment results
and recommendations, and granting or denying the authorization for the system operation and
maintenance, based on the risk and impact analysis and the security objectives and requirements.
Security monitoring: This task involves monitoring and updating the security status and activities of
the system, using various methods and tools, such as logs, alerts, or reports, and addressing and
resolving any security issues or changes.
The configuration management and control task of the certification and accreditation process is
incorporated in the system acquisition and development phase of the SDLC, because it can ensure
that the system design and development are consistent and compliant with the security objectives
and requirements, and that the system changes are controlled and documented. Configuration
management and control is a process that involves establishing and maintaining the baseline and the
inventory of the system components and resources, such as hardware, software, data, or
documentation, and tracking and recording any modifications or updates to the system components
and resources, using various techniques and tools, such as version control, change control, or
configuration audits. Configuration management and control can provide several benefits, such as:
Improving the quality and security of the system design and development by identifying and
addressing any errors or inconsistencies
Enhancing the performance and efficiency of the system design and development by optimizing the
use and allocation of the system components and resources
Increasing the compliance and alignment of the system design and development with the security
objectives and requirements by applying and enforcing the security controls and measures
Facilitating the monitoring and improvement of the system design and development by providing the
evidence and information for the security assessment and authorization
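As a rough illustration of the baseline-and-tracking idea (not any particular tool), a configuration baseline can be recorded as content hashes and later diffed to detect uncontrolled changes; the component names and contents below are invented for the example.

```python
import hashlib

def record_baseline(items: dict) -> dict:
    """Capture a configuration baseline: component name -> SHA-256 of its content."""
    return {name: hashlib.sha256(content).hexdigest()
            for name, content in items.items()}

def detect_drift(baseline: dict, current: dict) -> list:
    """List components whose current state no longer matches the baseline."""
    now = record_baseline(current)
    return [name for name in baseline if now.get(name) != baseline[name]]

base = record_baseline({"app.cfg": b"debug=false", "fw.rules": b"deny all"})
changed = detect_drift(base, {"app.cfg": b"debug=true", "fw.rules": b"deny all"})
print(changed)  # ['app.cfg']
```

Any entry reported by `detect_drift` is a change that should be traceable to an approved change-control record; an untraceable one is exactly the kind of uncontrolled modification configuration management exists to catch.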
The other options are not the phases of the SDLC that incorporate the configuration management
and control task of the certification and accreditation process, but rather phases that involve other
tasks of the certification and accreditation process. System operations and maintenance is a phase of
the SDLC that incorporates the security monitoring task of the certification and accreditation process,
because it can ensure that the system operation and maintenance are consistent and compliant with
the security objectives and requirements, and that the system security is updated and improved.
System initiation is a phase of the SDLC that incorporates the security categorization and security
planning tasks of the certification and accreditation process, because it can ensure that the system
scope and objectives are defined and aligned with the security objectives and requirements, and that
the security plan and policy are developed and documented. System implementation is a phase of
the SDLC that incorporates the security assessment and security authorization tasks of the
certification and accreditation process, because it can ensure that the system deployment and
installation are evaluated and verified for the security effectiveness and compliance, and that the
system operation and maintenance are authorized and approved based on the risk and impact
analysis and the security objectives and requirements.
What is the BEST approach to addressing security issues in legacy web applications?
A. Debug the security issues
B. Migrate to newer, supported applications where possible
C. Conduct a security assessment
D. Protect the legacy application with a web application firewall
Explanation:
Migrating to newer, supported applications where possible is the best approach to addressing
security issues in legacy web applications. Legacy web applications are web applications that are
outdated, unsupported, or incompatible with the current technologies and standards. Legacy web
applications may have various security issues, such as:
Vulnerabilities and bugs that are not fixed or patched by the developers or vendors
Weak or obsolete encryption and authentication mechanisms that are easily broken or bypassed by
attackers
Lack of compliance with the security policies and regulations that are applicable to the web
applications
Incompatibility or interoperability issues with the newer web browsers, operating systems, or
platforms that are used by the users or clients
Migrating to newer, supported applications where possible is the best approach to addressing
security issues in legacy web applications, because it can provide several benefits, such as:
Enhancing the security and performance of the web applications by using the latest technologies and
standards that are more secure and efficient
Reducing the risk and impact of the web application attacks by eliminating or minimizing the
vulnerabilities and bugs that are present in the legacy web applications
Increasing the compliance and alignment of the web applications with the security policies and
regulations that are applicable to the web applications
Improving the compatibility and interoperability of the web applications with the newer web
browsers, operating systems, or platforms that are used by the users or clients
The other options are not the best approaches to addressing security issues in legacy web
applications, but rather approaches that mitigate or remediate the issues without eliminating or
preventing them. Debugging the security issues involves identifying and fixing errors or defects in
the code or logic of the web applications, which may be difficult or impossible for legacy web
applications that are outdated or unsupported. Conducting a security assessment involves
evaluating and testing the security effectiveness and compliance of the web applications, using
techniques and tools such as audits, reviews, scans, or penetration tests, and identifying and
reporting any security weaknesses or gaps; it reveals the issues but does not fix them, and may not
be feasible for legacy web applications that are incompatible or obsolete. Protecting the legacy
application with a web application firewall involves deploying and configuring a security device or
software that monitors and filters the web traffic between the web applications and the users or
clients, and blocks or allows requests or responses based on predefined rules or policies; this may
not be effective or efficient for legacy web applications that have weak or outdated encryption or
authentication mechanisms.
Question: 51
Answer: B
Topic 9, Exam Set A
Question: 52
Which of the following methods protects Personally Identifiable Information (PII) by use of a full
replacement of the data element?
A. Transparent Database Encryption (TDE)
B. Column level database encryption
C. Volume encryption
D. Data tokenization
Answer: D
Which of the following elements MUST a compliant EU-US Safe Harbor Privacy Policy contain?
A. An explanation of how long the data subject's collected information will be retained for and how it
will be eventually disposed.
B. An explanation of who can be contacted at the organization collecting the information if
corrections are required by the data subject.
C. An explanation of the regulatory frameworks and compliance standards the information collecting
organization adheres to.
D. An explanation of all the technologies employed by the collecting organization in gathering
information on the data subject.
Explanation:
The EU-US Safe Harbor Privacy Policy is a framework that was established in 2000 to enable the
transfer of personal data from the European Union to the United States, while ensuring adequate
protection of the data subject’s privacy rights. The framework was invalidated by the European
Court of Justice in 2015, and replaced by the EU-US Privacy Shield in 2016. However, the Safe
Harbor Privacy Policy still serves as a reference for the principles and requirements of data
protection across the Atlantic. One of the elements that a compliant Safe Harbor Privacy Policy
must contain is an explanation of who can be contacted at the organization collecting the
information if corrections are required by the data subject. This is part of the principle of access,
which states that individuals must have access to their personal information and be able to correct,
amend, or delete it where it is inaccurate. Reference: 3: CISSP All-in-One Exam Guide, Eighth
Edition, Chapter 5, page 295; 4: CISSP For Dummies, 7th Edition, Chapter 10, page 284; Official
(ISC)2 CISSP CBK Reference, 5th
Explanation:
Data tokenization is a method of protecting PII by replacing the sensitive data element with a non-
sensitive equivalent, called a token, that has no extrinsic or exploitable meaning or value. The token
is then mapped back to the original data element in a secure database. This way, the PII is not
exposed in data processing or storage, and only authorized parties can access the original data
element. Data tokenization is different from encryption, which transforms the data element into a
ciphertext that can be decrypted with a key. Data tokenization does not require a key, and the token
cannot be reversed to reveal the original data element. Reference: 1: CISSP All-in-One Exam Guide,
Eighth Edition, Chapter 5, page 281; 2: CISSP For Dummies, 7th Edition, Chapter 10, page 289.
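A minimal sketch of the idea follows; the in-memory dict here stands in for the secure mapping database, and the token format is an invention of the example.

```python
import secrets

class TokenVault:
    """Replace a PII value with a random token that has no mathematical
    relationship to the original; the mapping exists only in the vault."""

    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, pii: str) -> str:
        token = secrets.token_hex(8)  # random, not derived from the PII
        self._vault[token] = pii
        return token

    def detokenize(self, token: str) -> str:
        # Authorized lookup only; without vault access a token is worthless.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("123-45-6789")
print(vault.detokenize(token))  # 123-45-6789
```

Unlike encryption, there is no key that could recover the original from the token alone; an attacker who steals only tokens learns nothing about the underlying PII.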
Question: 53
Answer: B
A. overcome the problems of key assignments.
B. monitor the opening of windows and doors.
C. trigger alarms when intruders are detected.
D. lock down a facility during an emergency.
The PRIMARY purpose of a security awareness program is to
A. ensure that everyone understands the organization's policies and procedures.
B. communicate that access to information will be granted on a need-to-know basis.
C. warn all users that access to all systems will be monitored on a daily basis.
D. comply with regulations related to data and information protection.
As one component of a physical security system, an Electronic Access Control (EAC) token is BEST
known for its ability to
Explanation:
The primary purpose of a security awareness program is to ensure that everyone understands the
organization’s policies and procedures related to information security. A security awareness program
is a set of activities, materials, or events that aim to educate and inform the employees, contractors,
partners, and customers of the organization about the security goals, principles, and practices of the
organization1. A security awareness program can help to create a security culture, improve the
security behavior, and reduce the human errors or risks. Communicating that access to information
will be granted on a need-to-know basis, warning all users that access to all systems will be
monitored on a daily basis, and complying with regulations related to data and information
protection are not the primary purposes of a security awareness program, as they are more specific
or secondary objectives that may be part of the program, but not the main goal. Reference: 1: CISSP
All-in-One Exam Guide, Eighth Edition, Chapter 1, page 28.
Question: 54
Question: 55
Answer: A
Which one of the following is a fundamental objective in handling an incident?
A. To restore control of the affected systems
B. To confiscate the suspect's computers
C. To prosecute the attacker
D. To perform full backups of the system
Explanation:
A fundamental objective in handling an incident is to restore control of the affected systems as soon
as possible. An incident is an event or a situation that violates or threatens the security,
confidentiality, integrity, or availability of an organization’s information assets or resources3.
Handling an incident is the process of responding to, containing, analyzing, recovering from, and
reporting on an incident, with the aim of minimizing the impact and preventing the recurrence of the
incident. Restoring control of the affected systems is a crucial objective in handling an incident, as it
can help to resume the normal operations, services, and functions of the organization, and to
Explanation:
An Electronic Access Control (EAC) token is best known for its ability to overcome the problems of
key assignments in a physical security system. An EAC token is a device that can be used to
authenticate a user or grant access to a physical area or resource, such as a door, a gate, or a locker2.
An EAC token can be a smart card, a magnetic stripe card, a proximity card, a key fob, or a biometric
device. An EAC token can overcome the problems of key assignments, which are the issues or
challenges of managing and distributing physical keys to authorized users, such as lost, stolen,
duplicated, or unreturned keys. An EAC token can provide more security, convenience, and flexibility
than a physical key, as it can be easily activated, deactivated, or replaced, and it can also store
additional information or perform other functions. Monitoring the opening of windows and doors,
triggering alarms when intruders are detected, and locking down a facility during an emergency are
not the abilities that an EAC token is best known for, as they are more related to the functions of
other components of a physical security system, such as sensors, alarms, or locks. Reference: 2: CISSP
For Dummies, 7th Edition, Chapter 9, page 253.
Question: 56
Answer: A
Answer: A
A. Communication
B. Planning
C. Recovery
D. Escalation
A. user to the audit process.
B. computer system to the user.
C. user's access to all authorized objects.
The process of mutual authentication involves a computer system authenticating a user and
authenticating the
In the area of disaster planning and recovery, what strategy entails the presentation of information
about the plan?
Explanation:
Communication is the strategy that involves the presentation of information about the disaster
recovery plan to the stakeholders, such as management, employees, customers, vendors, and
regulators. Communication ensures that everyone is aware of their roles and responsibilities in the
event of a disaster, and that the plan is updated and tested regularly. Reference: 1: CISSP All-in-
One Exam Guide, Eighth Edition, Chapter 10, page 1019; 2: CISSP For Dummies, 7th Edition, Chapter
10, page 343.
mitigate the damage or loss caused by the incident. Confiscating the suspect’s computers,
prosecuting the attacker, and performing full backups of the system are not fundamental objectives
in handling an incident, as they are more related to the investigation, legal, or recovery aspects of
the incident, which may not be as urgent or essential as restoring control of the affected
systems. Reference: 3: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 7, page 375; CISSP
All-in-One Exam Guide, Eighth Edition, Chapter 9, page 559.
Question: 57
Question: 58
Answer: A
A. Program change control
B. Regression testing
C. Export exception control
D. User acceptance testing
D. computer system to the audit process.
Which one of the following describes granularity?
A. Maximum number of entries available in an Access Control List (ACL)
B. Fineness to which a trusted system can authenticate users
What maintenance activity is responsible for defining, implementing, and testing updates to
application systems?
Explanation:
Program change control is the maintenance activity that is responsible for defining, implementing,
and testing updates to application systems. Program change control ensures that the changes are
authorized, documented, reviewed, tested, and approved before being deployed to the production
environment. Program change control also maintains a record of the changes and their impact on the
system. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 823; CISSP For
Dummies, 7th Edition, Chapter 8, page 263.
Explanation:
Mutual authentication is the process of verifying the identity of both parties in a communication. The
computer system authenticates the user by verifying their credentials, such as username and
password, biometrics, or tokens. The user authenticates the computer system by verifying its
identity, such as a digital certificate, a trusted third party, or a challenge-response
mechanism. Reference: 3: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 515; 4:
CISSP For Dummies, 7th Edition, Chapter 5, page 151.
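The challenge-response flavour of mutual authentication can be sketched with an HMAC over a shared secret; the pre-provisioned key is an assumption of this example, and real systems more commonly use certificates or a trusted third party instead.

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"pre-provisioned secret"  # assumed to be held by both parties

def respond(challenge: bytes, key: bytes) -> bytes:
    """Prove knowledge of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Direction 1: the computer system authenticates the user.
system_challenge = secrets.token_bytes(16)
user_proof = respond(system_challenge, SHARED_KEY)
assert hmac.compare_digest(user_proof, respond(system_challenge, SHARED_KEY))

# Direction 2: the user authenticates the system with a fresh challenge.
user_challenge = secrets.token_bytes(16)
system_proof = respond(user_challenge, SHARED_KEY)
assert hmac.compare_digest(system_proof, respond(user_challenge, SHARED_KEY))

print("mutual authentication succeeded")
```

Each side issues its own fresh random challenge, so neither direction of proof can be replayed; that is what makes the authentication mutual rather than one-way.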
Question: 59
Question: 60
Answer: B
Answer: A
C. Number of violations divided by the number of total accesses
D. Fineness to which an access control system can be adjusted
An organization is selecting a service provider to assist in the consolidation of multiple
computing sites including development, implementation and ongoing support of various
computer systems. Which of the following MUST be verified by the Information Security
Department?
Explanation:
Granularity is the degree of detail or precision that an access control system can provide. A granular
access control system can specify different levels of access for different users, groups, resources, or
conditions. For example, a granular firewall can allow or deny traffic based on the source,
destination, port, protocol, time, or other criteria.
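The firewall example can be made concrete with a toy rule matcher; the field names and the None-as-wildcard convention are inventions of this sketch, not any real firewall syntax.

```python
def rule_matches(rule: dict, packet: dict) -> bool:
    """A granular rule pins down many fields; a coarse rule leaves
    fields as None, which acts as a wildcard."""
    return all(value is None or packet.get(field) == value
               for field, value in rule.items())

coarse = {"dst_port": 443, "src": None, "proto": None}
granular = {"dst_port": 443, "src": "10.0.0.5", "proto": "tcp"}

packet = {"dst_port": 443, "src": "10.0.0.9", "proto": "tcp"}
print(rule_matches(coarse, packet))    # True  -- any source may reach port 443
print(rule_matches(granular, packet))  # False -- wrong source address
```

The more fields a rule can constrain, the finer the distinctions the access control system can express, which is exactly what granularity measures.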
Explanation:
The Information Security Department must verify that the service provider will impose controls and
protections that meet or exceed the current systems controls and produce audit logs as verification.
This is to ensure that the service provider will maintain or improve the security posture of the
organization, and that the organization will be able to monitor and audit the service provider’s
performance and compliance. The service provider’s policies may or may not be consistent with
ISO/IEC 27001, but this is not a mandatory requirement, as long as the service provider can meet
the organization’s security needs and expectations. The service provider may or may not
A. The service provider's policies are consistent with ISO/IEC27001 and there is evidence that the
service provider is following those policies.
B. The service provider will segregate the data within its systems and ensure that each region's
policies are met.
C. The service provider will impose controls and protections that meet or exceed the current systems
controls and produce audit logs as verification.
D. The service provider's policies can meet the requirements imposed by the new environment even
if they differ from the organization's current policies.
Question: 61
Answer: C
Answer: D
A. Signature
B. Inference
C. Induction
D. Heuristic
What technique BEST describes antivirus software that detects viruses by watching anomalous
behavior?
Which of the following is the FIRST action that a system administrator should take when it is
revealed during a penetration test that everyone in an organization has unauthorized access to a
server holding sensitive data?
A. Immediately document the finding and report to senior management.
B. Use system privileges to alter the permissions to secure the server
C. Continue the testing to its completion and then inform IT management
D. Terminate the penetration test and pass the finding to the server management team
Explanation:
Heuristic is the technique that best describes antivirus software that detects viruses by watching
anomalous behavior. Heuristic is a method of virus detection that analyzes the behavior and
characteristics of the program or file, rather than comparing it to a known signature or pattern.
Heuristic can detect unknown or new viruses that have not been identified or cataloged by the
antivirus software. However, heuristic can also generate false positives, as some legitimate programs
or files may exhibit suspicious or unusual behavior. Reference: 1: What is Heuristic Analysis?; 2:
Heuristic Virus Detection.
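A toy behaviour-scoring heuristic illustrates the idea; the behaviours, weights, and threshold are all invented for the example, and real engines use far richer models (emulation, machine learning, and so on).

```python
# Invented behaviour weights for illustration only.
SUSPICIOUS_BEHAVIOURS = {
    "self_replicates": 5,
    "modifies_registry_run_key": 4,
    "writes_to_system_dir": 3,
    "opens_network_listener": 2,
}
THRESHOLD = 6  # scores at or above this are flagged

def heuristic_verdict(observed: set) -> str:
    """Flag a program by the anomaly of its behaviour, not by signature."""
    score = sum(SUSPICIOUS_BEHAVIOURS.get(b, 0) for b in observed)
    return "flagged" if score >= THRESHOLD else "clean"

print(heuristic_verdict({"self_replicates", "writes_to_system_dir"}))  # flagged
print(heuristic_verdict({"opens_network_listener"}))                   # clean
```

A benign installer that writes to the system directory and registers a run key would also score 7 here and be flagged, which is precisely the false-positive risk the text describes.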
regulatory obligations. The service provider’s policies may differ from the organization’s current
policies, as long as they can meet the requirements imposed by the new environment, and are
agreed upon by both parties. Reference: 1: How to Choose a Managed Security Service Provider
(MSSP); 2: 10 Questions to Ask Your Managed Security Service Provider.
Question: 62
Question: 63
Answer: D
Explanation:
This is the principle of open design, which states that the security of a system or mechanism should
rely on the strength of its key or algorithm, rather than on the obscurity of its design or
implementation. This principle is based on the assumption that the adversary has full knowledge of
the system or mechanism, and that the security should still hold even if that is the case. The other
options are not consistent with the principle of open design, as they either imply that the security
depends on hiding or protecting the design or implementation (A and B), or that the user’s
knowledge or privileges affect the security (C). Reference: CISSP All-in-One Exam Guide, Eighth
Edition, Chapter 3, page 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 109.
Which of the following BEST represents the principle of open design?
A. Disassembly, analysis, or reverse engineering will reveal the security functionality of the computer
system.
B. Algorithms must be protected to ensure the security and interoperability of the designed system.
C. A knowledgeable user should have limited privileges on the system to prevent their ability to
compromise security capabilities.
D. The security of a mechanism should not depend on the secrecy of its design or implementation.
Explanation:
If a system administrator discovers a serious security breach during a penetration test, such as
unauthorized access to a server holding sensitive data, the first action that he or she should take is to
immediately document the finding and report it to senior management. This is because senior
management is ultimately responsible for the security of the organization and its assets, and they
need to be aware of the situation and take appropriate actions to mitigate the risk and prevent
further damage. Documenting the finding is also important to provide evidence and support for the
report, and to comply with any legal or regulatory requirements. Using system privileges to alter the
permissions to secure the server, continuing the testing to its completion, or terminating the
penetration test and passing the finding to the server management team are not the first actions that
a system administrator should take, as they may not address the root cause of the problem, may
interfere with the ongoing testing, or may delay the notification of senior management.
Question: 64
Question: 65
Answer: A
Answer: D
Explanation:
A security consultant has been asked to research an organization's legal obligations to protect
privacy-related information. What kind of reading material is MOST relevant to this project?
A. The organization's current security policies concerning privacy issues
B. Privacy-related regulations enforced by governing bodies applicable to the organization
C. Privacy best practices published by recognized security standards organizations
D. Organizational procedures designed to protect privacy information
According to best practice, which of the following groups is the MOST effective in performing an
information security compliance audit?
A. In-house security administrators
B. In-house Network Team
C. Disaster Recovery (DR) Team
D. External consultants
Explanation:
The most relevant reading material for researching an organization’s legal obligations to protect
privacy-related information is the privacy-related regulations enforced by governing bodies
applicable to the organization. These regulations define the legal requirements, standards, and
penalties for collecting, processing, storing, and disclosing personal or sensitive information of
individuals or entities. The organization must comply with these regulations to avoid legal liabilities,
fines, or sanctions. The other options are not as relevant as privacy-related regulations, as they
either do not reflect the legal obligations of the organization (A and C), or do not apply to all types of
privacy-related information (D). Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1,
page 22; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 31.
Question: 66
Answer: B
Answer: D
Topic 10, Exam Set B
An organization decides to implement a partial Public Key Infrastructure (PKI) with only the servers
having digital certificates. What is the security benefit of this implementation?
A. Clients can authenticate themselves to the servers.
B. Mutual authentication is available between the clients and servers.
C. Servers are able to issue digital certificates to the client.
D. Servers can authenticate themselves to the client.
Explanation:
A Public Key Infrastructure (PKI) is a system that provides the services and mechanisms for creating,
managing, distributing, using, storing, and revoking digital certificates, which are electronic
documents that bind a public key to an identity. A digital certificate can be used to authenticate the
identity of an entity, such as a person, a device, or a server, that possesses the corresponding private
key. An organization can implement a partial PKI with only the servers having digital certificates,
which means that only the servers can prove their identity to the clients, but not vice versa. The
security benefit of this implementation is that servers can authenticate themselves to the client,
which can prevent impersonation, spoofing, or man-in-the-middle attacks by malicious servers.
Clients can authenticate themselves to the servers, mutual authentication is available between the
clients and servers, and servers are able to issue digital certificates to the client are not the security
benefits of this implementation, as they require the clients to have digital certificates as
well. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and
Symmetric Key Algorithms, page 615. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5,
Cryptography and Symmetric Key Algorithms, page 631.
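In TLS terms, a server-only PKI corresponds to the default client-side configuration, which can be sketched with Python's ssl module; no network connection is made here, only the context setup is shown.

```python
import ssl

# Default client context: the client will demand and validate the
# server's certificate, but loads no certificate of its own.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server must prove identity
print(ctx.check_hostname)                    # True: cert must match the hostname

# Mutual authentication would additionally require the client to call
# ctx.load_cert_chain(...) with its own certificate and private key.
```

Because the client presents no certificate, the server gains no PKI-based assurance of the client's identity; only the "servers can authenticate themselves to the client" benefit applies.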
According to best practice, the most effective group in performing an information security compliance
audit is external consultants. External consultants are independent and objective third parties that
can provide unbiased and impartial assessment of the organization’s compliance with the security
policies, standards, and regulations. External consultants can also bring expertise, experience, and
best practices from other organizations and industries, and offer recommendations for improvement.
The other options are not as effective as external consultants, as they either have a conflict of
interest or lack of independence (A and B), or do not have the primary role or responsibility of
conducting compliance audits (C). Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5,
page 240; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 302.
Question: 67
Question: 68
Answer: D
When implementing a secure wireless network, which of the following supports authentication and
authorization for individual client endpoints.
A. Temporal Key Integrity Protocol (TKIP)
B. Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK)
C. Wi-Fi Protected Access 2 (WPA2) Enterprise
D. Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)
A thorough review of an organization's audit logs finds that a disgruntled network
administrator has intercepted emails meant for the Chief Executive Officer (CEO) and
changed them before forwarding them to their intended recipient. What type of attack has
MOST likely occurred?
Explanation:
When implementing a secure wireless network, the option that supports authentication and
authorization for individual client endpoints is Wi-Fi Protected Access 2 (WPA2) Enterprise. WPA2 is a
security protocol that provides encryption and authentication for wireless networks, based on the
IEEE 802.11i standard. WPA2 has two modes: Personal and Enterprise. WPA2 Personal uses a Pre-
Shared Key (PSK) that is shared among all the devices on the network, and does not require a
separate authentication server. WPA2 Enterprise uses an Extensible Authentication Protocol (EAP)
that authenticates each device individually, using a username and password or a certificate, and
requires a Remote Authentication Dial-In User Service (RADIUS) server or another authentication
server. WPA2 Enterprise provides more security and granularity than WPA2 Personal, as it can
support different levels of access and permissions for different users or groups, and can prevent
unauthorized or compromised devices from joining the network. Temporal Key Integrity Protocol
(TKIP), Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK), and Counter Mode with Cipher Block
Chaining Message Authentication Code Protocol (CCMP) are not the options that support
authentication and authorization for individual client endpoints, as they are related to the encryption
or integrity of the wireless data, not the identity or access of the wireless devices. Reference: CISSP
All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page
506. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network
Security, page 522.
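The operational difference between a shared PSK and per-user Enterprise credentials can be sketched in a few lines of Python. This is a conceptual illustration only: the user names, passwords, and plain string comparison are invented, and a real WPA2 Enterprise deployment performs an EAP exchange against a RADIUS server rather than comparing passwords directly.

```python
# WPA2-Personal: every device shares one secret; revoking a single user
# means re-keying the whole network.
shared_psk = "office-wifi-passphrase"

# WPA2-Enterprise (conceptual sketch): per-user credentials held by a
# RADIUS/authentication server, so access is granted and revoked per endpoint.
radius_directory = {"alice": "alice-secret", "bob": "bob-secret"}

def enterprise_auth(username: str, password: str) -> bool:
    """Toy stand-in for an EAP/RADIUS check: one decision per user."""
    return radius_directory.get(username) == password

assert enterprise_auth("alice", "alice-secret")

# Revoke only Bob; Alice's access is untouched and no shared key changes hands.
del radius_directory["bob"]
assert not enterprise_auth("bob", "bob-secret")
assert enterprise_auth("alice", "alice-secret")
```

The per-user revocation at the end is exactly the granularity WPA2 Personal cannot provide, since its single PSK is common to all devices.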
Question: 69
Answer: C
A. Spoofing
B. Eavesdropping
C. Man-in-the-middle
D. Denial of service
Which of the following is the MOST effective attack against cryptographic hardware modules?
A. Plaintext
B. Brute force
C. Power analysis
D. Man-in-the-middle (MITM)
Explanation:
The type of attack that has most likely occurred when a disgruntled network administrator has
intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding
them to their intended recipient is a man-in-the-middle (MITM) attack. A MITM attack is a type of
attack that involves an attacker intercepting, modifying, or redirecting the communication between
two parties, without their knowledge or consent. The attacker can alter, delete, or inject data, or
impersonate one of the parties, to achieve malicious goals, such as stealing information,
compromising security, or disrupting service. A MITM attack can be performed on various types of
networks or protocols, such as email, web, or wireless. Spoofing, eavesdropping, and denial of
service are not the types of attack that have most likely occurred in this scenario, as they do not
involve the modification or manipulation of the communication between the parties, but rather the
falsification, observation, or prevention of the communication. Reference: CISSP All-in-One Exam
Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462. Official (ISC)2
CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.
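A standard countermeasure to this kind of in-transit tampering is a message authentication code. The sketch below, using Python's standard hmac and hashlib modules, shows how a modified message fails verification; the shared key and message contents are invented for illustration.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time comparison of the expected and received tags."""
    return hmac.compare_digest(sign(key, message), tag)

key = b"shared-secret-between-sender-and-ceo"   # illustrative key
original = b"Approve the Q3 budget as discussed."
tag = sign(key, original)

# A man-in-the-middle alters the message but cannot forge a valid tag
# without the key.
tampered = b"Wire $1M to account 12345 immediately."
assert verify(key, original, tag)
assert not verify(key, tampered, tag)
```

Integrity protection like this does not stop interception by itself, but it makes silent modification of the kind described above detectable by the recipient.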
Explanation:
The most effective attack against cryptographic hardware modules is power analysis. Power analysis
Question: 70
Answer: C
Answer: C
Which of the following is the MOST difficult to enforce when using cloud computing?
A. Data access
B. Data backup
C. Data recovery
D. Data disposal
is a type of side-channel attack that exploits the physical characteristics or behavior of a
cryptographic device, such as a smart card, a hardware security module, or a cryptographic
processor, to extract secret information, such as keys, passwords, or algorithms. Power
analysis measures the power consumption or the electromagnetic radiation of the device,
and analyzes the variations or patterns that correspond to the cryptographic operations or
the data being processed. Power analysis can reveal the internal state or the logic of the
device, and can bypass the security mechanisms or the tamper resistance of the device.
Power analysis can be performed with low-cost and widely available equipment, and can
be very difficult to detect or prevent. Plaintext, brute force, and man-in-the-middle (MITM)
are not the most effective attacks against cryptographic hardware modules, as they are
related to the encryption or transmission of the data, not the physical properties or
behavior of the device. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5,
Cryptography and Symmetric Key Algorithms, page 628. Official (ISC)2 CISSP CBK Reference,
Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 644.
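The core idea of power analysis can be demonstrated with a toy simulation in Python. Here each "power measurement" is modeled as the Hamming weight of an intermediate value plus noise, a standard simplification; the key byte, trace count, and noise level are arbitrary choices for illustration, not measurements from real hardware.

```python
import random

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def simulate_traces(key_byte: int, n: int = 200, seed: int = 7):
    """Model each 'power measurement' as HW(plaintext XOR key) plus noise."""
    rng = random.Random(seed)
    plaintexts = [rng.randrange(256) for _ in range(n)]
    traces = [hamming_weight(p ^ key_byte) + rng.gauss(0, 0.5) for p in plaintexts]
    return plaintexts, traces

def pearson(xs, ys) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    return cov / var ** 0.5 if var else 0.0

def recover_key(plaintexts, traces) -> int:
    """Pick the key guess whose predicted leakage best matches the traces."""
    return max(
        range(256),
        key=lambda g: pearson([hamming_weight(p ^ g) for p in plaintexts], traces),
    )

SECRET = 0x3C
plaintexts, traces = simulate_traces(SECRET)
assert recover_key(plaintexts, traces) == SECRET  # key leaked via 'power' alone
```

The attack never touches the key directly: it correlates a leakage model against the observed consumption, which is why tamper resistance and logical access controls alone do not defend against it.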
Explanation:
The most difficult thing to enforce when using cloud computing is data disposal. Data disposal is the
process of permanently deleting or destroying the data that is no longer needed or authorized, in a
secure and compliant manner. Data disposal is challenging when using cloud computing, because the
data may be stored or replicated in multiple locations, devices, or servers, and the cloud provider
may not have the same policies, procedures, or standards as the cloud customer. Data disposal may
also be affected by the legal or regulatory requirements of different jurisdictions, or the contractual
obligations of the cloud service agreement. Data access, data backup, and data recovery are not the
most difficult things to enforce when using cloud computing, as they can be achieved by using
encryption, authentication, authorization, replication, or restoration techniques, and by specifying
the service level agreements and the roles and responsibilities of the cloud provider and the cloud
customer. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture
and Engineering, page 337. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security
Architecture and Engineering, page 353.
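One practical technique for cloud data disposal is cryptographic erasure (crypto-shredding): encrypt each data set under its own key, and dispose of the data by destroying the key, so every replicated ciphertext copy becomes unreadable wherever the provider stored it. The sketch below uses a toy SHA-256 counter-mode keystream purely for illustration; it is not production cryptography.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over key || counter (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

record = b"customer record slated for disposal"
key = b"per-record data-encryption key"

ciphertext = xor_cipher(key, record)
assert xor_cipher(key, ciphertext) == record   # recoverable while the key exists
assert ciphertext != record

# Disposal: destroy the key. Every replicated ciphertext copy held by the
# cloud provider is now effectively unreadable.
key = None
```

Because only the key must be destroyed, this sidesteps the problem that the customer cannot physically wipe the provider's disks or replicas.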
Question: 71
Answer: D
DRAG DROP
Given the various means to protect physical and logical assets, match the access management area
to the technology.
Facilities - Window
Devices - Firewall
Information Systems - Authentication
(Note: “Encryption” is not matched as it can be applied in various areas including Devices and
Information Systems.)
In the context of protecting physical and logical assets, the access management areas and the technologies can be matched as follows:
- Facilities are the physical buildings or locations that house the organization's assets, such as servers, computers, or documents. Facilities can be protected by using windows that are resistant to breakage, intrusion, or eavesdropping, and that can prevent the leakage of light or sound from inside the facilities.
- Devices are the hardware or software components that enable the communication or processing of data, such as routers, switches, firewalls, or applications. Devices can be protected by using firewalls that can filter, block, or allow the network traffic based on the predefined rules or policies, and that can prevent unauthorized or malicious access or attacks to the devices or the data.
- Information Systems are the systems that store, process, or transmit data, such as databases, servers, or applications. Information Systems can be protected by using authentication mechanisms that can verify the identity or the credentials of the users or the devices that request access to the information systems, and that can prevent impersonation or spoofing of the users or the devices.
- Encryption is a technology that can be applied in various areas, such as Devices or Information Systems, to protect the confidentiality or the integrity of the data. Encryption can transform the data into an unreadable or unrecognizable form,
using a secret key or an algorithm, and can prevent the interception, disclosure, or modification of the data by unauthorized parties.
Question: 72
Answer:
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization's network. A plan will be necessary to address these concerns.
What is the BEST reason for the organization to pursue a plan to mitigate client-based attacks?
A. Client privilege administration is inherently weaker than server privilege administration.
B. Client hardening and management is easier on clients than on servers.
C. Client-based attacks are more common and easier to exploit than server and network based attacks.
D. Client-based attacks have higher financial impact.
Explanation:
The best reason for the organization to pursue a plan to mitigate client-based attacks is that client-based attacks are more common and easier to exploit than server and network based attacks. Client-based attacks are the attacks that target the client applications or systems, such as web browsers, email clients, or media players, and that can exploit the vulnerabilities or weaknesses of the client software or configuration, or the user behavior or interaction. Client-based attacks are more common and easier to exploit than server and network based attacks, because the client applications or systems are more exposed and accessible to the attackers, the client software or configuration is more diverse and complex to secure, and the user behavior or interaction is more unpredictable and prone to errors or mistakes. Therefore, the organization needs to pursue a plan to mitigate client-based attacks, as they pose a significant security threat or risk to the organization's data, systems, or network. Client privilege administration is inherently weaker than server privilege administration, client hardening and management is easier on clients than on servers, and client-based attacks have higher financial impact are not the best reasons for the organization to pursue a plan to mitigate client-based attacks, as they are not supported by the facts or evidence, or they are not relevant or specific to the client-side security. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1050. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1066.
Question: 73
Answer: C
Refer to the information below to answer the question.
An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement.
The security program can be considered effective when
A. vulnerabilities are proactively identified.
B. audits are regularly performed and reviewed.
C. backups are regularly performed and validated.
D. risk is lowered to an acceptable level.
Refer to the information below to answer the question.
In a Multilevel Security (MLS) system, the following sensitivity labels are used in increasing levels of
sensitivity: restricted, confidential, secret, top secret. Table A lists the clearance levels for four users,
while Table B lists the security classes of four different files.
Explanation:
The security program can be considered effective when the risk is lowered to an acceptable level.
The risk is the possibility or the likelihood of a threat exploiting a vulnerability, and causing a
negative impact or a consequence to the organization’s assets, operations, or objectives. The
security program is a set of activities and initiatives that aim to protect the organization’s
information systems and resources from the security threats and risks, and to support the
organization’s business needs and requirements. The security program can be considered effective
when it achieves its goals and objectives, and when it reduces the risk to a level that is acceptable or
tolerable by the organization, based on its risk appetite or tolerance. Vulnerabilities are proactively
identified, audits are regularly performed and reviewed, and backups are regularly performed and
validated are not the criteria to measure the effectiveness of the security program, as they are
related to the methods or the processes of the security program, not the outcomes or the results of
the security program. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security
and Risk Management, page 24. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security
and Risk Management, page 39.
Question: 74
Question: 75
Answer: D
In a Bell-LaPadula system, which user cannot write to File 3?
A. User A
B. User B
C. User C
D. User D
Refer to the information below to answer the question.
A large, multinational organization has decided to outsource a portion of their Information
Technology (IT) organization to a third-party provider’s facility. This provider will be responsible for
the design, development, testing, and support of several critical, customer-based applications used
by the organization.
What additional considerations are there if the third party is located in a different country?
Explanation:
In a Bell-LaPadula system, a user cannot write data to a file that has a lower security classification
than their own. This is because of the star property (*property) of the Bell-LaPadula model, which
states that a subject with a given security clearance may write data to an object if and only if the
object’s security level is greater than or equal to the subject’s security level. This rule is also known
as the no write-down rule, as it prevents the leakage of information from a higher level to a lower
level. In this question, User D has a Top Secret clearance, and File 3 has a Secret security class.
Therefore, User D cannot write to File 3, as they have a higher clearance than the security class of
File 3, and they would violate the star property by writing down information to a lower level. User A,
User B, and User C can write to File 3, as they have the same or lower clearances than the security
class of File 3, and they would not violate the star property by writing up or across information to a
higher or equal level. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4,
Communication and Network Security, page 498. Official (ISC)2 CISSP CBK Reference, Fifth Edition,
Chapter 4, Communication and Network Security, page 514.
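The two Bell-LaPadula rules can be captured in a few lines of Python; the level names follow the labels used in the question, and the helper function names are illustrative.

```python
# Sensitivity labels from the question, in increasing order of sensitivity.
LEVELS = {"restricted": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Star (*) property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# User D holds Top Secret clearance; File 3 is classified Secret.
assert not can_write("top secret", "secret")   # writing down is blocked
assert can_write("confidential", "secret")     # writing up is allowed
assert not can_read("confidential", "secret")  # reading up is blocked
```

The asymmetry between the two checks is the whole model: reads must go down or level, writes must go up or level, so information can never flow from a higher label to a lower one.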
Question: 76
Answer: D
HOTSPOT
A. The organizational structure of the third party and how it may impact timelines within the
organization
B. The ability of the third party to respond to the organization in a timely manner and with accurate
information
C. The effects of transborder data flows and customer expectations regarding the storage or
processing of their data
D. The quantity of data that must be provided to the third party and how it is to be used
Explanation:
The additional considerations that arise if the third party is located in a different country are the effects of transborder data flows and customer expectations regarding the storage or processing of their data. Transborder data flows are the movements or transfers of data across national or regional borders, such as through the internet, the cloud, or outsourcing. Transborder data flows can have various effects on the security, the privacy, the compliance, or the sovereignty of the data, depending on the laws, the regulations, the standards, or the cultures of the different countries or regions involved. Customer expectations are the beliefs or assumptions of the customers about the quality, the performance, or the satisfaction of the products or the services that they use or purchase. Customer expectations can vary depending on the needs, the preferences, or the values of the customers, and they can influence the reputation, the loyalty, or the profitability of the organization. The organization should consider the effects of transborder data flows and customer expectations regarding the storage or processing of their data, as they can affect the security, the privacy, the compliance, or the sovereignty of the data, and they can impact the reputation, the loyalty, or the profitability of the organization. The organization should also consider the legal, the contractual, the ethical, or the cultural implications of the transborder data flows and customer expectations, and it should communicate, negotiate, or align with the third party and the customers accordingly. The organization should not consider the organizational structure of the third party, its responsiveness, or the quantity of data provided to it as the deciding factors, as those considerations apply to any outsourcing arrangement regardless of where the third party is located.
Question: 77
Answer: C
DRAG DROP
Place the following information classification steps in sequential order.
Identify the component that MOST likely lacks digital accountability related to information access.
Click on the correct device in the image below.
Explanation:
Storage Area Network (SAN): SANs are designed for centralized storage and access control
mechanisms can be implemented to track users and their activities.
Backup Media, Backup Server, Database Server, Web Server: These are typically secured components
within a system and would likely have user authentication and access logging mechanisms.
Laptop: Laptops are portable and might be used outside the organization's secure network. They may
not have the same level of monitoring and logging as the other components, making it more difficult
to hold users accountable for their access to information.
Question: 78
Answer:
Explanation:
The following information classification steps should be placed in sequential order as follows:
Document the information assets
Assign a classification level
Apply the appropriate security markings
Conduct periodic classification reviews
Declassify information when appropriate
Information classification is a process or a method of categorizing the information assets based on
their sensitivity, criticality, or value, and applying the appropriate security controls or measures to
protect them. Information classification can help to ensure the confidentiality, the integrity, and the
availability of the information assets, and to support the security, the compliance, or the business
objectives of the organization. The information classification steps are the activities or the tasks that
are involved in the information classification process, and they should be performed in a sequential
order, as follows:
Document the information assets: This step involves identifying, inventorying, and describing the
information assets that are owned, used, or managed by the organization, such as the data, the
documents, the records, or the media. This step can help to determine the scope, the ownership, or
the characteristics of the information assets, and to prepare for the next steps of the information
classification process.
Assign a classification level: This step involves assigning a classification level or a label to each
information asset, based on the sensitivity, the criticality, or the value of the information asset, and
the impact or the consequence of the unauthorized or the malicious access, disclosure, modification,
or destruction of the information asset. The classification level or the label can indicate the degree or
the extent of the security protection or the handling that the information asset requires, such as the
confidentiality, the integrity, or the availability. The classification level or the label can vary
depending on the organization’s policies, standards, or regulations, but some common examples are
public, internal, confidential, or secret.
Apply the appropriate security markings: This step involves applying the appropriate security
markings or indicators to the information assets, based on the classification level or the label of the
information assets. The security markings or indicators can include the visual, the physical, or the
electronic symbols, signs, or codes that show the classification level or the label of the information
assets, such as the banners, the headers, the footers, the stamps, the stickers, the tags, or the
metadata. The security markings or indicators can help to communicate, inform, or remind the users
or the entities of the security protection or the handling that the information assets require, and to
prevent or reduce the risk of the unauthorized or the malicious access, disclosure, modification, or
destruction of the information assets.
Conduct periodic classification reviews: This step involves conducting periodic classification reviews
or assessments of the information assets, to ensure that the classification level or the label and the
security markings or indicators of the information assets are accurate, consistent, and up-to-date.
The periodic classification reviews or assessments can be triggered by the changes or the events that
affect the sensitivity, the criticality, or the value of the information assets, such as the business
needs, the legal requirements, the security incidents, or the data lifecycle. The periodic classification
reviews or assessments can help to verify, validate, or update the classification level or the label and
the security markings or indicators of the information assets, and to maintain or improve the
security protection or the handling of the information assets.
Declassify information when appropriate: This step involves declassifying or downgrading the information assets when appropriate, to reduce or remove the security protection or the handling that the information assets require, based on the sensitivity, the criticality, or the value of the information assets, and the impact or the consequence of the unauthorized or the malicious access, disclosure, modification, or destruction of the information assets. The declassification or the downgrade of the information assets can be triggered by the changes or the events that affect the sensitivity, the criticality, or the value of the information assets, such as the expiration, the disposal, the release, or the transfer of the information assets. The declassification or the downgrade of the information assets can help to optimize, balance, or streamline the security protection or the handling of the information assets, and to support the security, the compliance, or the business objectives of the organization.
What does secure authentication with logging provide?
A. Data integrity
B. Access accountability
C. Encryption logging format
D. Segregation of duties
Which of the following provides the minimum set of privileges required to perform a job function and restricts the user to a domain with the required privileges?
A. Access based on rules
B. Access based on user's role
C. Access determined by the system
D. Access based on data sensitivity
Explanation:
Secure authentication with logging provides access accountability, which means that the actions of users can be traced and audited. Logging can help identify unauthorized or malicious activities, enforce policies, and support investigations.
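The access accountability described in the explanation above comes from tying every authentication event to a user identity in an audit trail. Below is a minimal sketch using Python's standard logging module; the user names, credential store, and event format are invented for illustration.

```python
import io
import logging

# Write audit records to an in-memory buffer for this example;
# a real system would use an append-only, protected log destination.
audit_buffer = io.StringIO()
handler = logging.StreamHandler(audit_buffer)
handler.setFormatter(logging.Formatter("%(asctime)s user=%(user)s event=%(message)s"))
audit_log = logging.getLogger("audit")
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)
audit_log.propagate = False

CREDENTIALS = {"alice": "correct-horse"}  # illustrative credential store

def authenticate(username: str, password: str) -> bool:
    ok = CREDENTIALS.get(username) == password
    # Log success AND failure, so every access attempt is attributable.
    audit_log.info("login_success" if ok else "login_failure",
                   extra={"user": username})
    return ok

authenticate("alice", "correct-horse")
authenticate("mallory", "guess")

trail = audit_buffer.getvalue()
assert "user=alice event=login_success" in trail
assert "user=mallory event=login_failure" in trail
```

Because each record names the authenticated identity, the trail supports the tracing and auditing of user actions that the explanation describes.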
Question: 79
Question: 80
Answer: B
Topic 11, Exam Set C
Discretionary Access Control (DAC) restricts access according to
A. data classification labeling.
B. page views within an application.
C. authorizations granted to the user.
D. management accreditation.
Explanation:
Discretionary Access Control (DAC) restricts access according to authorizations granted to the user.
DAC is a type of access control that allows the owner or creator of a resource to decide who can
access it and what level of access they can have. DAC uses access control lists (ACLs) to assign
permissions to resources, and users can pass or change their permissions to other users
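A minimal DAC illustration in Python: the resource owner holds the ACL and decides, at their discretion, who gets which permissions. The class, user names, and permission strings are invented for the example.

```python
class Resource:
    """A DAC-protected object: its owner manages the access control list."""

    def __init__(self, owner: str):
        self.owner = owner
        self.acl = {owner: {"read", "write", "grant"}}

    def grant(self, grantor: str, user: str, permission: str) -> None:
        # Discretionary: only holders of 'grant' may pass permissions on.
        if "grant" not in self.acl.get(grantor, set()):
            raise PermissionError(f"{grantor} may not delegate access")
        self.acl.setdefault(user, set()).add(permission)

    def allowed(self, user: str, permission: str) -> bool:
        return permission in self.acl.get(user, set())

doc = Resource(owner="alice")
doc.grant("alice", "bob", "read")
assert doc.allowed("bob", "read")
assert not doc.allowed("bob", "write")   # access limited to what was granted
```

The key DAC trait is visible in grant(): authorization flows from the owner's decisions recorded in the ACL, not from system-enforced labels.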
Explanation:
Access based on user’s role provides the minimum set of privileges required to perform a job
function and restricts the user to a domain with the required privileges. This is also known as role-
based access control (RBAC), which is a method of enforcing the principle of least privilege. RBAC
assigns permissions to roles rather than individual users, and users are assigned roles based on their
responsibilities and qualifications
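Role-based least privilege can be sketched as a role-to-permission table; the roles, users, and permission names below are invented for illustration.

```python
# Permissions are attached to roles, not to individual users.
ROLE_PERMISSIONS = {
    "help_desk": {"reset_password", "view_tickets"},
    "auditor": {"read_logs"},
}

# Users are assigned a role based on their job function.
USER_ROLES = {"dana": "auditor"}

def authorized(user: str, action: str) -> bool:
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

# Dana receives exactly the auditor privileges and nothing more.
assert authorized("dana", "read_logs")
assert not authorized("dana", "reset_password")
```

Because privileges attach to the role, changing a user's duties means reassigning the role, and the user's domain of access shrinks or grows accordingly without editing individual permissions.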
HOTSPOT
In the network design below, where is the MOST secure Local Area Network (LAN) segment to deploy
a Wireless Access Point (WAP) that provides contractors access to the Internet and authorized
enterprise services?
Question: 81
Question: 82
Answer: B
Answer: C
Answer: LAN 4
Explanation:
The most secure LAN segment to deploy a WAP that provides contractors access to the Internet and
authorized enterprise services is LAN 4. A WAP is a device that enables wireless devices to connect to
a wired network using Wi-Fi, Bluetooth, or other wireless standards. A WAP can provide convenience
and mobility for the users, but it can also introduce security risks, such as unauthorized access,
eavesdropping, interference, or rogue access points. Therefore, a WAP should be deployed in a
secure LAN segment that can isolate the wireless traffic from the rest of the network and apply
appropriate security controls and policies. LAN 4 is connected to the firewall that separates it from
the other LAN segments and the Internet. This firewall can provide network segmentation, filtering,
and monitoring for the WAP and the wireless devices. The firewall can also enforce the access rules
DRAG DROP
Match the objectives to the assessment questions in the governance domain of Software Assurance
Maturity Model (SAMM).
The correct matches are as follows:
Secure Architecture -> Do you advertise shared security services with guidance for project teams?
Education & Guidance -> Are most people tested to ensure a baseline skill-set for secure
development practices?
Strategy & Metrics -> Does most of the organization know about what’s required based on risk
ratings?
Vulnerability Management -> Are most project teams aware of their security point(s) of contact and
response team(s)?
Comprehensive Explanation: These matches are based on the definitions and objectives of the four
governance domain practices in the Software Assurance Maturity Model (SAMM). SAMM is a
framework to help organizations assess and improve their software security posture. The governance
domain covers the organizational aspects of software security, such as policies, metrics, and roles.
Secure Architecture: This practice aims to provide a consistent and secure design for software
projects, as well as reusable security services and components. The assessment question measures
the availability and guidance of these shared security services for project teams.
and policies for the contractors, such as allowing them to access the Internet and some authorized
enterprise services, but not the other LAN segments that may contain sensitive or critical data or
systems. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and
Network Security, p. 317; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication
and Network Security, p. 437.
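The segmentation described above amounts to a small default-deny rule base on the firewall in front of LAN 4. The zone names and rules below are invented to illustrate the idea of allowing contractors out to the Internet and to authorized services while blocking everything else.

```python
# First-match rule base for the contractor wireless segment.
RULES = [
    ("contractor_wlan", "internet", "allow"),
    ("contractor_wlan", "enterprise_services", "allow"),
]

def evaluate(src_zone: str, dst_zone: str) -> str:
    for rule_src, rule_dst, action in RULES:
        if (src_zone, dst_zone) == (rule_src, rule_dst):
            return action
    return "deny"  # anything not explicitly allowed is blocked

assert evaluate("contractor_wlan", "internet") == "allow"
assert evaluate("contractor_wlan", "enterprise_services") == "allow"
assert evaluate("contractor_wlan", "internal_lan") == "deny"
```

The default-deny fallback is the important design choice: sensitive internal segments stay unreachable from the WAP unless a rule explicitly says otherwise.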
Question: 83
Answer:
Explanation:
What is the PRIMARY difference between security policies and security procedures?
A. Policies are used to enforce violations, and procedures create penalties
B. Policies point to guidelines, and procedures are more contractual in nature
C. Policies are included in awareness training, and procedures give guidance
D. Policies are generic in nature, and procedures contain operational details
Education & Guidance: This practice aims to raise the awareness and skills of the staff involved in
software development, as well as provide them with the necessary tools and resources. The
assessment question measures the level of testing and verification of the staff’s secure development
knowledge and abilities.
Strategy & Metrics: This practice aims to define and communicate the software security strategy,
goals, and priorities, as well as measure and monitor the progress and effectiveness of software
security activities. The assessment question measures the degree of awareness and alignment of the
organization with the risk-based requirements for software security.
Vulnerability Management: This practice aims to identify and remediate the vulnerabilities in the
software products, as well as prevent or mitigate the impact of potential incidents. The assessment
question measures the level of awareness and collaboration of the project teams with the security
point(s) of contact and response team(s).
Reference: SAMM Governance Domain; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8,
page 452
Question: 84
Answer: D
Which of the following is an advantage of on premise Credential Management Systems?
A. Improved credential interoperability
B. Control over system configuration
C. Lower infrastructure capital costs
D. Reduced administrative overhead
The primary difference between security policies and security procedures is that policies are generic
in nature, and procedures contain operational details. Security policies are the high-level statements
or rules that define the goals, objectives, and requirements of security for an organization. Security
procedures are the low-level steps or actions that specify how to implement, enforce, and comply
with the security policies.
A. Policies are used to enforce violations, and procedures create penalties is not a correct answer, as
it confuses the roles and functions of policies and procedures. Policies are used to create penalties,
and procedures are used to enforce violations. Penalties are the consequences or sanctions that are
imposed for violating the security policies, and they are defined by the policies. Enforcement is the
process or mechanism of ensuring compliance with the security policies, and it is carried out by the
procedures.
B. Policies point to guidelines, and procedures are more contractual in nature is not a correct
answer, as it misrepresents the nature and purpose of policies and procedures. Policies are not
merely guidelines, but rather mandatory rules that bind the organization and its stakeholders to
follow the security principles and standards. Procedures are not contractual in nature, but rather
operational in nature, as they describe the specific tasks and activities that are necessary to achieve
the security goals and objectives.
C. Policies are included in awareness training, and procedures give guidance is not a correct answer, as
it implies that policies and procedures have different audiences and functions. Policies and
procedures are both included in awareness training, and they both give guidance. Awareness training
is the process of educating and informing the organization and its stakeholders about the security
policies and procedures, and their roles and responsibilities in security. Guidance is the process of
providing direction and advice on how to comply with the security policies and procedures, and how
to handle security issues and incidents.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 17; Official (ISC)2 CISSP CBK
Reference, Fifth Edition, Chapter 1, page 13
Question: 85
In order for a security policy to be effective within an organization, it MUST include
A. strong statements that clearly define the problem.
B. a list of all standards that apply to the policy.
C. owner information and date of last revision.
D. disciplinary measures for non compliance.

Question: 86

Answer: B

Explanation:
The advantage of on-premises credential management systems is that they provide more control over
the system configuration and customization. On-premises credential management systems store and
manage credentials, such as usernames, passwords, tokens, or certificates, of users and devices within
an organization's own network or infrastructure. They offer flexibility and security because the
organization can tailor the system to its specific needs and enforce its own policies and standards for
credential management.
A. Improved credential interoperability is an advantage of cloud-based credential management
systems, which store and manage credentials on a third-party cloud service provider's infrastructure
and can support many credential and device types while scaling with changing demand and workload.
C. Lower infrastructure capital costs is also an advantage of cloud-based systems: the organization
does not purchase, install, or maintain its own hardware or software, and instead pays a subscription
or usage fee to the cloud service provider.
D. Reduced administrative overhead is likewise an advantage of cloud-based systems, because the
cloud service provider handles tasks such as backup, recovery, patching, and updating as part of the
service.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 346; Official (ISC)2 CISSP
CBK Reference, Fifth Edition, Chapter 6, page 307
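As a small illustration of the configuration control an on-premises system provides, the sketch below (hypothetical, Python standard library only) stores credentials as salted PBKDF2 hashes under an iteration policy the organization chooses itself:

```python
import hashlib
import hmac
import os

# Locally enforced policy: the organization picks the algorithm and iteration
# count instead of accepting a provider's defaults. (Illustrative value.)
ITERATIONS = 100_000

def enroll(store: dict, username: str, password: str) -> None:
    """Store a salted PBKDF2-HMAC-SHA256 hash rather than the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    store[username] = (salt, digest)

def verify(store: dict, username: str, password: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    if username not in store:
        return False
    salt, digest = store[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Because both the store and the hashing policy live inside the organization's own infrastructure, either can be changed without depending on an outside provider.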
To protect auditable information, which of the following MUST be configured to only allow read
access?
A. Logging configurations
B. Transaction log files
C. User account configurations
D. Access control lists (ACL)
Answer: D

Explanation:
In order for a security policy to be effective within an organization, it must include disciplinary
measures for non compliance. A security policy is a document that defines and communicates the
security goals, objectives, and expectations of the organization and gives direction to its security
activities, processes, and functions. Disciplinary measures for non compliance are the actions or
consequences the organization imposes on those who violate or disregard the policy. They make the
policy effective because they deter behavior that could jeopardize the organization's security and
enforce accountability for complying with the policy.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 18; Official (ISC)2 CISSP CBK
Reference, Fifth Edition, Chapter 1, page 26
Explanation:
To protect auditable information, transaction log files must be configured to only allow read access.
Transaction log files are files that record and store the details or the history of the transactions or the
activities that occur within a system or a database, such as the date, the time, the user, the action, or
the outcome. Transaction log files are important for auditing purposes, as they can provide the
evidence or the proof of the transactions or the activities that occur within a system or a database,
and they can also support the recovery or the restoration of the system or the database in case of a
failure or a corruption. To protect auditable information, transaction log files must be configured to
only allow read access, which means that authorized users or devices can view the transaction log
files but cannot modify, delete, or overwrite them. This prevents tampering with, alteration, or
destruction of the auditable information and preserves its integrity, accuracy, and reliability.

Question: 87

Answer: B

A security professional is asked to provide a solution that restricts a bank teller to only perform a
savings deposit transaction but allows a supervisor to perform corrections after the transaction.
Which of the following is the MOST effective solution?
A. Access is based on rules.
B. Access is determined by the system.
C. Access is based on user's role.
D. Access is based on data sensitivity.
A. Logging configurations are not the auditable information itself; they are the settings that control
how logging is performed, such as the frequency, format, location, or retention of the log files. They
affect the quality and quantity of the auditable information, but they are not what must be made
read-only.
C. User account configurations are the settings that define and manage the accounts and identities of
users and devices, such as the username, password, role, or permissions. They affect the security of
the system, but they are not the auditable information itself.
D. Access control lists (ACLs) are the data structures that store the access control rules for a system
or resource, such as a file, folder, or network, specifying permissions such as read, write, execute, or
delete. They enforce access to the resource, but they are not the auditable information itself.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 197; Official (ISC)2 CISSP
CBK Reference, Fifth Edition, Chapter 7, page 354
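A minimal sketch of the read-only idea using POSIX-style file permissions (illustrative only; a production system would rely on the platform's ACLs, append-only flags, or dedicated log-management tooling):

```python
import os
import stat
import tempfile

# Write one audit record, then drop all write bits so the log can be read
# but not modified or overwritten through normal file access.
fd, log_path = tempfile.mkstemp(suffix=".log")
with os.fdopen(fd, "w") as f:
    f.write("2024-01-02 09:15 teller1 DEPOSIT savings 100.00\n")

os.chmod(log_path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # mode 0o444

mode = stat.S_IMODE(os.stat(log_path).st_mode)
assert mode & stat.S_IWUSR == 0   # no owner write permission remains
with open(log_path) as f:          # reading still succeeds
    record = f.read()
```

Note that a sufficiently privileged account can still change the permissions back, which is why the explanation stresses that read-only configuration is one control among several for protecting auditable information.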
Question: 88
Answer: C
The most effective solution for restricting a bank teller to savings deposit transactions while allowing
a supervisor to perform corrections afterward is access based on the user's role, i.e., Role Based
Access Control (RBAC). RBAC grants or denies access to resources based on the user's role or function
within the organization, such as teller, supervisor, or manager, rather than on individual identities. It
prevents users from performing actions outside their role, and it simplifies administration because
predefined permission sets are assigned to roles rather than to individual users such as John, Mary,
or Bob. In this scenario, the teller role is limited to the savings deposit transaction, while the
supervisor role carries the additional permission to perform corrections after the transaction.
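The role assignments described above can be sketched in a few lines (role and permission names are hypothetical):

```python
# Permissions attach to roles; users gain permissions only via their assigned role.
ROLE_PERMISSIONS = {
    "teller": {"savings_deposit"},
    "supervisor": {"savings_deposit", "transaction_correction"},
}

USER_ROLES = {"alice": "teller", "bob": "supervisor"}

def is_allowed(user: str, action: str) -> bool:
    """Grant an action only if the user's assigned role carries it."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "savings_deposit"))         # True: teller may deposit
print(is_allowed("alice", "transaction_correction"))  # False: teller may not correct
print(is_allowed("bob", "transaction_correction"))    # True: supervisor may correct
```

Adding a new teller requires only one role assignment, not a fresh set of per-user permissions, which is the administrative simplification the explanation describes.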
A. Access based on rules grants or denies access according to conditions defined by the system or its
administrator, such as time, location, or frequency. It can block access that does not satisfy those
conditions, but it does not take the user's role within the organization into account, and defining
rules that correctly distinguish each transaction type (savings deposit, checking withdrawal, loan
application) for each class of user would be complex and error-prone.
B. Access determined by the system describes a model in which the system itself, for example an
algorithm or a machine learning component, decides whether to grant access. Such a mechanism can
block unauthorized access, but it does not take the user's role within the organization into account,
and relying on an opaque automated judgment for access control is unpredictable and difficult to
audit.
D. Access based on data sensitivity grants or denies access according to the classification of the data,
such as public, confidential, or secret, and can block users who lack the necessary clearance.
However, it does not take the user's role within the organization into account, and it would be
complex to assign meaningful sensitivity classifications to the many transaction and account types
involved (savings, checking, or loan accounts; personal, business, or government accounts).
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 147; Official (ISC)2 CISSP
CBK Reference, Fifth Edition, Chapter 5, page 212

Question: 89

DRAG DROP
Match the name of the access control model with its associated restriction.
Drag each access control model to its appropriate restriction on the right.

Answer:
The correct matches are as follows:
Mandatory Access Control (MAC) -> End user cannot set controls
Discretionary Access Control (DAC) -> Subject has total control over objects
Role Based Access Control (RBAC) -> Dynamically assigns permissions to particular duties based on
job function
Rule based access control -> Dynamically assigns roles to subjects based on criteria assigned by a
custodian

Explanation:
The correct matches follow from the definitions and characteristics of each access control model:
Mandatory Access Control (MAC) grants or denies access to an object based on the security labels of
the subject and the object, and the security policy enforced by the system. The end user cannot set
or change the security labels or the policy, as they are determined by a central authority.
Discretionary Access Control (DAC) grants or denies access to an object based on the identity and
permissions of the subject, at the discretion of the object's owner. The subject has total control over
the objects they own and can grant or revoke access rights to other subjects.
Role Based Access Control (RBAC) grants or denies access to an object based on the role of the
subject and the permissions assigned to that role. The role is assigned to the subject based on job
function, and the permissions are determined by the business rules and policies of the organization.
Rule based access control grants or denies access to an object based on rules or criteria defined by a
custodian or an administrator. The rules are applied dynamically to the subject based on attributes
such as location, time, or device, and access rights are granted or revoked accordingly.

Question: 90

In the Software Development Life Cycle (SDLC), maintaining accurate hardware and software
inventories is a critical part of
A. systems integration.
B. risk management.
C. quality assurance.
D. change management.
Answer: D

Explanation:
According to the CISSP CBK Official Study Guide, maintaining accurate hardware and software
inventories is a critical part of change management. The SDLC is a structured process used to design,
develop, and test good-quality software; it consists of phases covering the entire life of the software,
from initial concept to deployment and maintenance. Change management is the process of
controlling the changes made to the software or the system during the SDLC through defined
policies, procedures, and tools. It protects the security, integrity, quality, and performance of the
system by preventing or minimizing the risks of changes that could introduce errors, defects, or
vulnerabilities. Accurate hardware and software inventories are critical to change management
because they provide a reliable baseline for identifying and tracking the components involved in the
system, and the changes made to them, including each component's name, description, version, and
status. This enables the monitoring, auditing, and evaluation of every change.
Systems integration is the process of combining hardware and software components through
interfaces, protocols, and standards so that they operate together as intended. It may benefit from
change management, but maintaining inventories is not its main objective. Risk management is the
process of identifying, analyzing, evaluating, and treating the risks that may affect the system; it too
may draw on change management, but inventory maintenance is not its primary purpose. Quality
assurance is the process of verifying the quality and performance of the system through testing,
validation, and verification; it also may rely on change management, but maintaining inventories is
not its main objective.

Topic 12, New Questions B

Question: 91

Which of the following is considered a secure coding practice?
A. Use concurrent access for shared variables and resources
B. Use checksums to verify the integrity of libraries
C. Use new code for common tasks
D. Use dynamic execution functions to pass user supplied data

Answer: B
As part of the security assessment plan, the security professional has been asked to use a negative
testing strategy on a new website. Which of the following actions would be performed?
A. Use a web scanner to scan for vulnerabilities within the website.
B. Perform a code review to ensure that the database references are properly addressed.
C. Establish a secure connection to the web server to validate that only the approved ports are open.
D. Enter only numbers in the web form and verify that the website prompts the user to enter a valid
input.
Explanation:
A negative testing strategy is a type of software testing that aims to verify how the system handles
invalid or unexpected inputs, errors, or conditions. A negative testing strategy can help identify
potential bugs, vulnerabilities, or failures that could compromise the functionality, security, or
usability of the system. One example of a negative testing strategy is to enter only numbers in a web
form that expects a text input, such as a name or an email address, and verify that the website
prompts the user to enter a valid input. This can help ensure that the website has proper input
validation and error handling mechanisms, and that it does not accept or process any malicious or
malformed data. A web scanner, a code review, and a secure connection are not examples of a
negative testing strategy, as they do not involve providing invalid or unexpected inputs to the system.
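The numbers-in-a-name-field case can be expressed as a tiny negative test (the validator below is hypothetical; the point is that invalid input must be rejected rather than processed):

```python
import re

def validate_name(value: str) -> bool:
    """Accept letters, spaces, apostrophes, and hyphens, starting with a letter."""
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z '\-]*", value))

# Positive test: expected, valid input is accepted.
assert validate_name("Ada Lovelace")
# Negative tests: invalid or unexpected input is rejected, not processed.
assert not validate_name("12345")   # numbers only, as in the scenario
assert not validate_name("")        # empty submission
```

A website with proper input validation would respond to the rejected cases by prompting the user for valid input instead of accepting the data.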
A secure coding practice is a technique or guideline that aims to prevent or mitigate common
software vulnerabilities and ensure the quality, reliability, and security of software applications. One
example of a secure coding practice is to use checksums to verify the integrity of libraries. A
checksum is a value that is derived from applying a mathematical function or algorithm to a data set,
such as a file or a message. A checksum can be used to detect any changes or errors in the data, such
as corruption, modification, or tampering. Libraries are collections of precompiled code or functions
that can be reused by software applications. Libraries can be static or dynamic, depending on
whether they are linked to the application at compile time or run time. Libraries can be vulnerable to
attacks such as code injection, code substitution, or code reuse, where an attacker can alter or
replace the library code with malicious code. By using checksums to verify the integrity of libraries, a
software developer can ensure that the libraries are authentic and have not been compromised or
corrupted. Checksums can also help to identify and resolve any errors or inconsistencies in the
libraries. Other examples of secure coding practices are to use strong data types, input validation,
output encoding, error handling, encryption, and code review.
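The checksum idea can be sketched as follows (illustrative; a real deployment would pin expected digests from a trusted manifest or a package signature):

```python
import hashlib
import hmac

def file_checksum(path: str) -> str:
    """SHA-256 digest of a file, read in chunks so large libraries stream."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_library(path: str, expected_hex: str) -> bool:
    """Load-time gate: accept the library only if its digest matches."""
    return hmac.compare_digest(file_checksum(path), expected_hex)
```

If an attacker substitutes or corrupts the library, its digest no longer matches the recorded value and the load is refused.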
Question: 92

Answer: D

Question: 93
A. Memory review
B. Code review
C. Message division
D. Buffer division
Explanation:
Code review is the technique that would minimize the ability of an attacker to exploit a buffer
overflow. A buffer overflow is a type of vulnerability that occurs when a program writes more data to
a buffer than it can hold, causing the data to overwrite the adjacent memory locations, such as the
return address or the stack pointer. An attacker can exploit a buffer overflow by injecting malicious
code or data into the buffer, and altering the execution flow of the program to execute the malicious
code or data. Code review is the technique that would minimize the ability of an attacker to exploit a
buffer overflow, as it involves examining the source code of the program to identify and fix any
errors, flaws, or weaknesses that may lead to buffer overflow vulnerabilities. Code review can help
to detect and prevent the use of unsafe or risky functions, such as gets, strcpy, or sprintf, that do not
perform any boundary checking on the buffer, and replace them with safer or more secure
alternatives, such as fgets, strncpy, or snprintf, that limit the amount of data that can be written to
the buffer. Code review can also help to enforce and verify the use of secure coding practices and
standards, such as input validation, output encoding, error handling, or memory management, that
can reduce the likelihood or impact of buffer overflow vulnerabilities. Memory review, message
division, and buffer division are not techniques that would minimize the ability of an attacker to
exploit a buffer overflow, although they may be related or useful concepts. Memory review is not a
technique, but a process of analyzing the memory layout or content of a program, such as the stack,
the heap, or the registers, to understand or debug its behavior or performance. Memory review may
help to identify or investigate the occurrence or effect of a buffer overflow, but it does not prevent or
mitigate it. Message division is not a technique, but a concept of splitting a message into smaller or
fixed-size segments or blocks, such as in cryptography or networking. Message division may help to
improve the security or efficiency of the message transmission or processing, but it does not prevent
or mitigate buffer overflow. Buffer division is not a technique, but a concept of dividing a buffer into
smaller or separate buffers, such as in buffering or caching. Buffer division may help to optimize the
memory usage or allocation of the program, but it does not prevent or mitigate buffer overflow.
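One concrete code-review aid is a scan for the unsafe functions the explanation names; the sketch below flags unbounded C string calls (a toy check for illustration, not a real static analyzer):

```python
import re

# Classic C functions that perform no bounds checking on the destination buffer.
UNSAFE_CALLS = {"gets", "strcpy", "strcat", "sprintf"}
PATTERN = re.compile(r"\b(" + "|".join(sorted(UNSAFE_CALLS)) + r")\s*\(")

def review(source: str) -> list[tuple[int, str]]:
    """Return (line_number, function_name) for each unsafe call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings
```

A call like strcpy(buf, s) is reported with its line number, while the bounded snprintf(buf, sizeof buf, "%s", s) passes clean, mirroring the safe replacements the explanation recommends.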
Which of the following would MINIMIZE the ability of an attacker to exploit a buffer overflow?

Answer: B

Question: 94

Which of the following is the GREATEST benefit of implementing a Role Based Access Control (RBAC)
system?
A. Integration using Lightweight Directory Access Protocol (LDAP)
B. Form-based user registration process
C. Integration with the organization's Human Resources (HR) system
D. A considerably simpler provisioning process
Answer: D

Explanation:
The greatest benefit of implementing a Role Based Access Control (RBAC) system is a considerably
simpler provisioning process. Provisioning is the process of creating, modifying, or deleting the user
accounts and access rights on a system or a network. Provisioning can be a complex and tedious task,
especially in large or dynamic organizations that have many users, systems, and resources. RBAC is a
type of access control model that assigns permissions to users based on their roles or functions
within the organization, rather than on their individual identities or attributes. RBAC can simplify the
provisioning process by reducing the administrative overhead and ensuring the consistency and
accuracy of the user accounts and access rights. RBAC can also provide some benefits for security,
such as enforcing the principle of least privilege, facilitating the separation of duties, and supporting
the audit and compliance activities. Integration using Lightweight Directory Access Protocol (LDAP),
form-based user registration process, and integration with the organization's Human Resources (HR)
system are not the greatest benefits of implementing a RBAC system, although they may be related
or useful features. Integration using LDAP is a technique that uses a standard protocol to
communicate and exchange information with a directory service, such as Active Directory or
OpenLDAP. LDAP can provide some benefits for access control, such as centralizing and standardizing
the user accounts and access rights, supporting the authentication and authorization mechanisms,
and enabling the interoperability and scalability of the systems or the network. However, integration
using LDAP is not a benefit of RBAC, as it is not a feature or a requirement of RBAC, and it can be
used with other access control models, such as discretionary access control (DAC) or mandatory
access control (MAC). Form-based user registration process is a technique that uses a web-based
form to collect and validate the user information and preferences, such as name, email, password, or
role. Form-based user registration process can provide some benefits for access control, such as
simplifying and automating the user account creation, enhancing the user experience and
satisfaction, and supporting the self-service and delegation capabilities. However, form-based user
registration process is not a benefit of RBAC, as it is not a feature or a requirement of RBAC, and it can
be used with other access control models, such as DAC or MAC. Integration with the organization's HR
system is a technique that uses a software application or a service to synchronize and update the user
accounts and access rights with the HR data, such as employee records, job titles, or organizational
units. Integration with the organization's HR system can provide some benefits for access control,
such as streamlining and automating the provisioning process, improving the accuracy and timeliness
of the user accounts and access rights, and supporting the identity lifecycle management activities.
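The role indirection that simplifies provisioning can be sketched in a few lines of Python (the role names and permission strings are hypothetical):

```python
# Permissions attach to roles, users attach to roles: provisioning a user is
# a single role assignment rather than many individual permission grants.
ROLE_PERMISSIONS = {
    "help_desk": {"ticket:read", "ticket:update", "password:reset"},
    "accounts_payable": {"invoice:read", "invoice:approve"},
}
user_roles = {}  # user -> set of role names

def provision(user, role):
    user_roles.setdefault(user, set()).add(role)

def has_permission(user, permission):
    return any(permission in ROLE_PERMISSIONS[role]
               for role in user_roles.get(user, ()))

provision("alice", "help_desk")
print(has_permission("alice", "password:reset"))   # True
print(has_permission("alice", "invoice:approve"))  # False
```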
However, integration with the organization's HR system is not a benefit of RBAC, as it is not a feature
or a requirement of RBAC, and it can be used with other access control models, such as DAC or MAC.

Question: 95
Which of the following combinations would MOST negatively affect availability?
A. Denial of Service (DoS) attacks and outdated hardware
B. Unauthorized transactions and outdated hardware
C. Fire and accidental changes to data
D. Unauthorized transactions and denial of service attacks

Answer: A

Explanation:
The combination that would most negatively affect availability is denial of service (DoS) attacks and
outdated hardware. Availability is the property or the condition of a system or a network to be
accessible and usable by the authorized users or customers, whenever and wherever they need it.
Availability can be measured by various metrics, such as uptime, downtime, response time, or
reliability. Availability can be affected by various factors, such as hardware, software, network,
human, or environmental factors. Denial of service (DoS) attacks and outdated hardware are two
factors that can negatively affect availability, as they can cause or contribute to the following
consequences:
Denial of service (DoS) attacks are malicious attacks that aim to disrupt or degrade the availability of
a system or a network, by overwhelming or exhausting its resources, such as bandwidth, memory, or
processing power, with a large number or a high frequency of requests or packets. Denial of service
(DoS) attacks can prevent or delay the legitimate users or customers from accessing or using the
system or the network, and they can cause errors, failures, or crashes to the system or the network.
Outdated hardware is hardware that is old, obsolete, or unsupported, and that does not meet the
current or the expected requirements or standards of the system or the network, such as
performance, functionality, or security. Outdated hardware can reduce or limit the availability of
the system or the network, as it can cause malfunctions, breakdowns, or incompatibilities to the
system or the network, and it can be difficult or costly to maintain, repair, or replace.
The combination of denial of service (DoS) attacks and outdated hardware would most negatively
affect availability, as they can have a synergistic or a cumulative effect on the system or the network,
and they can exacerbate or amplify each other's impact. For example, denial of service (DoS) attacks
can exploit or target the vulnerabilities or the weaknesses of the outdated hardware, and they can
cause more damage or disruption to the system or the network. Outdated hardware can increase or
prolong the susceptibility or the recovery of the system or the network to the denial of service (DoS)
attacks, and they can reduce or hinder the resilience or the mitigation of the system or the network
to the denial of service (DoS) attacks. Unauthorized transactions and outdated hardware, fire and
accidental changes to data, and unauthorized transactions and denial of service attacks are not the
combinations that would most negatively affect availability, although they may be related or possible
combinations. Unauthorized transactions and outdated hardware are two factors that can negatively
affect the confidentiality and the integrity of the data, rather than the availability of the system or
the network, as they can cause or contribute to the following consequences:
Unauthorized transactions are malicious or improper activities that involve accessing, modifying, or
transferring the data on a system or a network, without the permission or the consent of the owner
or the custodian of the data, such as theft, fraud, or sabotage. Unauthorized transactions can
compromise or damage the confidentiality and the integrity of the data, as they can expose or
disclose the data to unauthorized parties, or they can alter or destroy the data.
Outdated hardware is hardware that is old, obsolete, or unsupported, and that does not meet the
current or the expected requirements or standards of the system or the network, such as
performance, functionality, or security. Outdated hardware can compromise or damage the
confidentiality and the integrity of the data, as it can be vulnerable or susceptible to attacks or
errors, or incompatible or inconsistent with the data.
Fire and accidental changes to data are two factors that can negatively affect the availability and the
integrity of the data, rather than the availability of the system or the network, as they can cause or
contribute to the following consequences:
Fire is a physical or an environmental hazard that involves the combustion or the burning of a
material or a substance, such as wood, paper, or plastic, and that produces heat, light, or smoke. Fire
can damage or destroy the availability and the integrity of the data, as it can consume or melt the
physical media or devices that store the data, such as hard disks, tapes, or CDs, or it can corrupt or
erase the data on the media or devices.
Accidental changes to data are human or operational errors that involve modifying or altering the
data on a system or a network, without the intention or the awareness of the user or the operator,
such as typos, misconfigurations, or overwrites. Accidental changes to data can damage or destroy
the availability and the integrity of the data, as they can make the data inaccessible or unusable, or
they can make the data inaccurate or unreliable.
Unauthorized transactions and denial of service attacks are two factors that can negatively affect the
confidentiality and the availability of the system or the network, rather than the availability of the
system or the network, as they can cause or contribute to the following consequences:
Unauthorized transactions are malicious or improper activities that involve accessing, modifying, or
transferring the data on a system or a network, without the permission or the consent of the owner
or the custodian of the data, such as theft, fraud, or sabotage. Unauthorized transactions can
compromise or damage the confidentiality and the availability of the system or the network, as they
can expose or disclose the data to unauthorized parties, or they can consume or divert the resources
of the system or the network.
Denial of service (DoS) attacks are malicious attacks that aim to disrupt or degrade the availability of
a system or a network, by overwhelming or exhausting its resources, such as bandwidth, memory, or
processing power, with a large number or a high frequency of requests or packets. Denial of service
(DoS) attacks can compromise or damage the confidentiality and the availability of the system or the
network, as they can prevent or delay the legitimate users or customers from accessing or using the
system or the network, and they can cause errors, failures, or crashes to the system or the network.

Question: 96
Which of the following is a characteristic of an internal audit?
A. An internal audit is typically shorter in duration than an external audit.
B. The internal audit schedule is published to the organization well in advance.
C. The internal auditor reports to the Information Technology (IT) department
D. Management is responsible for reading and acting upon the internal audit results

Answer: D

Explanation:
A characteristic of an internal audit is that management is responsible for reading and acting upon
the internal audit results. An internal audit is an independent and objective evaluation or assessment
of the internal controls, processes, or activities of an organization, performed by a group of auditors
or professionals who are part of the organization, such as the internal audit department or the audit
committee. An internal audit can provide some benefits for security, such as enhancing the accuracy
and the reliability of the operations, preventing or detecting fraud or errors, and supporting the audit
and the compliance activities. An internal audit can involve various steps and roles, such as:
Planning, which is the preparation or the design of the internal audit, by the internal auditor or the
audit team, who are responsible for conducting or performing the internal audit. Planning includes
defining the objectives, scope, criteria, and methodology of the internal audit, as well as identifying
and analyzing the risks and the stakeholders of the internal audit.
Execution, which is the implementation or the performance of the internal audit, by the internal
auditor or the audit team, who are responsible for collecting and evaluating the evidence or the data
related to the internal audit, using various tools and techniques, such as interviews, observations,
tests, or surveys.
Reporting, which is the communication or the presentation of the internal audit results, by the
internal auditor or the audit team, who are responsible for preparing and delivering the internal
audit report, which contains the findings, conclusions, and recommendations of the internal audit, to
the management or the audit committee, who are the primary users or recipients of the internal
audit report.
Follow-up, which is the verification or the validation of the internal audit results, by the management
or the audit committee, who are responsible for reading and acting upon the internal audit report, as
well as by the internal auditor or the audit team, who are responsible for monitoring and reviewing
the actions taken by the management or the audit committee, based on the internal audit report.
Management is responsible for reading and acting upon the internal audit results, as they are the
primary users or recipients of the internal audit report, and they have the authority and the
accountability to implement or execute the recommendations or the improvements suggested by
the internal audit report, as well as to report or disclose the internal audit results to the external
parties, such as the regulators, the shareholders, or the customers. An internal audit is typically
shorter in duration than an external audit, the internal audit schedule is published to the
organization well in advance, and the internal auditor reports to the audit committee are not
characteristics of an internal audit, although they may be related or possible aspects of an internal
audit. An internal audit is typically shorter in duration than an external audit, as it is performed by a
group of auditors or professionals who are part of the organization, and who have more familiarity
and access to the internal controls, processes, or activities of the organization, compared to a group
of auditors or professionals who are outside the organization, and who have less familiarity and
access to the internal controls, processes, or activities of the organization. However, an internal audit
is typically shorter in duration than an external audit is not a characteristic of an internal audit, as it is
not a defining or a distinguishing feature of an internal audit, and it may vary depending on the type
or the nature of the internal audit, such as the objectives, scope, criteria, or methodology of the
internal audit. The internal audit schedule is published to the organization well in advance, as it is a
good practice or a technique that can help to ensure the transparency and the accountability of the
internal audit, as well as to facilitate the coordination and the cooperation of the internal audit
stakeholders, such as the management, the audit committee, the internal auditor, or the audit team.

Question: 97
Proven application security principles include which of the following?
A. Minimizing attack surface area
B. Hardening the network perimeter
C. Accepting infrastructure security controls
D. Developing independent modules
Explanation:
Minimizing attack surface area is a proven application security principle that aims to reduce the
exposure or the vulnerability of an application to potential attacks, by limiting or eliminating the
unnecessary or unused features, functions, or services of the application, as well as the access or the
interaction of the application with other applications, systems, or networks. Minimizing attack
surface area can provide some benefits for security, such as enhancing the performance and the
functionality of the application, preventing or mitigating some types of attacks or vulnerabilities, and
supporting the audit and the compliance activities. Hardening the network perimeter, accepting
infrastructure security controls, and developing independent modules are not proven application
security principles, although they may be related or useful concepts or techniques. Hardening the
network perimeter is a network security concept or technique that aims to protect the network from
external or unauthorized attacks, by strengthening or enhancing the security controls or mechanisms
at the boundary or the edge of the network, such as firewalls, routers, or gateways. Hardening the
network perimeter can provide some benefits for security, such as enhancing the performance and
the functionality of the network, preventing or mitigating some types of attacks or vulnerabilities,
and supporting the audit and the compliance activities. However, hardening the network perimeter
is not an application security principle, as it is not specific or applicable to the application layer, and it
does not address the internal or the inherent security of the application. Accepting infrastructure
security controls is a risk management concept or technique that involves accepting the residual risk
of an application after applying the security controls or mechanisms provided by the underlying
infrastructure, such as the hardware, the software, the network, or the cloud. Accepting
infrastructure security controls can provide some benefits for security, such as reducing the cost and
the complexity of the security implementation, leveraging the expertise and the resources of the
infrastructure providers, and supporting the audit and the compliance activities. However, accepting
infrastructure security controls is not an application security principle, as it is not a proactive or a
preventive measure to enhance the security of the application, and it may introduce or increase the
dependency or the vulnerability of the application on the infrastructure. Developing independent
modules is a software engineering concept or technique that involves designing or creating the
application as a collection or a composition of discrete or separate components or units, each with a
specific function or purpose, and each with a well-defined interface or contract. Developing
independent modules can provide some benefits for security, such as enhancing the usability and the
maintainability of the application, preventing or isolating some types of errors or bugs, and
supporting the testing and the verification activities. However, developing independent modules is
not an application security principle, as it is not a direct or a deliberate measure to improve the
security of the application, and it may not address or prevent some types of attacks or vulnerabilities
that affect the application as a whole or the interaction between the modules.
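The attack surface minimization principle can be illustrated with a deny-by-default feature toggle (a sketch only; the feature names are invented):

```python
# Minimizing attack surface area: every feature is off unless a deployment
# explicitly needs it, so unused code paths are never reachable.
ALL_FEATURES = {"api", "admin_console", "debug_endpoint", "file_upload"}
ENABLED = {"api"}  # deny by default; enable only what is required

def route(feature):
    if feature not in ENABLED:
        raise PermissionError(f"{feature} is disabled in this deployment")
    return f"serving {feature}"

print(route("api"))  # serving api
```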
Answer: A

Question: 98
When developing a business case for updating a security program, the security program owner
MUST do which of the following?
A. Identify relevant metrics
B. Prepare performance test reports
C. Obtain resources for the security program
D. Interview executive management

Answer: A

Explanation:
When developing a business case for updating a security program, the security program owner must
identify relevant metrics that can help to measure and evaluate the performance and the
effectiveness of the security program, as well as to justify and support the investment and the return
of the security program. A business case is a document or a presentation that provides the rationale
or the argument for initiating or continuing a project or a program, such as a security program, by
analyzing and comparing the costs and the benefits, the risks and the opportunities, and the
alternatives and the recommendations of the project or the program. A business case can provide
some benefits for security, such as enhancing the visibility and the accountability of the security
program, preventing or detecting any unauthorized or improper activities or changes, and
supporting the audit and the compliance activities. A business case can involve various elements and
steps, such as:
Problem statement, which is the description or the definition of the problem or the issue that the
project or the program aims to solve or address, such as a security gap, a security threat, or a security
requirement.
Solution proposal, which is the explanation or the demonstration of the solution or the approach that
the project or the program offers or adopts to solve or address the problem or the issue, such as a
security tool, a security process, or a security standard.
Cost-benefit analysis, which is the calculation or the estimation of the costs and the benefits of the
project or the program, both in quantitative and qualitative terms, such as the financial, operational,
or strategic costs and benefits, and the comparison or the evaluation of the costs and the benefits, to
determine the feasibility and the viability of the project or the program.
Risk assessment, which is the identification and the analysis of the risks or the uncertainties that may
affect the project or the program, both in positive and negative terms, such as the threats,
vulnerabilities, or opportunities, and the estimation or the evaluation of the likelihood and the
impact of the risks, to determine the severity and the priority of the risks, and to develop or
implement the risk mitigation or the risk management strategies or actions.
Alternative analysis, which is the identification and the analysis of the alternative or the comparable
solutions or approaches that may solve or address the problem or the issue, other than the proposed
solution or approach, such as the existing or the available solutions or approaches, or the do-nothing
or the status-quo option, and the comparison or the evaluation of the alternative solutions or
approaches, to determine the advantages and the disadvantages, the strengths and the weaknesses,
and the pros and the cons of each alternative solution or approach.
Recommendation, which is the suggestion or the endorsement of the best or the preferred solution
or approach that can solve or address the problem or the issue, based on the results or the outcomes
of the previous elements or steps, such as the cost-benefit analysis, the risk assessment, or the
alternative analysis, and the justification or the support of the recommendation, by providing the
evidence or the data that can validate or verify the recommendation.
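The risk-assessment element is often quantified with single and annualized loss expectancy figures; a short sketch with entirely hypothetical numbers:

```python
# Annualized Loss Expectancy (ALE): SLE = asset value x exposure factor,
# ALE = SLE x annualized rate of occurrence (all figures hypothetical).
asset_value = 200_000
exposure_factor = 0.25   # fraction of the asset lost per incident
aro = 2                  # expected incidents per year

sle = asset_value * exposure_factor
ale = sle * aro
print(sle, ale)  # 50000.0 100000.0
```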
Identifying relevant metrics is a key element or step of developing a business case for updating a
security program, as it can help to measure and evaluate the performance and the effectiveness of
the security program, as well as to justify and support the investment and the return of the security
program. Metrics are measures or indicators that can quantify or qualify the attributes or the
outcomes of a process or an activity, such as the security program, and that can provide the
information or the feedback that can facilitate the decision making or the improvement of the
process or the activity. Metrics can provide some benefits for security, such as enhancing the
accuracy and the reliability of the security program, preventing or detecting fraud or errors, and
supporting the audit and the compliance activities. Identifying relevant metrics can involve various
tasks or duties, such as:
Defining and documenting the objectives, scope, criteria, and methodology of the metrics, and
ensuring that they are consistent and aligned with the business case and the security program.
Selecting and collecting the data or the evidence that are related to the metrics, using various tools
and techniques, such as surveys, interviews, tests, or audits.
Analyzing and interpreting the data or the evidence that are related to the metrics, using various
methods and models, such as statistical, mathematical, or graphical methods or models.
Reporting and communicating the results or the findings of the metrics, using various formats and
channels, such as reports, dashboards, or presentations.
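A relevant metric for the business case can be as simple as the return the program delivers on its cost; the figures below are invented for illustration:

```python
# A simple business-case metric: return on investment for the program
# (all figures are hypothetical).
annual_benefit = 120_000  # e.g., avoided losses plus saved labor
annual_cost = 45_000      # licenses, staffing, operations

roi = (annual_benefit - annual_cost) / annual_cost
print(f"ROI: {roi:.0%}")  # ROI: 167%
```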
Preparing performance test reports, obtaining resources for the security program, and interviewing
executive management are not the tasks or duties that the security program owner must do when
developing a business case for updating a security program, although they may be related or
possible tasks or duties. Preparing performance test reports is a task or a technique that can be used
by the security program owner, the security program team, or the security program auditor, to verify
or validate the functionality and the quality of the security program, according to the standards and
the criteria of the security program, and to detect and report any errors, bugs, or vulnerabilities in
the security program. Obtaining resources for the security program is a task or a technique that can
be used by the security program owner, the security program sponsor, or the security program
manager, to acquire or allocate the necessary or the sufficient resources for the security program,
such as the financial, human, or technical resources, and to manage or optimize the use or the
distribution of the resources for the security program. Interviewing executive management is a task
or a technique that can be used by the security program owner, the security program team, or the
security program auditor, to collect and analyze the information and the feedback about the security
program, from the executive management, who are the primary users or recipients of the security
program, and who have the authority and the accountability to implement or execute the security
program.

Question: 99
Transport Layer Security (TLS) provides which of the following capabilities for a remote access server?
A. Transport layer handshake compression
B. Application layer negotiation
C. Peer identity authentication
D. Digital certificate revocation

Answer: C

Explanation:
Transport Layer Security (TLS) provides peer identity authentication as one of its capabilities for a
remote access server. TLS is a cryptographic protocol that provides secure communication over a
network. It operates at the transport layer of the OSI model, between the application layer and the
network layer. TLS uses asymmetric encryption to establish a secure session key between the client
and the server, and then uses symmetric encryption to encrypt the data exchanged during the
session. TLS also uses digital certificates to verify the identity of the client and the server, and to
prevent impersonation or spoofing attacks. This process is known as peer identity authentication,
and it ensures that the client and the server are communicating with the intended parties and not
with an attacker. TLS also provides other capabilities for a remote access server, such as data
integrity, confidentiality, and forward secrecy. Reference: Enable TLS 1.2 on servers - Configuration
Manager; How to Secure Remote Desktop Connection with TLS 1.2. - Microsoft Q&A; Enable remote
access from intranet with TLS/SSL certificate (Advanced …

Question: 100
A chemical plant wants to upgrade the Industrial Control System (ICS) to transmit data using
Ethernet instead of RS422. The project manager wants to simplify administration and
maintenance by utilizing the office network infrastructure and staff to implement this
upgrade.
Which of the following is the GREATEST impact on security for the network?
A. The network administrators have no knowledge of ICS
B. The ICS is now accessible from the office network
C. The ICS does not support the office password policy
D. RS422 is more reliable than Ethernet

Answer: B

Explanation:
The greatest impact on security for the network is that the ICS is now accessible from the office
network. This means that the ICS is exposed to more potential threats and vulnerabilities from the
internet and the office network, such as malware, unauthorized access, data leakage, or
denial-of-service attacks. The ICS may also have different security requirements and standards than
the office network, such as availability, reliability, and safety. Therefore, connecting the ICS to the
office network increases the risk of compromising the confidentiality, integrity, and availability of
the ICS and the critical infrastructure it controls. The other options are not as significant as the
increased attack surface and complexity of the network. Reference: Guide to Industrial Control
Systems (ICS) Security | NIST, page 2-1; Industrial Control Systems | Cybersecurity and Infrastructure
Security Agency, page 1.

Question: 101
What does a Synchronous (SYN) flood attack do?
A. Forces Transmission Control Protocol /Internet Protocol (TCP/IP) connections into a reset state
B. Establishes many new Transmission Control Protocol / Internet Protocol (TCP/IP) connections
C. Empties the queue of pending Transmission Control Protocol /Internet Protocol (TCP/IP) requests
D. Exceeds the limits for new Transmission Control Protocol /Internet Protocol (TCP/IP) connections

Answer: D

Explanation:
A SYN flood attack does exceed the limits for new TCP/IP connections. A SYN flood attack is a type of
denial-of-service attack that sends a large number of SYN packets to a server, without completing the
TCP three-way handshake. The server allocates resources for each SYN packet and waits for the final
ACK packet, which never arrives. This consumes the server's memory and processing power, and
prevents it from accepting new legitimate connections. The other options are not accurate
descriptions of what a SYN flood attack does. Reference: SYN flood - Wikipedia; SYN flood DDoS
attack | Cloudflare.
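The exhaustion mechanism can be modeled with a toy backlog (illustrative only; the limit and names are arbitrary):

```python
# Toy SYN backlog: each half-open connection holds server state while waiting
# for a final ACK that a flooder never sends, so the queue fills up.
BACKLOG_LIMIT = 128
half_open = []

def on_syn(src):
    if len(half_open) >= BACKLOG_LIMIT:
        return "refused"       # limit exceeded: legitimate clients locked out
    half_open.append(src)      # allocate state, wait for the final ACK
    return "syn-ack sent"

for i in range(200):           # spoofed SYNs, handshake never completed
    on_syn(f"spoofed-{i}")
print(on_syn("legitimate-client"))  # refused
```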
Question: 102
Access to which of the following is required to validate web session management?
A. Log timestamp
B. Live session traffic
C. Session state variables
D. Test scripts

Answer: C

Explanation:
Access to session state variables is required to validate web session management. Web session
management is the process of maintaining the state and information of a user across multiple
requests and interactions with a web application. Web session management relies on session state
variables, which are data elements that store the user's preferences, settings, authentication status,
and other relevant information for the duration of the session. Session state variables can be stored
on the client side (such as cookies or local storage) or on the server side (such as databases or files).
To validate web session management, it is necessary to access the session state variables and verify
that they are properly generated, maintained, and destroyed by the web application. This can help to
ensure the security, functionality, and performance of the web application and the user experience.
The other options are not required to validate web session management. Log timestamp is a data
element that records the date and time of a user's activity or event on the web application, but it
does not store the user's state or information. Live session traffic is the network data that is
exchanged between the user and the web application during the session, but it does not reflect the
session state variables that are stored on the client or the server side. Test scripts are code segments
that are used to automate the testing of the web application's features and functions, but they do
not access the session state variables directly. Reference: Session Management - OWASP Cheat Sheet
Series; Session Management: An Overview | SecureCoding.com; Session Management in HTTP -
GeeksforGeeks.

Question: 103
Which of the following is the BEST metric to obtain when gaining support for an Identity and Access
Management (IAM) solution?
A. Application connection successes resulting in data leakage
B. Administrative costs for restoring systems after connection failure
C. Employee system timeouts from implementing wrong limits
D. Help desk costs required to support password reset requests
What is the second step in the identity and access provisioning lifecycle?
A. Provisioning
B. Review
C. Approval
D. Revocation
Explanation:
The identity and access provisioning lifecycle is the process of managing the creation, modification,
and termination of user accounts and access rights in an organization. The second step in this
lifecycle is approval, which means that the identity and access requests must be authorized by the
appropriate managers or administrators before they are implemented. Approval ensures that the
principle of least privilege is followed and that only authorized users have access to the required
resources.
Explanation:
Identity and Access Management (IAM) is the process of managing the identities and access rights of users and devices in an organization. IAM solutions can provide various benefits, such as improving security, compliance, productivity, and user experience. However, implementing an IAM solution may also require significant investment and resources, and therefore it is important to obtain support from the stakeholders and decision-makers. One of the best metrics to obtain when gaining support for an IAM solution is the help desk costs required to support password reset requests. This metric can demonstrate the following advantages of an IAM solution:
Reducing the workload and expenses of the help desk staff, who often spend a large amount of time and money on handling password reset requests from users who forget or lose their passwords.
Enhancing the security and compliance of the organization, by reducing the risks of unauthorized access, identity theft, phishing, and credential compromise, which can result from weak or shared passwords, or passwords that are not changed frequently or securely.
Improving the productivity and user experience of the users, by enabling them to reset their own passwords quickly and easily, without having to contact the help desk or wait for a response. This can also reduce the downtime and frustration of the users, and increase their satisfaction and loyalty.
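The savings this metric captures come down to simple arithmetic. The sketch below (all figures hypothetical, for illustration only) computes the annual help desk spend on password resets that a self-service IAM solution could eliminate:

```python
def annual_reset_cost(tickets_per_month, minutes_per_ticket, hourly_rate):
    # Yearly help desk cost attributable to password resets:
    # tickets/year * hours/ticket * loaded hourly rate of help desk staff.
    hours = tickets_per_month * 12 * minutes_per_ticket / 60
    return hours * hourly_rate

# Hypothetical organization: 400 tickets/month, 15 min each, $30/hr staff cost.
print(annual_reset_cost(400, 15, 30))  # -> 36000.0 per year
```

Presenting a figure like this to decision-makers turns an abstract security argument into a budget line that an IAM deployment can directly reduce.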
Question: 104
Answer: C
Answer: D
Which of the following would BEST support effective testing of patch compatibility when patches
are applied to an organization’s systems?
A. Standardized configurations for devices
B. Standardized patch testing equipment
C. Automated system patching
D. Management support for patching
An international medical organization with headquarters in the United States (US) and
branches in France wants to test a drug in both countries. What is the organization
allowed to do with the test subject’s data?
A. Aggregate it into one database in the US
B. Process it in the US, but store the information in France
C. Share it with a third party
D. Anonymize it and process it in the US
Explanation:
Anonymizing the test subject’s data means removing or masking any personally identifiable
information (PII) that could be used to identify or trace the individual. This can help to protect the
privacy and confidentiality of the test subjects, as well as comply with the data protection laws and
regulations of both countries. Processing the anonymized data in the US can also help to reduce the
Explanation:
Standardized configurations for devices can help to reduce the complexity and variability of the systems that need to be patched, and thus facilitate the testing of patch compatibility. Standardized configurations can also help to ensure that the patches are applied consistently and correctly across the organization. Standardized patch testing equipment, automated system patching, and management support for patching are also important factors for effective patch management, but they are not directly related to testing patch compatibility. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 605; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 386.
Question: 105
Question: 106
Answer: A
Answer: D
It is MOST important to perform which of the following to minimize potential impact when
implementing a new vulnerability scanning tool in a production environment?
A. Negotiate schedule with the Information Technology (IT) operation’s team
B. Log vulnerability summary reports to a secured server
C. Enable scanning during off-peak hours
D. Establish access for Information Technology (IT) management
Due to system constraints, a group of system administrators must share a high-level access set of
credentials. Which of the following would be MOST appropriate to implement?
A. Increased console lockout times for failed logon attempts
B. Reduce the group in size
C. A credential check-out process for a per-use basis
D. Full logging on affected systems
Explanation:
It is most important to negotiate the schedule with the IT operations team to minimize the potential impact when implementing a new vulnerability scanning tool in a production environment. This is because a vulnerability scan can cause network congestion, performance degradation, or system instability, which can affect the availability and functionality of the production systems. Therefore, it is essential to coordinate with the IT operations team to determine the best time and frequency for the scan, as well as the scope and intensity of the scan. Logging vulnerability summary reports, enabling scanning during off-peak hours, and establishing access for IT management are also good practices for vulnerability scanning, but they are not as important as negotiating the schedule with the IT operations team. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 858; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 794.
costs and risks of transferring the data across borders. Aggregating the data into one database in the
US, processing it in the US but storing it in France, or sharing it with a third party could all pose
potential privacy and security risks, as well as legal and ethical issues, for the organization and the
test subjects. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk
Management, page 67; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and
Risk Management, page 62.
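One common way to strip direct identifiers before cross-border processing is salted one-way hashing, sketched below. Note the hedge in the comments: this is strictly pseudonymization, a weaker guarantee than true anonymization, and the field names are hypothetical.

```python
import hashlib

def pseudonymize(record, pii_fields, salt):
    # Replace direct identifiers with salted one-way hashes so records can
    # still be linked for analysis without exposing the subject's identity.
    # NOTE: this is pseudonymization, not full anonymization; regulations
    # such as the GDPR treat the two differently. Sketch only.
    out = dict(record)
    for field in pii_fields:
        digest = hashlib.sha256((salt + str(record[field])).encode()).hexdigest()
        out[field] = digest[:16]
    return out

# Hypothetical trial record from the French branch.
record = {"subject_id": "FR-1042", "outcome": "improved"}
masked = pseudonymize(record, ["subject_id"], "trial-salt")
```

Because the hash is deterministic for a given salt, the US and French datasets can still be joined on the masked identifier while the raw PII never leaves France.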
Question: 107
Question: 108
Answer: A
Which of the following is the MOST efficient mechanism to account for all staff during a speedy
nonemergency evacuation from a large security facility?
A. Large mantrap where groups of individuals leaving are identified using facial recognition
technology
B. Radio Frequency Identification (RFID) sensors worn by each employee scanned by sensors at each exit door
C. Emergency exits with push bars with coordinates at each exit checking off the individual against a
predefined list
D. Card-activated turnstile where individuals are validated upon exit
Explanation:
The most appropriate measure to implement when a group of system administrators must share a high-level access set of credentials due to system constraints is a credential check-out process on a per-use basis. This means that the system administrators must request and obtain the credentials from a secure source each time they need to use them, and return them after they finish their tasks. This can help to reduce the risk of unauthorized access, misuse, or compromise of the credentials, as well as to enforce accountability and traceability of the system administrators' actions. Increasing console lockout times, reducing the group size, and enabling full logging are not as effective as a credential check-out process, as they do not address the root cause of the problem, which is the sharing of the credentials. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 633; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 412.
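The check-out/check-in mechanism can be sketched as a small ledger class (a hypothetical design for illustration; commercial privileged access management tools also rotate the secret at every check-in):

```python
import time

class CredentialVault:
    # Per-use check-out sketch for one shared high-level credential.
    def __init__(self, secret):
        self._secret = secret
        self._holder = None
        self.ledger = []  # (who, action, timestamp): the accountability trail

    def check_out(self, admin):
        # Only one administrator may hold the credential at a time.
        if self._holder is not None:
            raise RuntimeError("credential already checked out")
        self._holder = admin
        self.ledger.append((admin, "check_out", time.time()))
        return self._secret

    def check_in(self, admin):
        # Returning the credential closes the accountability window.
        if self._holder != admin:
            raise RuntimeError("only the current holder can check in")
        self._holder = None
        self.ledger.append((admin, "check_in", time.time()))
```

The ledger is what restores individual accountability: even though the credential itself is shared, every use is attributable to exactly one named administrator for a bounded interval.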
Explanation:
Using RFID sensors worn by employees is the most efficient mechanism for accounting for staff during a
speedy non-emergency evacuation in a large facility. RFID systems allow automatic, real-time tracking
without manual intervention. The sensors can quickly and accurately identify when employees pass
through exit points, ensuring that everyone is accounted for without delays.
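The accounting step itself is a set difference between the staff roster and the badges seen at the exits, as this small sketch (hypothetical data) shows:

```python
def unaccounted_for(roster, badge_scans):
    # Staff on the roster whose RFID badge was never read at an exit sensor;
    # these are the people the evacuation wardens still need to locate.
    return sorted(set(roster) - set(badge_scans))

# Hypothetical evacuation: three employees, two badges seen at the exits.
missing = unaccounted_for(["ana", "bo", "cy"], ["bo", "ana"])
```

Because the reconciliation is automatic and real-time, it scales to a large facility far better than manual checklists at push-bar exits.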
Question: 109
Question: 110
Answer: C
Answer: B
Which of the following is the MOST challenging issue in apprehending cyber criminals?
A. They often use sophisticated method to commit a crime.
B. It is often hard to collect and maintain integrity of digital evidence.
C. The crime is often committed from a different jurisdiction.
D. There is often no physical evidence involved.
Explanation:
Code quality, security, and origin are important criteria when designing procedures and acceptance
criteria for acquired software. Code quality refers to the degree to which the software meets the
functional and nonfunctional requirements, as well as the standards and best practices for coding.
Security refers to the degree to which the software protects the confidentiality, integrity, and
availability of the data and the system. Origin refers to the source and ownership of the software, as
well as the licensing and warranty terms. Architecture, hardware, and firmware are not criteria for
Which of the following are important criteria when designing procedures and acceptance criteria for
acquired software?
A. Code quality, security, and origin
B. Architecture, hardware, and firmware
C. Data quality, provenance, and scaling
D. Distributed, agile, and bench testing
Explanation:
The most challenging issue in apprehending cyber criminals is that the crime is often committed
from a different jurisdiction. This means that the cyber criminals may operate from a different
country or region than the victim or the target, and thus may be subject to different laws,
regulations, and enforcement agencies. This can create difficulties and delays in identifying, locating,
and prosecuting the cyber criminals, as well as in obtaining and preserving the digital evidence. The
other issues, such as the sophistication of the methods, the integrity of the evidence, and the lack of
physical evidence, are also challenges in apprehending cyber criminals, but they are not as significant
as the jurisdiction issue. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security
Operations, page 475; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication
and Network Security, page 544.
Question: 111
Answer: C
Answer: A
What is the PRIMARY role of a scrum master in agile development?
A. To choose the primary development language
Explanation:
The first step when purchasing Commercial Off-The-Shelf (COTS) software is to establish policies and procedures on system and services acquisition. This involves defining the objectives, scope, and criteria for acquiring the software, as well as the roles and responsibilities of the stakeholders involved in the acquisition process. The policies and procedures should also address the legal, contractual, and regulatory aspects of the acquisition, such as the terms and conditions, the service level agreements, and the compliance requirements. Undergoing a security assessment, establishing a risk management strategy, and hardening the hosting server are not the first steps when purchasing COTS software, but they may be part of the subsequent steps, such as the evaluation, selection, and implementation of the software. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 64; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 56.
acquired software, but for the system that hosts the software. Data quality, provenance, and scaling
are not criteria for acquired software, but for the data that the software processes. Distributed, agile,
and bench testing are not criteria for acquired software, but for the software development and
testing methods. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software
Development Security, page 947; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7:
Software Development Security, page 869.
Which of the following steps should be performed FIRST when purchasing Commercial Off-The-Shelf
(COTS) software?
A. undergo a security assessment as part of authorization process
B. establish a risk management strategy
C. harden the hosting server, and perform hosting and application vulnerability scans
D. establish policies and procedures on system and services acquisition
Question: 112
Question: 113
Answer: D
B. To choose the integrated development environment
C. To match the software requirements to the delivery plan
D. To project manage the software delivery
Which of the following techniques is known to be effective in spotting resource exhaustion
problems, especially with resources such as processes, memory, and connections?
A. Automated dynamic analysis
B. Automated static analysis
C. Manual code review
D. Fuzzing
Explanation:
The primary role of a scrum master in agile development is to match the software requirements to
the delivery plan. A scrum master is a facilitator who helps the development team and the product
owner to collaborate and deliver the software product incrementally and iteratively, following the
agile principles and practices. A scrum master is responsible for ensuring that the team follows the
scrum framework, which includes defining the product backlog, planning the sprints, conducting the
daily stand-ups, reviewing the deliverables, and reflecting on the process. A scrum master is not
responsible for choosing the primary development language, the integrated development
environment, or project managing the software delivery, although they may provide guidance and
support to the team on these aspects. Reference: CISSP All-in-One Exam Guide, Eighth Edition,
Chapter 8: Software Development Security, page 933; Official (ISC)2 Guide to the CISSP CBK, Fifth
Edition, Chapter 7: Software Development Security, page 855.
Explanation:
Fuzzing is a technique that is known to be effective in spotting resource exhaustion problems,
especially with resources such as processes, memory, and connections. Fuzzing is a type of testing
that involves sending random, malformed, or unexpected input to the system or application, and
observing its behavior and response. Fuzzing can help to identify resource exhaustion problems, such
as memory leaks, buffer overflows, or connection timeouts, which can affect the availability,
functionality, or security of the system or application. Fuzzing can also help to discover other types of
vulnerabilities, such as logic errors, input validation errors, or exception handling errors. Automated
dynamic analysis, automated static analysis, and manual code review are not techniques that are
Question: 114
Answer: C
Answer: D
Which one of the following is an advantage of an effective release control strategy from a
configuration control standpoint?
A. Ensures that a trace for all deliverables is maintained and auditable
B. Enforces backward compatibility between releases
C. Ensures that there is no loss of functionality between releases
D. Allows for future enhancements to existing features
Explanation:
An advantage of an effective release control strategy from a configuration control standpoint is that it
ensures that a trace for all deliverables is maintained and auditable. Release control is a process that
manages the distribution and installation of software releases into the operational environment.
Configuration control is a process that maintains the integrity and consistency of the software
configuration items throughout the software development life cycle. An effective release control
strategy can help to ensure that a trace for all deliverables is maintained and auditable, which means
that the origin, history, and status of each software release can be tracked and verified. This can help
to prevent unauthorized or incompatible changes, as well as to facilitate troubleshooting and
recovery. Enforcing backward compatibility, ensuring no loss of functionality, and allowing for future
enhancements are not advantages of release control from a configuration control standpoint, but
from a functionality or performance standpoint. Reference: CISSP All-in-One Exam Guide, Eighth
Edition, Chapter 8: Software Development Security, page 969; Official (ISC)2 Guide to the CISSP CBK,
Fifth Edition, Chapter 7: Software Development Security, page 895.
known to be effective in spotting resource exhaustion problems, although they may be used for other
types of testing or analysis. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8:
Software Development Security, page 1001; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition,
Chapter 7: Software Development Security, page 923.
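The core fuzzing loop is simple enough to sketch. The toy parser and harness below are hypothetical (real fuzzers such as AFL or libFuzzer add coverage feedback and input mutation), but they show how random malformed input exposes error paths, including the resource-exhausting ones:

```python
import random

def parse_length_prefixed(data):
    # Toy parser under test: the first byte declares the payload length.
    if not data:
        raise ValueError("empty input")
    declared = data[0]
    payload = data[1:1 + declared]
    if len(payload) != declared:
        raise ValueError("truncated payload")
    return payload

def fuzz(iterations=200, seed=1):
    # Minimal fuzz harness: throw random blobs at the parser and count
    # handled errors; any *unhandled* exception would be a finding.
    rng = random.Random(seed)
    failures = 0
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            parse_length_prefixed(blob)
        except ValueError:
            failures += 1  # handled error path, not a crash
        except Exception:
            return "unhandled crash"
    return failures
```

In a resource-exhaustion campaign the harness would additionally watch process count, memory, and open connections while the inputs are replayed, flagging any input that leaves a resource allocated.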
The design review for an application has been completed and is ready for release. What technique should an
organization use to assure application integrity?
A. Application authentication
B. Input validation
C. Digital signing
Question: 115
Question: 116
Answer: A
D. Device encryption
What is the BEST location in a network to place Virtual Private Network (VPN) devices when an
internal review reveals network design flaws in remote access?
A. In a dedicated Demilitarized Zone (DMZ)
B. In its own separate Virtual Local Area Network (VLAN)
C. At the Internet Service Provider (ISP)
D. Outside the external firewall
Explanation:
The best location in a network to place Virtual Private Network (VPN) devices when an internal
review reveals network design flaws in remote access is in a dedicated Demilitarized Zone (DMZ). A
DMZ is a network segment that is located between the internal network and the external network,
such as the internet. A DMZ is used to host the services or devices that need to be accessed by both
the internal and external users, such as web servers, email servers, or VPN devices. A VPN device is
a device that enables the establishment of a VPN, which is a secure and encrypted connection
between two networks or endpoints over a public network, such as the internet. Placing the VPN
devices in a dedicated DMZ can help to improve the security and performance of the remote access,
as well as to isolate the VPN devices from the internal network and the external network. Placing
the VPN devices in its own separate VLAN, at the ISP, or outside the external firewall are not the
best locations, as they may expose the VPN devices to more risks, reduce the control over the VPN
Explanation:
The technique that an organization should use to assure application integrity is digital signing. Digital
signing is a technique that uses cryptography to generate a digital signature for a message or a
document, such as an application. The digital signature is a value that is derived from the message
and the sender’s private key, and it can be verified by the receiver using the sender’s public key.
Digital signing can help to assure application integrity, which means that the application has not
been altered or tampered with during the transmission or storage. Digital signing can also help to
assure application authenticity, which means that the application originates from the legitimate
source. Application authentication, input validation, and device encryption are not techniques that
can assure application integrity, but they can help to assure application security, usability, or
confidentiality, respectively. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5:
Security Engineering, page 607; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3:
Security Architecture and Engineering, page 388.
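The integrity property digital signing provides can be demonstrated with a plain hash, as in the stdlib sketch below. The hedge matters: hashing alone proves only that the bytes are unchanged; a real digital signature signs this digest with the publisher's private key (e.g. RSA or Ed25519) so verifiers also get proof of origin.

```python
import hashlib

def release_digest(artifact):
    # Tamper-detection sketch: publisher computes a digest at release time;
    # consumers recompute it and compare before installing.
    return hashlib.sha256(artifact).hexdigest()

# Hypothetical approved release artifact.
approved_build = b"release binary bytes (placeholder)"
published_digest = release_digest(approved_build)
```

Any single-byte modification of the artifact changes the digest, which is exactly the integrity assurance the question asks for; the asymmetric key pair on top of it is what prevents an attacker from simply republishing a new digest alongside a tampered build.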
Question: 117
Answer: C
Answer: A
What Is the FIRST step in establishing an information security program?
A. Establish an information security policy.
B. Identify factors affecting information security.
C. Establish baseline security controls.
D. Identify critical security infrastructure.
Which of the following is MOST effective in detecting information hiding in Transmission Control
Protocol/internet Protocol (TCP/IP) traffic?
A. Stateful inspection firewall
B. Application-level firewall
C. Content-filtering proxy
D. Packet-filter firewall
Explanation:
The first step in establishing an information security program is to establish an information security
policy. An information security policy is a document that defines the objectives, scope, principles,
and responsibilities of the information security program. An information security policy provides the
foundation and direction for the information security program, as well as the basis for the
development and implementation of the information security standards, procedures, and guidelines.
An information security policy should be approved and supported by the senior management, and
communicated and enforced across the organization. Identifying factors affecting information
security, establishing baseline security controls, and identifying critical security infrastructure are not
the first steps in establishing an information security program, but they may be part of the
subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. Reference: CISSP
All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official
(ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.
devices, or create a single point of failure for the remote access. Reference: CISSP All-in-One Exam
Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2
Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 509.
Question: 118
Question: 119
Answer: B
Answer: A
Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?
A. Have the service provider block the source address.
B. Have the source service provider block the address.
C. Block the source address at the firewall.
D. Block all inbound traffic until the flood ends.
Explanation:
The best way to reduce the impact of an externally sourced flood attack is to have the service
provider block the source address. A flood attack is a type of denial-of-service attack that aims to
overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP
packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from
outside the target’s network, such as from the internet. Having the service provider block the source
address can help to reduce the impact of an externally sourced flood attack, as it can prevent the
malicious traffic from reaching the target’s network, and thus conserve the network bandwidth and
resources. Having the source service provider block the address, blocking the source address at the
firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact
of an externally sourced flood attack, as they may not be feasible, effective, or efficient,
respectively. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and
Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4:
Communication and Network Security, page 525.
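The reason provider-side blocking wins can be reduced to where the drop happens. This sketch (hypothetical packet representation) models the upstream filter; the key point is that filtered packets never traverse the victim's link, whereas a drop at the victim's own firewall happens only after the bandwidth is already consumed:

```python
def scrub_upstream(packets, blocked_sources):
    # Upstream filtering sketch: the service provider discards the attacker's
    # packets inside its own network, before they reach the victim's link.
    return [p for p in packets if p["src"] not in blocked_sources]

# Hypothetical flood: three packets from the attacker, one legitimate.
flood = [{"src": "203.0.113.9"}] * 3 + [{"src": "198.51.100.2"}]
survivors = scrub_upstream(flood, {"203.0.113.9"})
```

Legitimate traffic still gets through, unlike the blunt option of blocking all inbound traffic until the flood ends.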
Explanation:
An application-level firewall is the most effective in detecting information hiding in TCP/IP traffic. Information hiding is a technique that conceals data or messages within other data or messages, such as using steganography, covert channels, or encryption. An application-level firewall is a type of firewall that operates at the application layer of the OSI model, and inspects the content and context of the network packets, such as the headers, payloads, or protocols. An application-level firewall can help to detect information hiding in TCP/IP traffic, as it can analyze the data for any anomalies, inconsistencies, or violations of the expected format or behavior. A stateful inspection firewall, a content-filtering proxy, and a packet-filter firewall are not as effective in detecting information hiding in TCP/IP traffic, as they operate at lower layers of the OSI model, and only inspect the state, content, or header of the network packets, respectively. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 731; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 511.
Question: 120
Answer: A
Topic 13, NEW Questions C
Which of the following is used to support the principle of defense in depth during the development
phase of a software product?
A. Security auditing
B. Polyinstantiation
C. Maintenance
D. Known vulnerability list
Explanation:
Polyinstantiation is a technique that creates multiple versions of the same data with different security labels. This can prevent unauthorized users from inferring sensitive information from aggregated data or queries. Polyinstantiation can support the principle of defense in depth during the development phase of a software product by providing an additional layer of protection for data confidentiality and integrity. Reference: 1: CISSP CBK, 4th edition, page 352; 2: CISSP Official (ISC)2 Practice Tests, 3rd edition, page 123
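A classic way to picture polyinstantiation is a table keyed by both the logical record and its security label, so each clearance level retrieves its own instance of "the same" row. The sketch below uses hypothetical data (a textbook-style cargo-manifest example, not from the cited references):

```python
# Polyinstantiation sketch: one logical key, multiple tuples distinguished
# by security label. A low-clearance query gets the cover story instead of
# an access-denied error, which would itself leak that something is hidden.
manifest = {
    ("flight101", "unclassified"): {"cargo": "supplies"},
    ("flight101", "secret"): {"cargo": "weapons"},
}

def query(flight, clearance):
    return manifest[(flight, clearance)]
```

Returning a plausible unclassified row instead of refusing the query is what defeats the inference attack: the existence of the secret instance is itself invisible below the secret label.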
Company A is evaluating new software to replace an in-house developed application. During the
acquisition process, Company A specified the security requirements, as well as the functional
requirements. Company B responded to the acquisition request with their flagship product that runs
on an Operating System (OS) that Company A has never used nor evaluated. The flagship product
meets all security and functional requirements as defined by Company A. Based upon Company
B's response, what step should Company A take?
A. Move ahead with the acquisition process, and purchase the flagship software
B. Conduct a security review of the OS
C. Perform functionality testing
D. Enter into contract negotiations ensuring Service Level Agreements (SLA) are established to
include security patching
Question: 121
Question: 122
Answer: B
Answer: B
What is maintained by using write blocking devices when forensic evidence is examined?
A. Inventory
B. Integrity
C. Confidentiality
D. Availability
DRAG DROP
Match the level of evaluation to the correct common criteria (CC) assurance level.
Drag each level of evaluation on the left to its corresponding CC assurance level on the right.
Explanation:
Write blocking devices are used to prevent any modification of the forensic evidence when it is
examined. This preserves the integrity of the evidence and ensures its admissibility in court. Write
blocking devices do not affect the inventory, confidentiality, or availability of the
evidence. Reference: 1, p. 1030; [4], p. 17
Explanation:
Company A should conduct a security review of the OS that Company B’s flagship product runs on,
since it is unfamiliar to them and may introduce new risks or vulnerabilities. The security review
should evaluate the OS’s security features, patches, updates, configuration, and compatibility with
Company A’s environment. Moving ahead with the acquisition process without reviewing the OS,
performing functionality testing, or entering into contract negotiations are premature steps that may
compromise Company A’s security posture. Reference: 1, p. 1019; 3, p. 15
Question: 123
Question: 124
Answer: B
Explanation:
The correct matches are as follows:
Structurally tested -> Assurance Level 2
Methodically tested and checked -> Assurance Level 3
Methodically designed, tested, and reviewed -> Assurance Level 4
Functionally tested -> Assurance Level 1
Semiformally verified design and tested -> Assurance Level 6
Formally verified design and tested -> Assurance Level 7
Semiformally designed and tested -> Assurance Level 5
The Common Criteria (CC) is an international standard for evaluating the security and assurance of
information technology products and systems. The CC defines seven levels of evaluation assurance
levels (EALs), ranging from EAL1 (the lowest) to EAL7 (the highest), that indicate the degree of
confidence and rigor in the evaluation process. Each EAL consists of a set of assurance components
that specify the requirements for the security functions, development, guidance, testing,
vulnerability analysis, and life cycle support of the product or system. The CC also defines several
levels of evaluation that correspond to the EALs, based on the methods and techniques used to
Answer:
Which is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
A. Issued Phase
B. Cancellation Phase
C. Implementation phase
D. Initialization Phase
evaluate the product or system. The levels of evaluation are:
Functionally tested: The product or system is tested against its functional specification and provides a
basic level of assurance. This level corresponds to EAL1.
Structurally tested: The product or system is tested against its functional and high-level design
specifications and provides a low level of assurance. This level corresponds to EAL2.
Methodically tested and checked: The product or system is tested against its functional, high-level,
and low-level design specifications and provides a moderate level of assurance. This level
corresponds to EAL3.
Methodically designed, tested, and reviewed: The product or system is tested against its functional,
high-level, low-level, and implementation specifications and provides a moderate to high level of
assurance. This level corresponds to EAL4.
Semiformally designed and tested: The product or system is tested against its functional, high-level,
low-level, and implementation specifications, using a semiformal notation and methods. This level
provides a high level of assurance. This level corresponds to EAL5.
Semiformally verified design and tested: The product or system is tested against its functional, high-
level, low-level, and implementation specifications, using a semiformal notation and methods, and
verified against a formal security model. This level provides a higher level of assurance. This level
corresponds to EAL6.
Formally verified design and tested: The product or system is tested against its functional, high-level,
low-level, and implementation specifications, using a formal notation and methods, and verified
against a formal security model. This level provides the highest level of assurance. This level
corresponds to EAL7.
Reference: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3: Security Engineering, Section:
Security Evaluation Models, Subsection: Common Criteria; CISSP All-in-One Exam Guide, Eighth
Edition, Chapter 3: Security Engineering, Section: Evaluation Criteria.
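For quick recall, the drag-and-drop answer above reduces to a straight lookup table, which can be written out directly (the mapping restates the answer; only the dictionary name is invented):

```python
# Common Criteria levels of evaluation mapped to Evaluation Assurance Levels,
# exactly as matched in the drag-and-drop answer.
EVALUATION_TO_EAL = {
    "Functionally tested": 1,
    "Structurally tested": 2,
    "Methodically tested and checked": 3,
    "Methodically designed, tested, and reviewed": 4,
    "Semiformally designed and tested": 5,
    "Semiformally verified design and tested": 6,
    "Formally verified design and tested": 7,
}
```

Note the pattern that makes the ordering memorable: testing depth increases through EAL4, "semiformal" methods cover EAL5-6, and full formal verification appears only at EAL7.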
Question: 125
Limiting the processor, memory, and Input/output (I/O) capabilities of mobile code is known as
A. code restriction.
B. on-demand compile.
Which of the following is MOST important when determining appropriate countermeasures for an
identified risk?
A. Interaction with existing controls
B. Cost
C. Organizational risk tolerance
D. Patch availability
Explanation:
The second phase of public key infrastructure (PKI) key/certificate life-cycle management is the
issued phase, where the certificate authority (CA) issues a digital certificate to the requester after
verifying their identity and public key. The certificate contains the public key, the identity of the
owner, the validity period, the serial number, and the digital signature of the CA. The certificate is
then published in a repository or directory for others to access and validate. Reference: CISSP Study
Guide: Key Management Life Cycle, Key Management - OWASP Cheat Sheet Series, CISSP 2021:
Software Development Lifecycles & Ecosystems
Explanation:
The most important factor when determining appropriate countermeasures for an identified risk is
the organizational risk tolerance, which is the level of risk that the organization is willing to accept or
reject. The risk tolerance reflects the organization’s mission, objectives, culture, and values, and
influences the selection and implementation of security controls. The risk tolerance also helps to
balance the cost and benefit of the countermeasures, as well as the interaction with existing controls
and the availability of patches. Reference: CISSP domain 1: Security and risk management, Risk
management concepts and the CISSP (part 1), Learn About the Different Types of Risk Analysis in
CISSP, Risk Response, countermeasures, considerations and controls, The 8 CISSP Domains Explained
Question: 126
Question: 127
Answer: C
Answer: A
C. sandboxing.
D. compartmentalization.
Which of the following security testing strategies is BEST suited for companies with low to moderate
security maturity?
A. Load Testing
B. White-box testing
C. Black-box testing
D. Performance testing
Explanation:
Mobile code is a term that refers to any code that can be transferred from one system to another and
executed on the target system, such as Java applets, ActiveX controls, or JavaScript scripts. Limiting
the processor, memory, and input/output (I/O) capabilities of mobile code is known as sandboxing.
Sandboxing is a security technique that isolates the mobile code from the rest of the system and
restricts its access to the system resources, such as files, network, or registry. Sandboxing can prevent
the mobile code from causing harm or damage to the system, such as installing malware, stealing
data, or modifying settings. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8:
Software Development Security, page 431; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter
8: Software Development Security, page 571
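The sandboxing idea described above can be sketched in Python: an untrusted expression (standing in for mobile code) is only permitted a small whitelist of operations, and anything outside that whitelist, such as imports, attribute access, or function calls, is rejected. This is an illustrative sketch under stated assumptions, not a production sandbox; the name `sandbox_eval` and the chosen whitelist are assumptions for the example.

```python
import ast
import operator

# Whitelist of arithmetic operations the untrusted code may use;
# everything else (names, calls, imports, attributes) is rejected.
_ALLOWED_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def sandbox_eval(expression: str):
    """Evaluate an untrusted arithmetic expression inside a tiny sandbox."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_OPS:
            return _ALLOWED_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        # Any other node type falls outside the sandbox's capabilities.
        raise ValueError(f"operation not permitted in sandbox: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval"))
```

An expression such as `"__import__('os').system('ls')"` parses to a call node, which the sandbox rejects before it can touch any system resource, mirroring how sandboxing restricts what mobile code may reach.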
Explanation:
Black-box testing is a security testing strategy that simulates an external attack on a system or
application, without any prior knowledge of its internal structure, design, or implementation. Black-
box testing is best suited for companies with low to moderate security maturity, as it can reveal the
most obvious and common vulnerabilities, such as misconfigurations, default credentials, or
unpatched software. Black-box testing can also provide a realistic assessment of the system’s security
posture from an attacker’s perspective. Reference: CISSP All-in-One Exam Guide, Eighth Edition,
Chapter 6: Security Assessment and Testing, page 287; [Official (ISC)2 CISSP CBK Reference, Fifth
Edition, Chapter 6: Security Assessment and Testing, page 413]
Question: 128
Question: 129
Answer: C
Answer: C
Which of the following are core categories of malicious attack against Internet of Things (IOT)
devices?
A. Packet capture and false data injection
B. Packet capture and brute force attack
C. Node capture and Structured Query Language (SQL) injection
D. Node capture and false data injection
What is the document that describes the measures that have been implemented or planned to
correct any deficiencies noted during the assessment of the security controls?
A. Business Impact Analysis (BIA)
B. Security Assessment Report (SAR)
C. Plan of Action and Milestones (POA&M)
D. Security Assessment Plan (SAP)
Explanation:
Node capture and false data injection are core categories of malicious attack against Internet of
Things (IoT) devices. Node capture is an attack that compromises a physical IoT device and gains
access to its data, configuration, or functionality. False data injection is an attack that alters or
fabricates the data transmitted or received by an IoT device, which can affect the integrity,
availability, or reliability of the IoT system. These attacks can have serious consequences for IoT
applications that involve critical infrastructure, health care, or smart cities. Reference: CISSP All-in-
One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 195;
[Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security,
page 269]
Explanation:
The document that describes the measures that have been implemented or planned to correct any
deficiencies noted during the assessment of the security controls is the Plan of Action and Milestones
(POA&M). A POA&M is a tool that helps to track and manage the remediation actions for the
identified weaknesses or gaps in the security controls. A POA&M typically includes the following
elements: the description of the weakness, the source of the
weakness, the risk level of the
Question: 130
Answer: C
Answer: B
Which of the following is a characteristic of a challenge/response authentication process?
A. Presenting distorted graphics of text for authentication
B. Transmitting a hash based on the user's password
C. Using a password history blacklist
D. Requiring the use of non-consecutive numeric characters
DRAG DROP
Given a file containing ordered numbers, i.e. “123456789,” match each of the following Redundant
Array of Independent Disks (RAID) levels to the corresponding visual representation. Note: P() = parity.
Drag each level to the appropriate place on the diagram.
Explanation:
A characteristic of a challenge/response authentication process is transmitting a hash based on the
user’s password. A challenge/response authentication process is a type of authentication method
that involves the exchange of a challenge and a response between the authenticator and the
authenticatee. The challenge is usually a random or unpredictable value, such as a nonce or a
timestamp, that is sent by the authenticator to the authenticatee. The response is usually a value
that is derived from the challenge and the user’s password, such as a hash or a message
authentication code (MAC), that is sent by the authenticatee to the authenticator. The authenticator
then verifies the response by applying the same algorithm and password to the challenge, and
comparing the results. If the response matches the expected value, the authentication is successful.
Transmitting a hash based on the user’s password can provide a secure and efficient way of proving
the user’s identity, without revealing the password in plaintext or requiring the storage of the
password on the authenticator. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5:
Identity and Access Management, page 208; [Official (ISC)2 CISSP CBK Reference, Fifth Edition,
Chapter 5: Identity and Access Management, page 297]
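The exchange described above can be sketched in a few lines: the authenticator issues a random challenge, and the authenticatee returns an HMAC of that challenge keyed with the password, so the password itself never crosses the wire. This is a minimal illustrative sketch, not a specific protocol such as CHAP or NTLM; the function names and the choice of SHA-256 are assumptions for the example.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    # Authenticator sends a random, unpredictable nonce; it is never reused.
    return secrets.token_bytes(16)

def compute_response(password: str, challenge: bytes) -> bytes:
    # Authenticatee proves knowledge of the password without revealing it:
    # only an HMAC of the challenge, keyed with the password, is transmitted.
    return hmac.new(password.encode(), challenge, hashlib.sha256).digest()

def verify_response(password: str, challenge: bytes, response: bytes) -> bool:
    # Authenticator recomputes the expected value and compares in constant time.
    expected = compute_response(password, challenge)
    return hmac.compare_digest(expected, response)
```

Because the challenge is random per attempt, a captured response cannot simply be replayed against a later challenge.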
weakness, the proposed corrective action, the responsible party, the estimated completion date, and
the status of the action. A POA&M can help to prioritize the remediation efforts, monitor the
progress, and report the results. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6:
Security Assessment and Testing, page 295; [Official (ISC)2 CISSP CBK Reference, Fifth Edition,
Chapter 6: Security Assessment and Testing, page 421]
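The POA&M elements listed above can be modeled as a simple tracking record. The field names below are hypothetical, chosen only to mirror the elements in the explanation; a real POA&M format (e.g., an agency template) would define its own schema.

```python
from dataclasses import dataclass

@dataclass
class PoamItem:
    """One tracked weakness in a Plan of Action and Milestones (POA&M)."""
    description: str           # description of the weakness
    source: str                # where the weakness was identified
    risk_level: str            # e.g. "low", "moderate", "high"
    corrective_action: str     # proposed remediation
    responsible_party: str     # who owns the remediation
    estimated_completion: str  # target milestone date
    status: str = "open"       # updated as remediation progresses
```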
Question: 131
Question: 132
Answer: B
Explanation:
RAID 1 -> Top left
RAID 0 -> Top right
RAID 5 -> Bottom left
RAID 10 -> Bottom right
Comprehensive Explanation: The correct answer is to drag each level to the appropriate place on the
diagram as shown below:
![RAID levels]
The rationale for the answer is based on the definition and characteristics of each RAID level and the
given file containing ordered numbers. RAID stands for Redundant Array of Independent Disks, and it
is a technology that combines multiple physical disks into a logical unit that provides improved
performance, reliability, or capacity. RAID levels are the different ways of organizing and distributing
data across the disks, using techniques such as mirroring, striping, or parity. Mirroring means
creating an exact copy of the data on another disk, which provides fault tolerance and redundancy.
Striping means dividing the data into blocks and spreading them across multiple disks, which
provides speed and performance. Parity means calculating and storing an extra bit of information
that can be used to reconstruct the data in case of a disk failure, which provides error correction and
fault tolerance.
Answer:
RAID 1 is a RAID level that uses mirroring to create an exact copy of the data on another disk. RAID 1
requires at least two disks, and it provides high reliability and availability, as the data can be accessed
from either disk if one fails. However, RAID 1 does not provide any performance improvement, and it
has a high storage overhead, as it duplicates the data. In the diagram, RAID 1 is represented by two
disks with identical data (123456789).
RAID 0 is a RAID level that uses striping to divide the data into blocks and spread them across
multiple disks. RAID 0 requires at least two disks, and it provides high performance and speed, as the
data can be read or written in parallel from multiple disks. However, RAID 0 does not provide any
fault tolerance or redundancy, and it has a high risk of data loss, as the failure of any disk will result in
the loss of the entire data. In the diagram, RAID 0 is represented by two disks with data split between
them (123 and 456789).
RAID 5 is a RAID level that uses striping with parity to distribute the data and the parity information
across multiple disks. RAID 5 requires at least three disks, and it provides a balance of performance,
reliability, and capacity, as the data can be read or written in parallel from multiple disks, and the
data can be recovered from the parity information if one disk fails. However, RAID 5 has a
performance penalty for write operations, as it requires extra calculations and disk operations to
update the parity information. In the diagram, RAID 5 is represented by three disks where data is
striped across two disks (123 and 789), and the third disk contains parity information (P(456+789)
and P(123+456)).
RAID 10 is a RAID level that combines RAID 1 and RAID 0, meaning that it uses mirroring and striping
to create a nested array of disks. RAID 10 requires at least four disks, and it provides high
performance, reliability, and availability, as the data can be read or written in parallel from multiple
mirrored disks, and the data can be accessed from either disk if one fails. However, RAID 10 has a
high storage overhead, as it duplicates the data, and it requires more disks and controllers to
implement. In the diagram, RAID 10 is represented by four disks combining both mirroring and
striping techniques (123 and 123, 456789 and 456789).
Reference:
[RAID]
[RAID Levels Explained]
[RAID 0, RAID 1, RAID 5, RAID 10 Explained with Diagrams]
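The parity technique described above can be illustrated with a single three-disk RAID 5 stripe: the parity block is the XOR of the two data blocks, so either data block can be rebuilt from the surviving block and the parity. This is a simplified sketch under stated assumptions (one stripe, fixed layout, equal-length blocks), not a real RAID implementation, and the function names are illustrative.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_stripe(data: bytes):
    """Split data into two data blocks plus a parity block (one 3-disk stripe)."""
    half = len(data) // 2
    d0, d1 = data[:half], data[half:half * 2]
    parity = xor_blocks(d0, d1)   # parity = d0 XOR d1
    return d0, d1, parity

def raid5_recover(surviving: bytes, parity: bytes) -> bytes:
    # XOR of the surviving data block with the parity reconstructs the lost block,
    # because surviving XOR (d0 XOR d1) cancels the surviving block's bits.
    return xor_blocks(surviving, parity)
```

Striping the example file "12345678" gives blocks "1234" and "5678"; if the disk holding "1234" fails, XOR-ing "5678" with the parity block yields "1234" again, which is the fault-tolerance property parity provides.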
Which of the following media is LEAST problematic with data remanence?
A. Dynamic Random Access Memory (DRAM)
B. Electrically Erasable Programmable Read-Only Memory (EEPROM)
C. Flash memory
D. Magnetic disk
Explanation:
Dynamic Random Access Memory (DRAM) is the least problematic with data remanence. Data
remanence is the residual representation of data that remains on a storage medium after it has been
erased or overwritten. Data remanence poses a security risk, as it may allow unauthorized access or
recovery of sensitive data. DRAM is a type of volatile memory that requires constant power to retain
data. Once the power is turned off, the data stored in DRAM is quickly lost, making it difficult to
recover or analyze. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4:
Communication and Network Security, page 160. CISSP Testking ISC Exam Questions, Question 10.
Question: 133
Question: 134
Answer: A
Which of the following needs to be taken into account when assessing vulnerability?
A. Risk identification and validation
B. Threat mapping
C. Risk acceptance criteria
D. Safeguard selection
Organization A is adding a large collection of confidential data records that it received when it
acquired Organization B to its data store. Many of the users and staff from Organization B are no
longer available. Which of the following MUST Organization A do to properly classify and secure the
acquired data?
A. Assign data owners from Organization A to the acquired data.
B. Create placeholder accounts that represent former users from Organization B.
C. Archive audit records that refer to users from Organization A.
D. Change the data classification for data acquired from Organization B.
Explanation:
Risk identification and validation are the factors that need to be taken into account when assessing
vulnerability. A vulnerability is a weakness or a flaw in a system or an application that can be
exploited by an attacker to compromise the security or the functionality of the system or the
application. Vulnerability assessment is the process of identifying, analyzing, and evaluating the
vulnerabilities that may affect the system or the application. Vulnerability assessment is part of the
risk management process, which is the process of identifying, assessing, and mitigating the risks that
may affect the organization’s information systems and assets. Risk identification and validation are
the steps in the risk management process that involve identifying the potential sources and causes of
risk, such as threats, vulnerabilities, and impacts, and validating the accuracy and the relevance of
the risk information. Risk identification and validation can help determine the scope and the priority
of the vulnerability assessment, and ensure that the vulnerability assessment results are consistent
and reliable. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk
Management, page 5. CISSP Practice Exam – FREE 20 Questions and Answers, Question 16.
Question: 135
Answer: A
Which of the following is considered the PRIMARY security issue associated with encrypted e-mail
messages?
A. Key distribution
B. Storing attachments in centralized repositories
C. Scanning for viruses and other malware
D. Greater costs associated for backups and restores
Explanation:
Encrypted e-mail messages are e-mail messages that are protected by encryption, which is a
method of transforming the plaintext into ciphertext, using a secret key and an algorithm. Encryption
ensures the confidentiality, integrity, and authenticity of the e-mail messages, as only the authorized
parties can decrypt and read the messages, and any modification or forgery of the messages can be
detected. The primary security issue associated with encrypted e-mail messages is key distribution,
which is the process of securely exchanging the secret keys between the sender and the receiver of
the e-mail messages. Key distribution is challenging, as it requires a secure and reliable channel, a
trusted third party, or a public key infrastructure (PKI) to ensure that the keys are not compromised,
intercepted, or tampered with. If the keys are not distributed properly, the encrypted e-mail
messages may not be decrypted or verified by the intended parties, or may be decrypted or forged
by the unauthorized parties. Storing attachments in centralized repositories is not a security issue
associated with encrypted e-mail messages, as it is a method of reducing the size and the bandwidth
of the e-mail messages, by storing the attachments in a cloud service or a file server, and sending
only the links to the attachments in the e-mail messages. Scanning for viruses and other malware is
not a security issue associated with encrypted e-mail messages, as it is a method of detecting and
Explanation:
Data ownership is a key concept in data security and classification. Data owners are responsible for
defining the value, sensitivity, and classification of the data, as well as the access rights and controls
for the data. When Organization A acquires data from Organization B, it should assign data owners
from its own organization to the acquired data, so that they can properly classify and secure the data
according to Organization A’s policies and standards. Creating placeholder accounts, archiving audit
records, or changing the data classification are not sufficient or necessary steps to ensure the security
of the acquired data. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset
Security, page 67; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 2: Asset Security,
Question 2.4, page 76.
Question: 136
Answer: A
Answer: A
The Rivest-Shamir-Adleman (RSA) algorithm is BEST suited for which of the following operations?
A. Bulk data encryption and decryption
B. One-way secure hashing for user and message authentication
C. Secure key exchange for symmetric cryptography
D. Creating digital checksums for message integrity
Explanation:
A security professional has been requested by the Board of Directors and Chief Information Security
Officer (CISO) to perform an internal and external penetration test. A penetration test is a type of
security assessment that simulates a real-world attack on a system or a network, to identify and
exploit the vulnerabilities or weaknesses that may compromise the security. An internal penetration
test is performed from within the system or the network, to assess the security from the perspective
of an authorized user or an insider. An external penetration test is performed from outside the
system or the network, to assess the security from the perspective of an unauthorized user or an
outsider. The best course of action for the security professional is to review corporate security
policies and procedures, before performing the penetration test. The corporate security policies and
procedures are the documents that define the security goals, objectives, standards, and guidelines of
the organization, and that specify the roles, responsibilities, and expectations of the security
personnel and the stakeholders. The review of the corporate security policies and procedures will
help the security professional to understand the scope, objectives, and methodology of the
penetration test, and to ensure that the penetration test is aligned with the organization’s security
requirements and compliance. The review of the corporate security policies and procedures will also
help the security professional to obtain the necessary authorization, approval, and consent from the
organization and the stakeholders, to perform the penetration test legally and ethically. Reviewing
data localization requirements and regulations is not the best course of action for the security
professional, as it is the process of identifying and complying with the laws and regulations that
govern the collection, storage, and processing of the data in different jurisdictions. Reviewing data
localization requirements and regulations is important for the security professional, but it is not the
removing the malicious code that may be embedded in the e-mail messages or the attachments.
Greater costs associated with backups and restores are not a security issue associated with encrypted
e-mail messages, as backup and restore is a method of preserving and recovering the e-mail messages or the
attachments in case of a data loss or a disaster. Reference: Official (ISC)2 Guide to the CISSP CBK, Fifth
Edition, Chapter 3: Security Engineering, page 105. CISSP All-in-One Exam Guide, Eighth Edition,
Chapter 4: Cryptography and Symmetric Key Algorithms, page 204.
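One way to see why key distribution is the primary issue: with purely symmetric encryption, every pair of correspondents needs its own shared secret, so the number of keys to distribute grows quadratically, whereas a PKI needs only one key pair per party. A small illustrative calculation (the function names are assumptions for the example):

```python
def pairwise_symmetric_keys(n: int) -> int:
    # Every pair of correspondents must share a distinct secret key: n*(n-1)/2.
    return n * (n - 1) // 2

def pki_key_pairs(n: int) -> int:
    # With a PKI, each party publishes one public key and keeps one private key.
    return n
```

For 100 e-mail users, that is 4,950 shared secrets to exchange securely versus 100 key pairs, which is why secure channels, trusted third parties, or a PKI are needed for key distribution.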
Question: 137
Answer: C
The disaster recovery (DR) process should always include
A. plan maintenance.
B. periodic vendor review.
C. financial data analysis.
D. periodic inventory review.
Explanation:
The disaster recovery (DR) process should always include plan maintenance. Plan maintenance is the
process of updating, reviewing, testing, and improving the DR plan to ensure its effectiveness and
first step before performing the penetration test. Reviewing data localization requirements and
regulations is more relevant for the data protection and privacy aspects of the security, not for the
penetration testing aspects of the security. The answer choice that combines giving notice with
configuring a Wireless Access Point (WAP) with the same Service Set Identifier (SSID) before the
external test is not a valid option, as it is not a coherent or
meaningful sentence. Configuring a Wireless Access Point (WAP) with the same Service Set Identifier
(SSID) is a process of setting up a wireless network device with a network name, to allow wireless
devices to connect to the network. This has nothing to do with performing a penetration test, or with
giving notice to the organization or the stakeholders. With notice to the organization, perform an
external penetration test first, then an internal test is not the best course of action for the security
professional, as it is not the first step before performing the penetration test. Giving notice to the
organization is important for the security professional, as it informs the organization and the
stakeholders about the purpose, scope, and timing of the penetration test, and it also helps to avoid
any confusion, disruption, or conflict with the normal operations of the system or the network.
However, giving notice to the organization is not the first step before performing the penetration
test, as the security professional should first review the corporate security policies and procedures,
and obtain the necessary authorization, approval, and consent from the organization and the
stakeholders. Performing an external penetration test first, then an internal test is not the best
course of action for the security professional, as it is not the first step before performing the
penetration test. Performing an external penetration test first, then an internal test is a possible way
of conducting the penetration test, but it is not the only way. The order and the method of
performing the penetration test may vary depending on the objectives, scope, and methodology of
the penetration test, and the security professional should follow the corporate security policies and
procedures, and the best practices and standards of the penetration testing
industry. Reference: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security
Assessment and Testing, page 291. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security
Assessment and Testing, page 353.
Question: 138
Answer: A
A developer begins employment with an information technology (IT) organization. On the first day,
the developer works through the list of assigned projects and finds that some files within those
projects aren't accessible. Other developers working on the same project have no trouble locating
and working on them. What is the MOST likely explanation for the discrepancy in access?
A. The IT administrator had failed to grant the developer privileged access to the servers.
B. The project files were inadvertently deleted.
C. The new developer's computer had not been added to an access control list (ACL).
D. The new developer's user account was not associated with the right roles needed for the projects.
efficiency. Plan maintenance is essential for the DR process, as it helps to keep the DR plan aligned
with the current business needs, objectives, and environment, as well as the best practices and
standards. Plan maintenance also helps to identify and resolve any gaps, issues, or weaknesses in the
DR plan, as well as to incorporate any feedback, lessons learned, or changes from the previous DR
tests or events. Plan maintenance should be performed regularly, as well as after any significant
changes in the organization, such as new systems, applications, processes, or personnel. Periodic
vendor review, financial data analysis, and periodic inventory review are not activities that should
always be included in the DR process. Periodic vendor review is the process of evaluating the
performance, quality, and reliability of the vendors that provide services or products to the
organization, such as backup, recovery, or cloud services. Periodic vendor review is important for the
DR process, as it helps to ensure that the vendors meet the contractual obligations and service level
agreements (SLAs) of the organization, as well as to identify and mitigate any risks or issues
associated with the vendors. However, periodic vendor review is not a mandatory activity for the DR
process, as it depends on the organization’s reliance on external vendors for its DR strategy. Financial
data analysis is the process of examining, interpreting, and reporting the financial data of the
organization, such as revenue, expenses, assets, liabilities, or cash flow. Financial data analysis is
important for the DR process, as it helps to determine the budget, resources, and priorities for the
DR plan, as well as to measure the financial impact and return on investment (ROI) of the DR plan.
However, financial data analysis is not a mandatory activity for the DR process, as it depends on the
organization’s financial goals and constraints for its DR strategy. Periodic inventory review is the
process of verifying, updating, and documenting the inventory of the organization, such as hardware,
software, data, or supplies. Periodic inventory review is important for the DR process, as it helps to
ensure that the organization has the adequate and accurate inventory for its DR plan, as well as to
identify and address any inventory shortages, surpluses, or discrepancies. However, periodic
inventory review is not a mandatory activity for the DR process, as it depends on the organization’s
inventory management and control for its DR strategy. Reference: Official (ISC)2 CISSP CBK
Reference, Fifth Edition, Domain 7, Security Operations, page 734. CISSP All-in-One Exam Guide,
Eighth Edition, Chapter 7, Security Operations, page 696.
Question: 139
Which of the following is MOST likely the cause of the issue?
A. Channel overlap
B. Poor signal
C. Incorrect power settings
D. Wrong antenna type
A technician is troubleshooting a client's report about poor wireless performance. Using a client
monitor, the technician notes the following information:
Explanation:
The most likely explanation for the discrepancy in access is that the new developer’s user account
was not assigned the appropriate roles that correspond to the access rights for the project files. Roles
are a way of grouping users based on their functions or responsibilities within an organization, and
they can simplify the administration of access control policies. If the new developer’s user account
was not associated with the right roles, he or she would not be able to access the files that other
developers with the same roles can access. Reference: CISSP - Certified Information Systems Security
Professional, Domain 5. Identity and Access Management (IAM), 5.1 Control physical and logical
access to assets, 5.1.2 Manage identification and authentication of people, devices and services,
5.1.2.1 Identity management implementation; CISSP Exam Outline, Domain 5. Identity and Access
Management (IAM), 5.1 Control physical and logical access to assets, 5.1.2 Manage identification and
authentication of people, devices and services, 5.1.2.1 Identity management implementation
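The role-based reasoning above can be sketched as a minimal role-to-permission lookup; the role, user, and permission names below are hypothetical, used only to illustrate why an account that has not been associated with the right roles is denied access even though peer accounts are not.

```python
# Role -> permissions, and user -> roles; all names are illustrative.
ROLE_PERMISSIONS = {
    "project_alpha_dev": {"read:project_alpha", "write:project_alpha"},
}
USER_ROLES = {
    "existing_dev": {"project_alpha_dev"},
    "new_dev": set(),  # account exists, but no roles assigned yet
}

def can_access(user: str, permission: str) -> bool:
    # A user holds a permission only through some role that grants it.
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

Adding the new developer to the `project_alpha_dev` role, rather than granting file-by-file access, is what makes role-based administration simpler than per-user access control lists.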
Explanation:
The most likely cause of the issue is channel overlap. Channel overlap occurs when multiple wireless
access points (WAPs) use the same or adjacent frequency channels, causing interference and
degradation of the wireless signal. The image shows that there are four WAPs with the same SSID
(Corporate) using channels 9, 10, 11, and 6. These channels are too close to each other and overlap in
the 2.4 GHz band, resulting in poor wireless performance.
Question: 140
Answer: A
Answer: D
Why is authentication by ownership stronger than authentication by knowledge?
A. It is easier to change.
B. It can be kept on the user's person.
C. It is more difficult to duplicate.
D. It is simpler to control.
Explanation:
Authentication by ownership is stronger than authentication by knowledge because it is more
difficult to duplicate. Authentication by ownership is a type of authentication that relies on
something that the user possesses, such as a smart card, a token, or a biometric feature.
Authentication by knowledge is a type of authentication that relies on something that the user
knows, such as a password, a PIN, or a security question. Authentication by ownership is more
difficult to duplicate than authentication by knowledge, as it requires physical access, specialized
equipment, or sophisticated techniques to copy or forge the authentication factor. Authentication by
knowledge is easier to duplicate than authentication by ownership, as it may be guessed, cracked, or
stolen by various methods, such as brute force, social engineering, or phishing. Authentication by
ownership is not necessarily easier to change, simpler to control, or more convenient to keep on the
user’s person than authentication by knowledge, as these factors may depend on the specific
implementation of the authentication factor.
The issue can be resolved by changing the channels of the WAPs to non-overlapping ones, such as 1,
6, and 11. Reference: [CISSP - Certified Information Systems Security Professional], Domain 4.
Communication and Network Security, 4.1 Implement secure design principles in network
architectures, 4.1.3 Secure network components,
4.1.3.1 Wireless access points; [CISSP Exam Outline], Domain 4. Communication and Network
Security, 4.1 Implement secure design principles in network architectures, 4.1.3 Secure network
components, 4.1.3.1 Wireless access points
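The channel-spacing rule behind this answer can be checked numerically: 2.4 GHz channel centers sit 5 MHz apart while each channel occupies roughly 20-22 MHz, so channels fewer than five numbers apart overlap. A small illustrative sketch (the function names are assumptions for the example):

```python
def channels_overlap(c1: int, c2: int) -> bool:
    # 2.4 GHz channel centers are 5 MHz apart but each channel is ~20-22 MHz
    # wide, so channels fewer than 5 numbers apart interfere with each other.
    return abs(c1 - c2) < 5

def any_overlap(channels) -> bool:
    # True if any pair of deployed channels overlaps.
    return any(
        channels_overlap(a, b)
        for i, a in enumerate(channels)
        for b in channels[i + 1:]
    )
```

The deployed set [9, 10, 11, 6] contains overlapping pairs, while the recommended set [1, 6, 11] does not, which is why moving the WAPs to channels 1, 6, and 11 resolves the interference.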
Reference: CISSP - Certified Information Systems Security Professional, Domain 8. Software
Development Security, 8.1 Understand and integrate security in the software development life cycle,
8.1.1 Identify and apply security controls in development environments, 8.1.1.2 Security of the
software environments; CISSP Exam Outline, Domain 8. Software Development Security, 8.1
Understand and integrate security in the software development life cycle, 8.1.1 Identify and apply
security controls in development environments, 8.1.1.2 Security of the software environments
Question: 141
Answer: C
An organization's retail website provides its only source of revenue, so the disaster recovery plan
(DRP) must document an estimated time for each step in the plan.
Which of the following steps in the DRP will list the GREATEST duration of time for the service to be
fully operational?
A. Update the Network Address Translation (NAT) table.
B. Update Domain Name System (DNS) server addresses with domain registrar.
C. Update the Border Gateway Protocol (BGP) autonomous system number.
D. Update the web server network adapter configuration.
Explanation:
The step in the disaster recovery plan (DRP) that will list the greatest duration of time for the service to be fully operational is updating the Domain Name System (DNS) server addresses with the domain registrar. DNS is a system that translates domain names, such as www.example.com, into IP addresses, such as 192.168.1.1, and vice versa, enabling users to access websites or services by human-readable names rather than numerical addresses. A domain registrar is an entity that manages the registration and reservation of domain names and maintains the records of the domain names and their corresponding DNS servers. A DNS server stores and provides the DNS records for a domain name, such as the IP address, the mail server, or the name server. In a disaster recovery scenario, where the primary website or service is unavailable or inaccessible due to a disaster, such as a fire, a flood, or a cyberattack, the DRP may involve switching to a backup or alternate website or service hosted in a different location or with a different provider. To do that, the DRP must update the DNS server addresses with the domain registrar, so that the domain name of the website or service points to the new IP address of the backup or alternate site. This step may take a long time, because it depends on the propagation of the updated DNS records across the internet, which may vary from a few minutes to several days depending on the time-to-live (TTL) values of the cached records.
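To make the propagation delay concrete, here is a minimal, illustrative Python sketch (the domain name and IP addresses are placeholders, and this is a toy model, not a real resolver): a caching resolver keeps serving the old record until its TTL expires, which is why a DNS cutover in a DRP is not instantaneous.

```python
class CachingResolver:
    """Toy DNS resolver that caches records until their TTL expires."""
    def __init__(self, authoritative):
        self.authoritative = authoritative  # name -> (ip, ttl_seconds)
        self.cache = {}                     # name -> (ip, expiry_timestamp)

    def resolve(self, name, now):
        entry = self.cache.get(name)
        if entry and now < entry[1]:
            return entry[0]                 # serve the (possibly stale) cached answer
        ip, ttl = self.authoritative[name]  # otherwise fetch from authority
        self.cache[name] = (ip, now + ttl)
        return ip

# hypothetical DR cutover: the registrar record changes at t=0, but a resolver
# that already cached the old IP keeps returning it until the TTL runs out
resolver = CachingResolver({"shop.example.com": ("192.0.2.10", 3600)})
print(resolver.resolve("shop.example.com", now=0))     # 192.0.2.10 (now cached)
resolver.authoritative["shop.example.com"] = ("198.51.100.20", 3600)
print(resolver.resolve("shop.example.com", now=1800))  # still 192.0.2.10
print(resolver.resolve("shop.example.com", now=3601))  # 198.51.100.20 after expiry
```

The same effect, multiplied across millions of independent resolvers with their own caches, is what makes this the longest step in the plan.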
Question: 142
Answer: B
A cybersecurity engineer has been tasked to research and implement an ultra-secure communications channel to protect the organization's most valuable intellectual property (IP). The primary directive in this initiative is to ensure there is no possible way the communications can be intercepted without detection. Which of the following is the only way to ensure this outcome?
A. Diffie-Hellman key exchange
B. Symmetric key cryptography
C. Public key infrastructure (PKI)
D. Quantum Key Distribution
Explanation:
The only way to ensure an ultra-secure communications channel that cannot be intercepted without
detection is to use Quantum Key Distribution (QKD). QKD is a technique that uses the principles of
quantum mechanics to generate and exchange cryptographic keys between two parties. QKD relies
on the properties of quantum particles, such as photons or electrons, to encode and transmit the
keys. QKD offers the following advantages for securing communications:
It provides unconditional security, as the keys are generated and exchanged in a random and
unpredictable manner, and cannot be computed or guessed by any algorithm or attacker.
It ensures perfect secrecy, as the keys are used only once and then discarded, and cannot be reused
or intercepted by any eavesdropper.
It enables detection of intrusion, as any attempt to observe or measure the quantum particles will
alter their state and introduce errors or anomalies in the communication, which can be noticed and
reported by the legitimate parties. QKD is currently limited by the distance, speed, and cost of the
quantum communication channels, but it is expected to become more feasible and widespread in
BGP is a protocol that exchanges or advertises the routing information or the paths between
different autonomous systems or networks on the internet, such as ISPs, cloud providers, or
enterprises. BGP enables the optimal and efficient routing of the network traffic across the internet.
A web server network adapter is a hardware device that connects the web server to the network,
and that enables the web server to send or receive the network packets, such as HTTP requests or
responses. Updating the NAT table, the BGP autonomous system number, or the web server network
adapter configuration may be part of the DRP, but they will not list the greatest duration of time for
the service to be fully operational, as they can be done quickly or locally, and they do not depend on
the propagation or the update of the DNS records across the internet. Reference: Official (ISC)2 Guide
to the CISSP CBK, Fifth Edition, Chapter 19: Security Operations, page 1869.
Question: 143
Answer: D
Which of the following BEST obtains an objective audit of security controls?
A. The security audit is measured against a known standard.
B. The security audit is performed by a certified internal auditor.
C. The security audit is performed by an independent third-party.
D. The security audit produces reporting metrics for senior leadership.
the future, especially with the development of quantum networks and quantum computers.
Reference: CISSP All-in-One Exam Guide, Chapter 4: Communication and Network Security,
Section: Quantum Cryptography, pp. 252-253.
At what stage of the Software Development Life Cycle (SDLC) does software vulnerability remediation MOST likely cost the least to implement?
A. Development
B. Testing
C. Deployment
D. Design
Explanation:
Software vulnerability remediation is the process of identifying and fixing the weaknesses or flaws in a software application or system that could be exploited by attackers. Remediation is most likely to cost the least at the design stage of the SDLC, the phase where the requirements and specifications of the software are defined and its architecture and components are designed. At this stage, developers can apply security principles and best practices, such as secure by design, secure by default, and secure coding, to prevent or minimize the introduction of vulnerabilities. Remediation at the design stage is also easier and cheaper than at later stages, such as development, testing, or deployment, because it does not require modifying or rewriting existing code, which could introduce new errors or affect the functionality or performance of the software. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, pp. 2021-2022; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, pp. 1395-1396.
Question: 144
Question: 145
Answer: D
Which of the following is the MOST effective way to ensure the endpoint devices used by remote
users are compliant with an organization's approved policies before being allowed on the network?
A. Group Policy Object (GPO)
B. Network Access Control (NAC)
C. Mobile Device Management (MDM)
D. Privileged Access Management (PAM)
Explanation:
The best option that obtains an objective audit of security controls is to have the security audit
performed by an independent third-party. An independent third-party is an entity that is not
affiliated with or influenced by the organization or the system that is being audited, and that has the
expertise and credibility to conduct the security audit. An independent third-party can provide an
unbiased and impartial assessment of the security controls, and identify the strengths and
weaknesses of the system or network. An independent third-party can also provide
recommendations and best practices for improving the security posture of the system or network.
The other options are not as effective, because they may not be objective, consistent, or comprehensive in their audit of security controls. Reference: CISSP CBK, Fifth Edition, Chapter 1, page 49; CISSP Practice Exam – FREE 20 Questions and Answers, Question 19.
Explanation:
The most effective way to ensure the endpoint devices used by remote users are compliant with an
organization’s approved policies before being allowed on the network is to use Network Access
Control (NAC). NAC is a security technique that involves verifying and enforcing the compliance of
the endpoint devices with the security policies and standards of the organization, before granting
them access to the network. NAC can check the attributes and characteristics of the endpoint
devices, such as device type, operating system, IP address, MAC address, or user identity, and
compare them with the predefined criteria and rules. NAC can also perform the network access
authentication and authorization, and the network health and compliance checks, such as antivirus,
firewall, or patch status. NAC can help to ensure the endpoint devices used by remote users are compliant with an organization's approved policies, as it can prevent or restrict the access of any non-compliant or unauthorized endpoint devices, and reduce the security risks and vulnerabilities of the network. Reference: CISSP CBK, Fifth Edition, Chapter 4, page 378; CISSP Practice Exam – FREE 20 Questions and Answers, Question 19.
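A NAC posture check boils down to comparing device attributes against policy rules before granting access. The following is a minimal sketch of that idea; the policy names, thresholds, and device records are hypothetical and not taken from any real NAC product.

```python
# hypothetical policy: attribute names and thresholds are illustrative only
POLICY = {
    "antivirus_enabled": lambda v: v is True,
    "firewall_enabled":  lambda v: v is True,
    "os_patch_level":    lambda v: v >= 202406,   # minimum patch baseline (YYYYMM)
}

def nac_posture_check(device):
    """Return (admit, failures): admit the device only if every rule passes."""
    failures = [rule for rule, check in POLICY.items()
                if not check(device.get(rule))]
    return (len(failures) == 0, failures)

compliant = {"antivirus_enabled": True, "firewall_enabled": True, "os_patch_level": 202407}
rogue     = {"antivirus_enabled": False, "firewall_enabled": True, "os_patch_level": 202301}
print(nac_posture_check(compliant))  # (True, [])
print(nac_posture_check(rogue))      # (False, ['antivirus_enabled', 'os_patch_level'])
```

Real NAC products layer this check on top of 802.1X authentication and can quarantine failing devices into a remediation network rather than simply denying access.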
Question: 146
Answer: B
Answer: C
Which of the following is the MAIN benefit of off-site storage?
A. Cost effectiveness
B. Backup simplicity
C. Fast recovery
D. Data availability
Which of the following would an information security professional use to recognize changes to
content, particularly unauthorized changes?
A. File Integrity Checker
B. Security information and event management (SIEM) system
C. Audit Logs
D. Intrusion detection system (IDS)
Explanation:
The main benefit of off-site storage is data availability. Off-site storage is a technique that involves storing backup data or copies of data in a different location than the primary data source, such as a remote data center, a cloud storage service, or a tape vault.
Explanation:
The tool that an information security professional would use to recognize changes to content,
particularly unauthorized changes, is a File Integrity Checker. A File Integrity Checker is a type of
security tool that monitors and verifies the integrity and authenticity of the files or content, by
comparing the current state or version of the files or content with a known or trusted baseline or
reference, using various methods, such as checksums, hashes, or signatures. A File Integrity Checker
can recognize changes to content, particularly unauthorized changes, by detecting and reporting any
discrepancies or anomalies between the current state or version and the baseline or reference, such
as the addition, deletion, modification, or corruption of the files or content. A File Integrity Checker
can help to prevent or mitigate unauthorized changes to content by alerting the information security professional and by restoring the files or content to the original or desired state or version. Reference: CISSP CBK, Fifth Edition, Chapter 3, page 245; 100 CISSP Questions, Answers and Explanations, Question 18.
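The baseline-comparison mechanism described above can be sketched in a few lines with Python's standard hashlib module (the file path and contents here are made up for illustration): record a SHA-256 hash of the file, then re-hash later and compare.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash a file in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# build a baseline, tamper with the file, then re-check it
path = os.path.join(tempfile.mkdtemp(), "config.txt")
with open(path, "w") as f:
    f.write("permit admin from 10.0.0.0/8\n")
baseline = sha256_of(path)

with open(path, "a") as f:                 # simulated unauthorized change
    f.write("permit attacker from 0.0.0.0/0\n")

print(sha256_of(path) == baseline)         # False - the change is detected
```

Production file integrity monitors (Tripwire-style tools) do essentially this across the whole filesystem, storing the baseline hashes in a protected database so an attacker cannot update them along with the files.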
Question: 147
Question: 148
Answer: C
Answer: D
Which of the following attack types can be used to compromise the integrity of data during
transmission?
A. Keylogging
B. Packet sniffing
C. Synchronization flooding
D. Session hijacking
Explanation:
Packet sniffing is a type of attack that involves intercepting and analyzing the network traffic that is
transmitted between hosts. Packet sniffing can be used to compromise the integrity of data during
transmission, as the attacker can modify, delete, or inject packets into the network stream. Packet
sniffing can also be used to compromise the confidentiality and availability of data, as the attacker
can read, copy, or block packets. Keylogging, synchronization flooding, and session hijacking are all
types of attacks, but they do not directly affect the integrity of data during transmission. Keylogging
is a type of attack that involves capturing and recording the keystrokes of a user on a device.
Synchronization flooding is a type of attack that involves sending a large number of SYN packets to a
target host, causing it to exhaust its resources and deny service to legitimate requests. Session
hijacking is a type of attack that involves taking over an existing session between a user and a web
service, and impersonating the user or the service.
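Since an attacker positioned on the network path can alter packets in transit, the standard integrity defense is a message authentication code (MAC). Here is a short sketch using Python's standard hmac module; the key and messages are illustrative placeholders, and real systems would establish the key through a secure exchange rather than hard-coding it.

```python
import hashlib
import hmac

KEY = b"shared-secret-key"   # illustrative only; never hard-code real keys

def send(message: bytes):
    """Sender appends an HMAC-SHA256 tag computed over the message."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message, tag

def receive(message: bytes, tag: bytes) -> bool:
    """Receiver recomputes the tag; compare_digest resists timing attacks."""
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"transfer $100 to account 42")
print(receive(msg, tag))                         # True  - message intact
tampered = msg.replace(b"$100", b"$999")         # in-transit modification
print(receive(tampered, tag))                    # False - tampering detected
```

Protocols such as TLS and IPsec apply this same idea per record or per packet, so injected or modified traffic fails verification at the receiver.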
Off-site storage can improve data availability, which is the ability to access or use the data when needed, by providing an alternative source of data in case of a disaster or an outage that affects the primary data source. Off-site storage can also protect the data from theft, fire, flood, or other physical threats that may occur at the primary data location. The other options are not the main benefits of off-site storage. Cost effectiveness is not a benefit, as off-site storage may incur additional costs for the transportation, maintenance, or security of the data. Backup simplicity is not a benefit, as off-site storage may require more planning, coordination, or synchronization of the data. Fast recovery is not a benefit, as recovery time may depend on the distance, the bandwidth, or the format of the data. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 1013; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations, page 1019.
Question: 149
Question: 150
Answer: B
Spyware is BEST described as
A. Data mining for advertising.
B. A form of cyber-terrorism.
C. An information gathering technique.
D. A web-based attack.
Explanation:
Spyware is a type of malicious software that covertly collects and transmits information about the
user’s activities, preferences, or behavior, without the user’s knowledge or consent. Spyware is best
described as data mining for advertising, as the main purpose of spyware is to gather data that can
be used for targeted marketing or advertising campaigns. Spyware can also compromise the security
and privacy of the user, as it can expose sensitive or personal data, consume network bandwidth, or
degrade system performance. Spyware is not a form of cyber-terrorism, as it does not intend to
cause physical harm, violence, or fear. Spyware is not an information gathering technique, as it is not
a legitimate or ethical method of obtaining data. Spyware is not a web-based attack, as it does not
exploit the vulnerabilities of the web applications or protocols, but rather the vulnerabilities of the
user’s system or browser.
Answer: A
[Limited Time Offer] Use Coupon "cert20" for an extra 20% discount on the purchase of the PDF file. Test your CISSP preparation with actual exam questions:
https://www.certifiedumps.com/isc2/cissp-dumps.html
Thank You for trying CISSP PDF Demo
Start Your CISSP Preparation
More Related Content

PDF
wgu d488 (Cybersecurity Architecture and Engineering)
PDF
CISSP Exam Practice Questions Domain 1 to 4.pdf
PDF
InfosecTrain CISSP Exam Practice Questions and Answers domain 1 to 4
PDF
Master the top CISSP Practice Questions for Domains 1-4.pdf
PDF
Ready to conquer the CISSP exam? Master the top practice questions for Domain...
PDF
How to Pass CAS-005 in 2025: Expert Tips & Updated Objectives
PDF
Top-Rated CAS-005 Practice Strategy for 2025 Candidates
PDF
Get CompTIA Project+ PK0-005 Certified Quickly with Reliable and Verified Dum...
wgu d488 (Cybersecurity Architecture and Engineering)
CISSP Exam Practice Questions Domain 1 to 4.pdf
InfosecTrain CISSP Exam Practice Questions and Answers domain 1 to 4
Master the top CISSP Practice Questions for Domains 1-4.pdf
Ready to conquer the CISSP exam? Master the top practice questions for Domain...
How to Pass CAS-005 in 2025: Expert Tips & Updated Objectives
Top-Rated CAS-005 Practice Strategy for 2025 Candidates
Get CompTIA Project+ PK0-005 Certified Quickly with Reliable and Verified Dum...

Similar to Master CISSP in 2025: Practice with Purpose, Pass with Confidence (20)

PDF
Audit fieldwork
PDF
Dumpscafe CompTIA Security+ SY0-701 Exam Dumps
PDF
CAS-005 CompTIA SecurityX Certification Dumps PDF.pdf
PDF
Master the CSSLP Exam with 2025 Practice Questions & Strategy
PPTX
Data Centers In US
PDF
Rethinking Data Protection Strategies 1st Edition by Aberdeen group
PDF
CISSP Exam Practice Domai 1 to 6 𝐌𝐚𝐬𝐭𝐞𝐫 𝐭𝐡𝐞 𝐭𝐨𝐩 𝐩𝐫𝐚𝐜𝐭𝐢𝐜𝐞 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬 𝐟𝐨𝐫 𝐃𝐨𝐦𝐚𝐢𝐧𝐬
PDF
Master the top practice questions for CISSP.pdf
PDF
CISSP Exam Practice Questions & Answers.pdf
PDF
CISSP Exam Practice Questions & Answers.pdf
PDF
CISSP Exam Practice Questions and Answers Domains 5-8
PDF
Rethinking Data Protection Strategies 1st Edition by Aberdeen group
PDF
Packet capture and network traffic analysis
PDF
AcceleTest HIPAA Whitepaper
PDF
Rethinking Data Protection Strategies 1st Edition by Aberdeen group
PDF
Top CompTIA Security+ Exam Practice Questions and Answers..pdf
PDF
Top compTIA Security+ Exam Practice Questions and Answers
PDF
Top Exam Practice Questions and Answers Comptia Security Plus
PDF
Ready to take on the CompTIA Security+ certification exam (SY0-701)?
PDF
Top CompTIA Security+ Exam Practice Questions and Answers.pdf
Audit fieldwork
Dumpscafe CompTIA Security+ SY0-701 Exam Dumps
CAS-005 CompTIA SecurityX Certification Dumps PDF.pdf
Master the CSSLP Exam with 2025 Practice Questions & Strategy
Data Centers In US
Rethinking Data Protection Strategies 1st Edition by Aberdeen group
CISSP Exam Practice Domai 1 to 6 𝐌𝐚𝐬𝐭𝐞𝐫 𝐭𝐡𝐞 𝐭𝐨𝐩 𝐩𝐫𝐚𝐜𝐭𝐢𝐜𝐞 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬 𝐟𝐨𝐫 𝐃𝐨𝐦𝐚𝐢𝐧𝐬
Master the top practice questions for CISSP.pdf
CISSP Exam Practice Questions & Answers.pdf
CISSP Exam Practice Questions & Answers.pdf
CISSP Exam Practice Questions and Answers Domains 5-8
Rethinking Data Protection Strategies 1st Edition by Aberdeen group
Packet capture and network traffic analysis
AcceleTest HIPAA Whitepaper
Rethinking Data Protection Strategies 1st Edition by Aberdeen group
Top CompTIA Security+ Exam Practice Questions and Answers..pdf
Top compTIA Security+ Exam Practice Questions and Answers
Top Exam Practice Questions and Answers Comptia Security Plus
Ready to take on the CompTIA Security+ certification exam (SY0-701)?
Top CompTIA Security+ Exam Practice Questions and Answers.pdf
Ad

More from 24servicehub (20)

PDF
Pass AWS AIF-C01 Easily with Certifiedumps Real Practice Dumps
PDF
Get Ready to Pass the Cisco 300-701 SCOR Exam with Confidence in 2025
PDF
Pass Your Cisco 200-301 CCNA Exam in 2025 with Confidence
PDF
Pass ADM-201 Exam in 2025 with Updated Dumps – Certifiedumps
PDF
Pass CCST-Networking Exam in 2025 with Updated Dumps PDF
PDF
Pass the CompTIA Security+ SY0-701 Exam in 2025 with Confidence – Certifiedumps
PDF
SAA-C03 Exam Dumps for 2025 – Pass Your AWS Associate Exam on First Attempt
PDF
Pass the AZ-500 Exam Easily with Certifiedumps Updated Dumps and Practice Tests
PDF
Pass Cisco 350-601 DCCOR Exam with Certifiedumps – Real Dumps for Data Center...
PDF
Clear Cisco 200-901 DEVASC Exam with Certifiedumps – Trusted Dumps for Fast C...
PDF
"Pass Cisco 200-301 CCNA Exam with Certifiedumps – Verified Dumps for Guarant...
PDF
AZ-700: Comprehensive Guide to Designing and Implementing Microsoft Azure Net...
PDF
The AZ-104 exam certifies skills in managing Microsoft Azure cloud services, ...
PDF
Microsoft Azure AI Fundamentals: Introduction to AI Concepts and Azure AI Ser...
PDF
AI-102: Designing and Implementing Azure AI Solutions
PDF
Pass Amazon CLF-C02 Exam with Certifiedumps
PDF
Clear Amazon ANS-C01 Exam with Certifiedumps
PDF
Pass Amazon AIF-C01 Exam with Certifiedumps
PDF
Certifiedumps SOA-C02 Exam Dumps – Prepare for AWS Certified SysOps Administr...
PDF
Pass Cisco 200-301 CCNA Exam with Certifiedumps – Latest Dumps Cover Networki...
Pass AWS AIF-C01 Easily with Certifiedumps Real Practice Dumps
Get Ready to Pass the Cisco 300-701 SCOR Exam with Confidence in 2025
Pass Your Cisco 200-301 CCNA Exam in 2025 with Confidence
Pass ADM-201 Exam in 2025 with Updated Dumps – Certifiedumps
Pass CCST-Networking Exam in 2025 with Updated Dumps PDF
Pass the CompTIA Security+ SY0-701 Exam in 2025 with Confidence – Certifiedumps
SAA-C03 Exam Dumps for 2025 – Pass Your AWS Associate Exam on First Attempt
Pass the AZ-500 Exam Easily with Certifiedumps Updated Dumps and Practice Tests
Pass Cisco 350-601 DCCOR Exam with Certifiedumps – Real Dumps for Data Center...
Clear Cisco 200-901 DEVASC Exam with Certifiedumps – Trusted Dumps for Fast C...
"Pass Cisco 200-301 CCNA Exam with Certifiedumps – Verified Dumps for Guarant...
AZ-700: Comprehensive Guide to Designing and Implementing Microsoft Azure Net...
The AZ-104 exam certifies skills in managing Microsoft Azure cloud services, ...
Microsoft Azure AI Fundamentals: Introduction to AI Concepts and Azure AI Ser...
AI-102: Designing and Implementing Azure AI Solutions
Pass Amazon CLF-C02 Exam with Certifiedumps
Clear Amazon ANS-C01 Exam with Certifiedumps
Pass Amazon AIF-C01 Exam with Certifiedumps
Certifiedumps SOA-C02 Exam Dumps – Prepare for AWS Certified SysOps Administr...
Pass Cisco 200-301 CCNA Exam with Certifiedumps – Latest Dumps Cover Networki...
Ad

Recently uploaded (20)

PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PDF
Trump Administration's workforce development strategy
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PDF
O7-L3 Supply Chain Operations - ICLT Program
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PPTX
Final Presentation General Medicine 03-08-2024.pptx
PDF
FourierSeries-QuestionsWithAnswers(Part-A).pdf
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PDF
Classroom Observation Tools for Teachers
PPTX
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
PDF
GENETICS IN BIOLOGY IN SECONDARY LEVEL FORM 3
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PDF
Yogi Goddess Pres Conference Studio Updates
PDF
Module 4: Burden of Disease Tutorial Slides S2 2025
PDF
VCE English Exam - Section C Student Revision Booklet
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
Trump Administration's workforce development strategy
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
Pharmacology of Heart Failure /Pharmacotherapy of CHF
O7-L3 Supply Chain Operations - ICLT Program
Supply Chain Operations Speaking Notes -ICLT Program
Abdominal Access Techniques with Prof. Dr. R K Mishra
Final Presentation General Medicine 03-08-2024.pptx
FourierSeries-QuestionsWithAnswers(Part-A).pdf
Microbial diseases, their pathogenesis and prophylaxis
Classroom Observation Tools for Teachers
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
GENETICS IN BIOLOGY IN SECONDARY LEVEL FORM 3
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
Yogi Goddess Pres Conference Studio Updates
Module 4: Burden of Disease Tutorial Slides S2 2025
VCE English Exam - Section C Student Revision Booklet

Master CISSP in 2025: Practice with Purpose, Pass with Confidence

  • 1. Questions & Answers (Demo Version - Limited Content) ISC2 CISSP Exam Certified Information Systems Security Professional (CISSP) https://www.certifiedumps.com/isc2/cissp-dumps.html Thank you for Downloading CISSP exam PDF Demo Get Full File:
  • 2. Questions & Answers PDF Exam Topics Topic 1: Exam Pool A Topic 2: Exam Pool B Topic 3: Security Architecture and Engineering Topic 4: Communication and Network Security Topic 5: Identity and Access Management (IAM) Topic 6: Security Assessment and Testing Topic 7: Security Operations Topic 8: Software Development Security Topic 9: Exam Set A Topic 10: Exam Set B Topic 11: Exam Set C Topic 12: New Questions B Topic 13: NEW Questions C Total Questions (Demo) Number of Questions 9 6 5 6 4 4 11 6 14 14 12 30 30 150 Page 2 Exam Topics DemoVersion Breakdown www.certifiedumps.co m
  • 3. Questions & Answers PDF A. determine the risk of a business interruption occurring B. determine the technological dependence of the business processes C. Identify the operational impacts of a business interruption D. Identify the financial impacts of a business interruption All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that Explanation: A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that: Identify the operational impacts of a business interruption, such as loss of revenue, customer satisfaction, reputation, legal obligations, etc. Identify the financial impacts of a business interruption, such as direct and indirect costs, fines, penalties, etc. Determine the technological dependence of the business processes, such as hardware, software, Page 3 Version: 42.0 Topic 1, Exam Pool A Question: 1 Answer: A www.certifiedumps.com
  • 4. Which of the following actions will reduce risk to a laptop before traveling to a high risk area? A. Examine the device for physical tampering B. Implement more stringent baseline configurations C. Purge or re-image the hard disk drive D. Change access codes Explanation: Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re- imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction. The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not Questions & Answers PDF Page 4 network, data, etc. Establish the recovery time objectives (RTO) and recovery point objectives (RPO) for each business process, which indicate the maximum acceptable downtime and data loss, respectively. The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. 
The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level. Question: 2 Answer: C www.certifiedumps.com
  • 5. Explanation: Which of the following represents the GREATEST risk to data confidentiality? A. Network redundancies are not implemented B. Security awareness training is not completed C. Backup tapes are generated unencrypted D. Users have administrative privileges Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes. The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Network redundancies are not implemented will affect the availability and reliability of the network, but not necessarily the confidentiality of the data. Security awareness training is not completed will increase the likelihood of human errors or negligence that could compromise the data, but not as directly as generating backup tapes unencrypted. Users have administrative privileges will grant users more access and control over the system and the data, but not as widely as generating backup tapes unencrypted. Questions & Answers PDF Page 5 unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive. Question: 3 Question: 4 Answer: C www.certifiedumps.com
  • 6. Explanation: When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities. The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating. A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Planning (BCP). Which of the following failures should the IT manager be concerned with? Questions & Answers PDF Page 6 What is the MOST important consideration from a data security perspective when an organization plans to relocate? A. Ensure the fire prevention and detection systems are sufficient to protect personnel B. 
Review the architectural plans to determine how many emergency exits are present C. Conduct a gap analysis of a new facilities against existing security requirements D. Revise the Disaster Recovery and Business Continuity (DR/BC) plan Question: 5 Answer: C www.certifiedumps.com
  • 7. A. Application B. Storage C. Power D. Network Questions & Answers PDF When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined? A. Only when assets are clearly defined B. Only when standards are defined Explanation: A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan. A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes. However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored. 
Question: 6
Answer: A
C. Only when controls are put in place
D. Only when procedures are defined

Which of the following types of technologies would be the MOST cost-effective method to provide a

Explanation (Question 7):
When assessing an organization's security policy against ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. They define the minimum level of security that must be achieved by the organization, provide the basis for measuring compliance and performance, assign roles and responsibilities to different levels of management and staff, and specify reporting and escalation procedures. Management responsibilities are the duties and obligations managers have to ensure the effective and efficient execution of the security policy and standards: providing leadership, direction, support, and resources for the security program; establishing and communicating the security objectives and expectations; ensuring compliance with legal and regulatory requirements; monitoring and reviewing security performance and incidents; and initiating corrective and preventive actions when needed. Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. They also depend on the scope and complexity of the security policy and standards, which vary with the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined. The other options are not correct, as they are not prerequisites for defining management responsibilities.
Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures implemented to reduce security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform security tasks and activities, but they likewise do not determine the management responsibilities.

Question: 7
Answer: B
reactive control for protecting personnel in public areas?
A. Install mantraps at the building entrances
B. Enclose the personnel entry area with polycarbonate plastic
C. Supply a duress alarm for personnel exposed to the public
D. Hire a guard to protect the public area

Question: 8
Answer: C
Explanation:
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in an emergency, such as an attack, a robbery, or a medical condition. It can be activated by pressing a button, pulling a cord, or speaking a code word, and it can alert security personnel, law enforcement, or other responders to the location and nature of the emergency and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening. The other options are not as cost-effective, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects personnel from physical attacks, but it reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it involves paying wages, benefits, and training costs.

An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
A. Development, testing, and deployment
B. Prevention, detection, and remediation
C. People, technology, and operations
D. Certification, accreditation, and monitoring

Intellectual property rights are PRIMARILY concerned with which of the following?
A. Owner's ability to realize financial gain
B. Owner's ability to maintain copyright
C. Right of the owner to enjoy their creation
D. Right of the owner to control delivery method

Explanation (defense in depth question):
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders involved in the security process; they need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools used to implement security controls and measures; it needs to be selected, configured, updated, and monitored according to security standards and best practices. Operations are the policies, procedures, processes, and activities performed to achieve the security objectives and requirements; they need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency. The other options are not the primary elements of defense in depth, but rather phases, functions, or outcomes of the security process. Development, testing, and deployment are phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are functions of security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are outcomes of security evaluation, which describes how security is assessed and verified against criteria and standards.
Question: 9
Answer: C

Answer (intellectual property rights question): A
Which of the following is MOST important when assigning ownership of an asset to a department?
A. The department should report to the business owner
B. Ownership of the asset should be periodically reviewed
C. Individual accountability should be ensured
D. All members should be trained on their responsibilities

Explanation:
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling, and that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.

Explanation (intellectual property rights question):
Intellectual property rights are primarily concerned with the owner's ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that result from human creativity and innovation, such as inventions, designs, artworks, literature, music, and software. Intellectual property rights are the legal rights that grant the owner exclusive control over the use, reproduction, distribution, and modification of their intellectual property; they aim to protect the owner's interests and incentives, and to reward them for their contribution to society and the economy. The other options are not the primary concern of intellectual property rights, but rather secondary or incidental aspects of them. The owner's ability to maintain copyright is a means of enforcing intellectual property rights, not their end goal. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one.
The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.

Topic 2, Exam Pool B
Question: 10
Answer: C
Which one of the following affects the classification of data?
A. Assigned security label
B. Multilevel Security (MLS) architecture
C. Minimum query size
D. Passage of time

The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. Having the department report to the business owner is a management issue, not a security issue. Periodically reviewing ownership of the asset is good practice, but it does not prevent misuse or abuse of the asset. Training all members on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of those responsibilities.

Explanation:
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements; it helps determine the appropriate security controls and handling procedures for the data. However, data classification is not static but dynamic: it can change over time depending on various factors, one of which is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data classified as confidential or secret at one point in time may become obsolete, outdated, or declassified later, and thus require a lower level of protection. Conversely, data classified as public or unclassified at one point in time may later become more valuable, sensitive, or regulated, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect changes in the data over time.
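The periodic review described above can be sketched as a simple check against a review interval; the record layout and the fixed one-year interval below are assumptions for illustration (real programs tie the interval to the classification level and to regulatory retention rules):

```python
from datetime import date, timedelta

# Minimal sketch of flagging data items whose classification is due for
# re-evaluation. Field names and the interval are illustrative only.

REVIEW_INTERVAL = timedelta(days=365)

def due_for_review(last_reviewed: date, today: date) -> bool:
    """True when a data item's classification should be re-evaluated."""
    return today - last_reviewed >= REVIEW_INTERVAL

inventory = [
    {"name": "2019 price list", "level": "confidential", "last_reviewed": date(2022, 1, 10)},
    {"name": "press release", "level": "public", "last_reviewed": date(2024, 6, 1)},
]
today = date(2024, 7, 1)
stale = [item["name"] for item in inventory if due_for_review(item["last_reviewed"], today)]
print(stale)  # ['2019 price list'] -- overdue for reclassification
```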
The other options are not factors that affect the classification of data, but rather outcomes or components of data classification. An assigned security label is the result of data classification, indicating the level of sensitivity or criticality of the data. A Multilevel Security (MLS) architecture is a system that supports data classification by allowing different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification by limiting the amount of data that can be retrieved or displayed at a time.

Question: 11
Answer: D
Which of the following BEST describes the responsibilities of a data owner?
A. Ensuring quality and validation through periodic audits for ongoing data integrity
B. Maintaining fundamental data availability, including data storage and archiving
C. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
D. Determining the impact the information has on the mission of the organization

Explanation:
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data, and is responsible for defining the purpose, value, and classification of the data, as well as its security requirements and controls. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. That impact is one of the main criteria for data classification, which helps establish the appropriate level of protection and handling for the data. The other options describe the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving, is a responsibility of a data custodian, who implements and maintains the technical and physical security of the data.
Ensuring accessibility to appropriate users while maintaining appropriate levels of data security is a responsibility of a data controller, who determines the purposes and means of processing the data.

Question: 12
Answer: D
Question: 13
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests. Which contract is BEST in offloading the task from the IT staff?
A. Platform as a Service (PaaS)
B. Identity as a Service (IDaaS)
C. Desktop as a Service (DaaS)
D. Software as a Service (SaaS)

Answer: B
Explanation:
Identity as a Service (IDaaS) is the best contract for offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization's resources remotely and securely through the IDaaS provider. The other options are not as effective as IDaaS in offloading the task of account management, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.

Question: 14
When implementing a data classification program, why is it important to avoid too much granularity?
A. The process will require too many resources
B. It will be difficult to apply to both hardware and software
C. It will be difficult to assign ownership to the data
D. The process will be perceived as having value

Answer: A
Explanation:
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements, and it helps determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of data protection. Data classification should therefore balance granularity against simplicity and follow the principle of proportionality: the level of protection should be proportional to the level of risk. The other options are not the main reasons to avoid too much granularity, but rather potential challenges of data classification in general. Applying classification to both hardware and software is a challenge, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. Assigning ownership to the data is likewise a challenge of data classification.
In a data classification scheme, the data is owned by the
A. system security managers
B. business managers
C. Information Technology (IT) managers
D. end users

Question: 15
Answer: B
Explanation:
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. They are responsible for defining the purpose, value, and classification of the data, as well as its security requirements and controls, and they should be able to determine the impact the information has on the mission of the organization by assessing the potential consequences of losing, compromising, or disclosing the data. That impact is one of the main criteria for data classification, which helps establish the appropriate level of protection and handling for the data. The other options are not the data owners in a data classification scheme, but rather other roles or functions related to data management. System security managers oversee the security of the information systems and networks that store, process, and transmit the data; they implement and maintain the technical and physical security of the data, and monitor and audit security performance and incidents. Information Technology (IT) managers manage the IT resources and services that support the business processes and functions that use the data; they are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
Topic 3, Security Architecture and Engineering

Question: 16
Which security service is served by the process of encrypting plaintext with the sender's private key and decrypting the ciphertext with the sender's public key?
A. Confidentiality
B. Integrity
C. Identification
D. Availability

Answer: C
Explanation:
The security service that is served by encrypting plaintext with the sender's private key and decrypting the ciphertext with the sender's public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. It can be achieved by using public key cryptography and digital signatures, which are based on this process. The process works as follows:
1. The sender has a pair of public and private keys, and the public key is shared with the receiver in advance.
2. The sender encrypts the plaintext message with its private key, which produces a ciphertext that is also a digital signature of the message.
3. The sender sends the ciphertext to the receiver, along with the plaintext message or a hash of the message.
4. The receiver decrypts the ciphertext with the sender's public key, which produces the same plaintext message or hash of the message.
5. The receiver compares the decrypted message or hash with the original message or hash, and verifies the identity of the sender if they match.
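The signing and verification steps described above can be sketched with a toy RSA signature. The tiny fixed primes and the digest reduction modulo n are for illustration only; production systems use vetted libraries, large keys, and padding schemes such as PSS:

```python
import hashlib

# Toy RSA sign/verify to illustrate "encrypt the digest with the sender's
# private key, decrypt with the sender's public key". The 2-digit primes
# are illustrative; real signatures use 2048-bit (or larger) keys.

p, q = 61, 53
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (2753)

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)   # "encrypt" the digest with the private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest  # "decrypt" with the public key

msg = b"wire transfer #1041"
sig = sign(msg)
print(verify(msg, sig))  # True: the sender's identity checks out
# A tampered message fails verification (with overwhelming probability):
print(verify(b"wire transfer #9999", sig))
```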
The process of encrypting plaintext with the sender's private key and decrypting the ciphertext with the sender's public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender's identity by using the sender's public key. This process also provides non-repudiation: the sender cannot deny sending the message, and the receiver cannot deny receiving it, as the ciphertext serves as proof of origin and delivery. The other options are not served by this process. Confidentiality ensures that the message is readable only by the intended parties, and is achieved by encrypting plaintext with the receiver's public key and decrypting ciphertext with the receiver's private key. Integrity ensures that the message is not modified or corrupted during transmission, and is achieved by using hash functions and message authentication codes. Availability ensures that the message is accessible and usable by the authorized parties, and is achieved by using redundancy, backup, and recovery mechanisms.

Which of the following mobile code security models relies only on trust?
A. Code signing
B. Class authentication
C. Sandboxing
D. Type safety

Question: 17
Answer: A
Question: 18
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?

Explanation (code signing question):
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation; it is used for various purposes, such as web applications, applets, scripts, and macros, and it can pose various security risks, such as malicious code, unauthorized access, and data leakage. Mobile code security models are the techniques used to protect systems and users from the threats of mobile code. Code signing relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows:
1. The code provider has a pair of public and private keys, and obtains a digital certificate from a trusted third party, such as a certificate authority (CA), that binds the public key to the identity of the code provider.
2. The code provider signs the mobile code with its private key and attaches the digital certificate to the mobile code.
3. The code consumer receives the mobile code and verifies the signature and the certificate with the public key of the code provider and the CA, respectively.
4. The code consumer decides whether to trust and execute the mobile code based on the identity and reputation of the code provider.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. It also does not guarantee the quality or functionality of the mobile code, only the authenticity and integrity of the code provider.
Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows security standards and best practices. However, it can be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious. The other options are not mobile code security models that rely only on trust; they rely on techniques that limit or isolate the mobile code. Class authentication verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies execution accordingly. Sandboxing executes the mobile code in a separate and restricted environment, and prevents it from accessing or affecting system resources or data. Type safety checks the validity and consistency of the mobile code, and prevents it from performing illegal or unsafe operations.
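The consumer-side decision flow described above can be sketched as follows. The plain SHA-256 digest standing in for a real private-key signature, and the certificate as a dictionary, are simplifying assumptions; a real scheme signs the digest and uses X.509 certificates:

```python
import hashlib

# Sketch of the code-signing consumer's trust decision. Names (trust store,
# certificate fields) are illustrative, not a real API.

TRUSTED_CAS = {"ExampleRoot CA"}  # hypothetical trust store

def accept_mobile_code(code: bytes, claimed_digest: str, cert: dict) -> bool:
    # 1. The certificate must chain to a CA the consumer trusts.
    if cert["issuer"] not in TRUSTED_CAS:
        return False
    # 2. The code must match what the publisher signed (integrity/authenticity).
    if hashlib.sha256(code).hexdigest() != claimed_digest:
        return False
    # 3. What remains is pure trust in the named publisher -- code signing
    #    imposes no sandbox or runtime restrictions on the code itself.
    return True

code = b"print('hello')"
cert = {"subject": "Example Software Ltd", "issuer": "ExampleRoot CA"}
ok = accept_mobile_code(code, hashlib.sha256(code).hexdigest(), cert)
print(ok)  # True
tampered = accept_mobile_code(code + b"#evil", hashlib.sha256(code).hexdigest(), cert)
print(tampered)  # False: digest no longer matches
```

Note how step 3 does nothing technical: once the signature and chain check out, execution is allowed purely on reputation, which is exactly why code signing is the trust-only model.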
A. Hashing the data before encryption
B. Hashing the data after encryption
C. Compressing the data after encryption
D. Compressing the data before encryption

What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
A. Implementation Phase

Explanation (known plaintext attack question):
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts; it can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption reduces the redundancy and increases the entropy of the plaintext, making it harder for the attacker to find correlations or similarities between the plaintext and the ciphertext. It also reduces the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack. The other options are not techniques that strengthen an encryption scheme against a known plaintext attack; they can introduce other security issues or inefficiencies. Hashing the data before encryption is not useful, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not useful, as it adds no security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext.
Compressing the data after encryption is not a recommended technique, as compression algorithms work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that compromise the encryption.

Question: 19
Answer: D
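The compress-then-encrypt effect described above can be seen in a small demonstration. The repeating-key XOR "cipher" below is a stand-in for a real cipher, used only to keep the example self-contained:

```python
import collections
import itertools
import math
import zlib

# Demonstration that compressing before encrypting removes plaintext
# redundancy: the compressed input is far smaller, and its bytes are
# typically much closer to uniform (higher Shannon entropy).

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte, estimated from byte frequencies."""
    counts = collections.Counter(data)
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR; a placeholder for a real cipher."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

plaintext = b"attack at dawn " * 200        # highly redundant
compressed = zlib.compress(plaintext)

print(len(plaintext), len(compressed))      # compression shrinks the input
print(round(shannon_entropy(plaintext), 2),
      round(shannon_entropy(compressed), 2))  # entropy typically rises

ciphertext = xor_encrypt(compressed, b"key")
```

Fewer, less patterned bytes reaching the cipher means fewer usable plaintext-ciphertext correlations for the attacker, which is the resistance the explanation describes.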
B. Initialization Phase
C. Cancellation Phase
D. Issued Phase

Explanation:
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system, and it consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, in which the key pair and the certificate request are generated by the end entity or the registration authority (RA). It involves the following steps:
1. The end entity or the RA generates a key pair, consisting of a public key and a private key, using a secure and random method.
2. The end entity or the RA creates a certificate request, which contains the public key and other identity information of the end entity, such as name, email, and organization.
3. The end entity or the RA submits the certificate request to the certification authority (CA), the trusted entity that issues and signs certificates in the PKI system.
4. The end entity or the RA securely stores the private key and protects it from unauthorized access, loss, or compromise.
The other options are not the second phase of PKI key/certificate life-cycle management. The implementation phase is not a life-cycle phase at all, but rather a phase of PKI system deployment, in which the PKI components and policies are installed and configured.
The cancellation phase is not a phase of PKI key/certificate life-cycle management either, but rather a possible outcome of the termination phase, in which the key pair and the certificate are permanently revoked and deleted. Likewise, the issued phase is a possible outcome of the certification phase, in which the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.

Question: 20
Answer: B
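The six life-cycle phases listed above can be written down as an ordered enumeration; the forward-only transition check is a deliberate simplification for illustration (real life cycles allow, for example, resuming from suspension back to operational):

```python
from enum import IntEnum

# The six PKI key/certificate life-cycle phases from the explanation,
# in order, with a simple forward-transition helper.

class Phase(IntEnum):
    PRE_CERTIFICATION = 1
    INITIALIZATION = 2   # key pair and certificate request are generated here
    CERTIFICATION = 3
    OPERATIONAL = 4
    SUSPENSION = 5
    TERMINATION = 6

def next_phase(current: Phase) -> Phase:
    """Advance one phase; the life cycle ends at termination."""
    if current is Phase.TERMINATION:
        raise ValueError("life cycle already terminated")
    return Phase(current + 1)

print(Phase(2).name)  # INITIALIZATION -- the second phase, matching Answer: B
```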
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
A. Common Vulnerabilities and Exposures (CVE)
B. Common Vulnerability Scoring System (CVSS)
C. Asset Reporting Format (ARF)
D. Open Vulnerability and Assessment Language (OVAL)

Explanation:
The component of the SCAP specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. It consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures characteristics that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures characteristics that are relevant and unique to a user's environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability's attributes; the values are then combined using a formula to produce a numerical score from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating of none, low, medium, high, or critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
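The scoring described above can be made concrete with the CVSS v3.1 base-score arithmetic for the common Scope: Unchanged case, using the standard metric weights (only a subset of metric values is included here for brevity):

```python
import math

# CVSS v3.1 base score, Scope: Unchanged case, with the published weights
# for a subset of metric values.

WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # Scope: Unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    return math.ceil(x * 10) / 10             # CVSS rounds up to one decimal

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac] \
                          * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A network-reachable, low-complexity, unauthenticated flaw with high C/I/A impact:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8 (Critical)
```

The 9.8 result is the familiar score for remotely exploitable, no-privilege, full-impact vulnerabilities.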
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and Page 22 Answer: B www.certifiedumps.com
  • 23. Explanation: What is the purpose of an Internet Protocol (IP) spoofing attack? A. To send excessive amounts of data to a process, making it unpredictable B. To intercept network traffic without authorization C. To disguise the destination address from a target’s IP filtering devices D. To convince a system that it is communicating with a known entity The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as: Bypassing IP-based access control lists (ACLs) or firewalls that filter traffic based on the source IP address. Launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks by flooding a target system with spoofed packets, or by reflecting or amplifying the traffic from intermediate systems. Hijacking or intercepting a TCP session by predicting or guessing the sequence numbers and sending spoofed packets to the legitimate parties. Questions & Answers PDF Page 23 extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management. Question: 21 Answer: D Topic 4, Communication and Network Security www.certifiedumps.com
  • 24. Explanation: A. Link layer B. Physical layer C. Session layer D. Application layer Questions & Answers PDF At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located? Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc. Gaining unauthorized access to a system or network by impersonating a trusted or authorized host and exploiting its privileges or credentials. The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships. The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address. 
Page 24 Question: 22 Answer: B www.certifiedumps.com
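As a quick illustrative reference for the layer ordering used in this explanation, the seven OSI layers can be written as a simple lookup (the layer numbering is standard; the helper function is only a sketch, not exam content):

```python
# The seven OSI layers, bottom (1) to top (7), as described above.
OSI_LAYERS = {
    1: "physical",
    2: "data link",
    3: "network",
    4: "transport",
    5: "session",
    6: "presentation",
    7: "application",
}

def layer_number(name: str) -> int:
    """Return the OSI layer number for a layer name (case-insensitive)."""
    for number, layer in OSI_LAYERS.items():
        if layer == name.lower():
            return number
    raise ValueError(f"unknown OSI layer: {name}")
```

Data at rest on a SAN sits at `layer_number("physical")`, i.e. layer 1, the lowest layer.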
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.

Question: 23
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
A. Transport layer
B. Application layer
C. Network layer
D. Session layer
Answer: A
Explanation:
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows:
The client sends a SYN (synchronize) packet to the server, indicating its initial sequence number and requesting a connection.
The server responds with a SYN-ACK (synchronize-acknowledge) packet, indicating its initial sequence number and acknowledging the client's request.
The client responds with an ACK (acknowledge) packet, acknowledging the server's response and completing the connection.
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.

Question: 24
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
A. Layer 2 Tunneling Protocol (L2TP)
B. Link Control Protocol (LCP)
C. Challenge Handshake Authentication Protocol (CHAP)
D. Packet Transfer Protocol (PTP)
Answer: B
Explanation:
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.

Question: 25
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
A. Packet filtering
B. Port services filtering
C. Content filtering
D. Application access control
Answer: A
Explanation:
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
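The kind of rule matching a network-layer packet filter performs can be sketched as follows; the rule format and field names here are illustrative assumptions, not any specific firewall's syntax:

```python
import ipaddress

# Illustrative network-layer packet filter: each rule matches on a source
# network and a protocol, and carries an allow/deny action. A packet that
# matches no rule is denied by default (a common default-deny posture).
RULES = [
    {"src": "10.0.0.0/8", "proto": "tcp", "action": "allow"},
    {"src": "0.0.0.0/0", "proto": "icmp", "action": "deny"},
]

def filter_packet(src_ip: str, proto: str) -> str:
    """Return the action of the first matching rule, or 'deny' by default."""
    for rule in RULES:
        in_net = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        if in_net and rule["proto"] == proto:
            return rule["action"]
    return "deny"
```

Note that the decision uses only network-layer (and here transport-protocol) header fields; the payload is never inspected, which is exactly the limitation of packet filtering discussed in this explanation.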
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.

Question: 26
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
A. Add a new rule to the application layer firewall
B. Block access to the service
C. Install an Intrusion Detection System (IDS)
D. Patch the application source code
Answer: A
Explanation:
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as:
Injection attacks, such as SQL injection, command injection, or cross-site scripting (XSS), where the attacker inserts malicious code or commands into the input data that are executed by the system or the browser, resulting in data theft, data manipulation, or remote code execution.
Buffer overflow attacks, where the attacker sends more input data than the system can handle, causing the system to overwrite the adjacent memory locations, resulting in data corruption, system crash, or arbitrary code execution.
Denial-of-service (DoS) attacks, where the attacker sends malformed or invalid input data that cause the system to generate excessive errors or exceptions, resulting in system overload, resource exhaustion, or system failure.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as:
Filtering the data packets based on the application layer protocols, such as HTTP, FTP, or SMTP, and the application layer attributes, such as URLs, cookies, or headers.
Blocking or allowing the data packets based on the predefined rules or policies that specify the criteria for the application layer protocols and attributes.
Logging and auditing the data packets for the application layer protocols and attributes.
Modifying or transforming the data packets for the application layer protocols and attributes.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to the application layer firewall to:
Reject or drop the data packets that contain SQL statements, shell commands, or script tags in the input data, which can prevent or reduce the injection attacks.
Reject or drop the data packets that exceed a certain size or length in the input data, which can prevent or reduce the buffer overflow attacks.
Reject or drop the data packets that contain malformed or invalid syntax or characters in the input data, which can prevent or reduce the DoS attacks.
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.

Topic 5, Identity and Access Management (IAM)

Question: 27
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
A. Trusted third-party certification
B. Lightweight Directory Access Protocol (LDAP)
C. Security Assertion Markup Language (SAML)
D. Cross-certification
Answer: C
Explanation:
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as:
Improving the user experience and convenience by reducing the need for multiple logins and passwords
Enhancing the security and privacy by minimizing the exposure and duplication of sensitive information
Increasing the efficiency and productivity by streamlining the authentication and authorization processes
Reducing the cost and complexity by simplifying the identity management and administration
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM:
Identity provider (IdP): the party that authenticates the user and issues the SAML assertion
Service provider (SP): the party that provides the resource or service that the user wants to access
User or principal: the party that requests access to the resource or service
SAML works as follows:
The user requests access to a resource or service from the SP
The SP redirects the user to the IdP for authentication
The IdP authenticates the user and generates a SAML assertion that contains the user's identity, attributes, and entitlements
The IdP sends the SAML assertion to the SP
The SP validates the SAML assertion and grants or denies access to the user based on the information in the assertion
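The flow above can be mimicked with a toy simulation. The dictionaries and function names below are illustrative assumptions only; real SAML assertions are signed XML documents exchanged via browser redirects:

```python
# Toy simulation of the SAML web SSO flow described above: the IdP
# authenticates the user and issues an "assertion"; the SP trusts
# assertions issued by its known IdP and grants or denies access.
TRUSTED_IDP = "idp.example.com"  # hypothetical IdP name

def idp_authenticate(username: str, password: str) -> dict:
    """IdP step: authenticate the user and issue a toy assertion."""
    if password != "correct horse":  # stand-in for real authentication
        raise PermissionError("authentication failed")
    return {
        "issuer": TRUSTED_IDP,
        "subject": username,
        "attributes": {"role": "buyer"},  # identity, attributes, entitlements
    }

def sp_validate(assertion: dict) -> bool:
    """SP step: validate the assertion and decide whether to grant access."""
    return assertion.get("issuer") == TRUSTED_IDP and "subject" in assertion

assertion = idp_authenticate("alice", "correct horse")
granted = sp_validate(assertion)  # access decision based on the assertion
```

The key design point mirrored here is that the SP never sees the user's password; it relies entirely on the assertion issued by the trusted IdP.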
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other's certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.

Question: 28
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
A. Derived credential
B. Temporary security credential
C. Mobile device credentialing service
D. Digest authentication
Answer: A
Explanation:
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN). However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows:
The user inserts the smart card into a reader that is connected to a computer or a terminal, and enters the PIN to unlock the smart card
The user connects the mobile device to the computer or the terminal via a cable, Bluetooth, or Wi-Fi
The user initiates a request to generate a derived credential on the mobile device
The computer or the terminal verifies the smart card certificate with a trusted CA, and generates a derived credential that contains a cryptographic key and a certificate that are derived from the smart card private key and certificate
The computer or the terminal transfers the derived credential to the mobile device, and stores it in a secure element or a trusted platform module on the device
The user disconnects the mobile device from the computer or the terminal, and removes the smart card from the reader
The user can use the derived credential on the mobile device to authenticate and encrypt the communication with other parties, without requiring the smart card or the PIN
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card
  • 34. Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary? A. Limit access to predefined queries B. Segregate the database into a small number of partitions each with a separate security level C. Implement Role Based Access Control (RBAC) D. Reduce the number of people who have access to the system for statistical purposes Explanation: Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as: Improving the performance and efficiency of the database by reducing the processing time and resources required for executing the queries Enhancing the security and confidentiality of the database by restricting the access and exposure of the sensitive data to the authorized users and purposes Questions & Answers PDF Page 34 private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. 
Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key. Question: 29 Answer: A www.certifiedumps.com
  • 35. What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance? A. Audit logs B. Role-Based Access Control (RBAC) Questions & Answers PDF Page 35 Increasing the accuracy and reliability of the database by preventing the errors or inconsistencies that might occur due to the user input or modification of the queries Reducing the cost and complexity of the database by simplifying the query design and management Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data. The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. 
However, this control would not prevent the users from obtaining an individual employee’s salary if they have access to the partition that contains the salary data, and if they can create or modify their own queries.
Implementing Role-Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary if their roles or functions require them to access the salary data, and if they can create or modify their own queries.
Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary if they are among the people who have access to the system, and if they can create or modify their own queries.

Question: 30
C. Two-factor authentication
D. Application of least privilege

Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept which states that every user or process should have the minimum access rights and permissions necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as:
Improving the security and confidentiality of the information by limiting the access and exposure of the sensitive data to authorized users and purposes
Reducing the risk and impact of unauthorized access or disclosure of the information by minimizing the attack surface and the potential damage
Increasing the accountability and auditability of the information by tracking and logging the access and usage of the sensitive data
Enhancing the performance and efficiency of the system by reducing the complexity and overhead of the access control mechanisms
Applying the principle of least privilege is the best approach because it ensures that employees can only access the information that is relevant and necessary for their tasks or functions, and cannot access or manipulate information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to it, not the employees who have the same level of security clearance but are not involved in that project or department.
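The need-to-know check described above can be sketched as a simple access-control function. The user names, project assignments, and clearance levels below are hypothetical; the point is that equal clearance alone never grants access.

```python
# Hypothetical assignment and clearance data: both users hold the same
# clearance, but only one is assigned to the sensitive project.
ASSIGNMENTS = {"alice": {"apollo"}, "bob": {"zeus"}}
CLEARANCE = {"alice": "secret", "bob": "secret"}

def can_read(user: str, project: str, required_clearance: str = "secret") -> bool:
    # Least privilege: BOTH conditions must hold — sufficient clearance
    # AND a need-to-know (membership in the project).
    return (CLEARANCE.get(user) == required_clearance
            and project in ASSIGNMENTS.get(user, set()))

print(can_read("alice", "apollo"))  # True  - cleared and assigned
print(can_read("bob", "apollo"))    # False - same clearance, no need-to-know
```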
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects.
Audit logs are records that capture and store information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information; they provide evidence or clues after the fact.
Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or

Answer: D
Explanation:
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as:
Improving the security and compliance of the OS by applying the best practices and recommendations from the vendors, authorities, or frameworks
Enhancing the performance and efficiency of the OS by optimizing the resources and functions
Increasing the consistency and uniformity of the OS by reducing the variations and deviations

Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
A. Change management processes
B. User administration procedures
C. Operating System (OS) baselines
D. System backup documentation

functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization; it must rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence, or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by users who do not have both factors.
However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors; it must rely on other criteria or mechanisms.

Topic 6, Security Assessment and Testing

Question: 31
Answer: C
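The kind of comparison an OS baseline enables for an auditor can be sketched as follows; the setting names and values are invented for the example.

```python
# Hypothetical approved baseline: the desired, secure state of the OS.
BASELINE = {"ssh_root_login": "no", "password_min_length": 14, "firewall": "enabled"}

def audit(actual: dict) -> dict:
    """Return settings deviating from the baseline as {key: (expected, actual)}."""
    return {k: (v, actual.get(k)) for k, v in BASELINE.items() if actual.get(k) != v}

# Current configuration pulled from the system under review (assumed values).
current = {"ssh_root_login": "yes", "password_min_length": 14, "firewall": "enabled"}
print(audit(current))  # {'ssh_root_login': ('no', 'yes')}
```

The baseline gives the auditor a fixed reference to measure against; without it, there is no defined "desired state" to compare the live configuration to.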
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
A. Host VM monitor audit logs
B. Guest OS access controls
C. Host VM access controls
D. Guest OS audit logs

Facilitating the monitoring and auditing of the OS by providing a baseline for comparison and measurement
OS baselines are of greatest assistance to auditors when reviewing system configurations because they enable the auditors to evaluate and verify the current and actual state of the OS against its desired and secure state. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights.
User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users.
System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.

Question: 32
Explanation:
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as:
Improving the utilization and efficiency of the physical resources by sharing them among multiple VMs
Enhancing the security and isolation of the VMs by preventing or limiting the interference or communication between them
Increasing the flexibility and scalability of the VMs by allowing them to be created, modified, deleted, or migrated easily and quickly
A guest OS is the OS that runs on a VM, as distinct from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of incidents.
Guest OS audit logs are what an administrator must review in this environment because they provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
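Reviewing guest OS audit logs for one user’s data-file accesses can be sketched as a simple filter; the log records and field names below are hypothetical, standing in for whatever format the guest OS actually emits.

```python
# Hypothetical guest OS audit log: one record per file-access event.
LOG = [
    {"user": "jsmith", "action": "read",  "object": "/data/payroll.db"},
    {"user": "akhan",  "action": "write", "object": "/data/hr.csv"},
    {"user": "jsmith", "action": "write", "object": "/data/hr.csv"},
]

def file_accesses(log, user):
    # The administrator filters the guest OS log for a single user's activity;
    # the host VM monitor log would not show these per-file events.
    return [e for e in log if e["user"] == user]

for event in file_accesses(LOG, "jsmith"):
    print(event["action"], event["object"])
# read /data/payroll.db
# write /data/hr.csv
```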
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs

Answer: D
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
A. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
B. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
C. Management teams will understand the testing objectives and reputational risk to the organization
D. Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels

Explanation:
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels; this is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as:
Improving the security and risk management of the system or network by identifying and addressing the security weaknesses and gaps
Enhancing the security and decision making of the system or network by providing the evidence and information for the security analysis, evaluation, and reporting
Increasing the security and improvement of the system or network by providing the feedback and input for the security response, remediation, and optimization

what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine.
Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.

Question: 33
Answer: D
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report format and structure can have several components, such as:
Executive summary: a brief overview of the security testing objectives, scope, methodology, results, and conclusions
Introduction: a detailed description of the security testing background, purpose, scope, assumptions, limitations, and constraints
Methodology: a detailed explanation of the security testing approach, techniques, tools, and procedures
Results: a detailed presentation of the security testing findings, such as the vulnerabilities, threats, risks, and impact levels, organized by test phases or categories
Recommendations: a detailed proposal of the security testing suggestions, such as the remediation, mitigation, or prevention strategies, prioritized by impact levels or risk ratings
Conclusion: a brief summary of the security testing outcomes, implications, and future steps
This is the primary benefit because a formalized format and structure ensures that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams
to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences understanding the outcomes of testing and the most appropriate next steps for corrective actions is a benefit, but not the primary one, because it is more relevant to the executive summary component, a brief and high-level overview, than to the entire report. Technical teams understanding the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit, but not the primary one, because it is more relevant to the methodology and results components, the more technical and detailed parts of the report, than to the entire report. Management teams understanding the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report
Which of the following could cause a Denial of Service (DoS) against an authentication system?
A. Encryption of audit logs
B. No archiving of audit logs
C. Hashing of audit logs
D. Remote access audit logs

Explanation:
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of incidents.
Remote access audit logs could cause a DoS against an authentication system because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services.
For example, an attacker could

format and structure, but it is not the primary benefit, because it is more relevant to the introduction and conclusion components, the more contextual and strategic parts of the report, than to the entire report.

Question: 34
Answer: D
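The resource-exhaustion risk that remote access audit logs pose to an authentication system (Question 34 above) can be illustrated with back-of-the-envelope arithmetic; every figure below is an assumption for the sake of the example, not a measured value.

```python
# Assumed sizes and rates: how quickly a flood of malicious remote logins
# could fill the disk that holds the authentication system's audit logs.
bytes_per_entry = 500          # assumed size of one remote-access log record
attempts_per_second = 2_000    # assumed malicious login-attempt rate
disk_free_gb = 50              # assumed free space on the log volume

seconds_to_fill = disk_free_gb * 1_000_000_000 / (bytes_per_entry * attempts_per_second)
hours_to_fill = seconds_to_fill / 3600
print(round(hours_to_fill, 1))  # → 13.9
```

Under these assumptions the log volume fills in well under a day, after which the authentication system may fail or stall, denying service to legitimate users.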
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
A. Absence of a Business Intelligence (BI) solution
B. Inadequate cost modeling
C. Improper deployment of the Service-Oriented Architecture (SOA)
D. Insufficient Service Level Agreement (SLA)

Explanation:
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for

rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs.
However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs.
Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.

Topic 7, Security Operations

Question: 35
Answer: D
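The hashing-for-integrity idea discussed above can be sketched as follows: a digest stored when the log is archived lets an auditor detect later tampering, without affecting the authentication system’s availability. The log content is invented for the example.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 hex digest of raw log bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical audit log entry; the digest is stored at archive time.
log = b"2024-01-01 10:00 user=alice action=login result=ok\n"
stored = sha256_hex(log)

# Later, recompute and compare: any modification changes the digest.
tampered = log.replace(b"alice", b"mallory")
print(sha256_hex(log) == stored)       # True  - log unmodified
print(sha256_hex(tampered) == stored)  # False - tampering detected
```

Note that hashing only verifies integrity after the fact; it neither hides the log contents (that would be encryption) nor consumes resources in a way that could deny service.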
hosting and maintaining a website or a web application on the internet. A Web hosting solution can offer various benefits, such as:
Improving the availability and accessibility of the website or web application by ensuring that it is online and reachable at all times
Enhancing the performance and scalability of the website or web application by optimizing the speed, load, and capacity of the web server
Increasing the security and reliability of the website or web application by providing the backup, recovery, and protection of the web data and content
Reducing the cost and complexity of the website or web application by outsourcing the web hosting and management to a third-party provider
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA can include various components, such as:
Service description: a detailed explanation of the scope, purpose, and features of the service
Service level objectives: a set of measurable and quantifiable goals or targets for the service quality, performance, and availability
Service level indicators: a set of metrics or parameters that are used to monitor and evaluate the service level objectives
Service level reporting: a process that involves collecting, analyzing, and communicating the service level indicators and objectives
Service level penalties: a set of consequences or actions that are applied when the service level objectives are not met or violated
Insufficient SLA would be the most probable cause because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce
the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the quality, performance, and availability of the Web hosting solution, and to identify and address any issues or risks in it.
The other options are not the most probable causes, but rather factors that could affect or improve the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?

conversion. A BI solution is a system that involves the collection, integration, processing, and presentation of the data and information from various sources, such as the Web hosting solution, to support the decision making and planning of the organization. However, absence of a BI solution is not the most probable cause, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the analysis or usage of those indicators.
Inadequate cost modeling is a factor that could affect the ability of the organization to estimate and optimize the cost and value of the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment. A cost model is a tool or a method that helps the organization to calculate and compare the cost and value of the Web hosting solution, and to identify and implement the best or most efficient Web hosting solution. However, inadequate cost modeling is not the most probable cause, because it does not affect the definition or specification of the performance indicators, but rather the estimation or optimization of the cost and value of the Web hosting solution.
Improper deployment of the Service-Oriented Architecture (SOA) is a factor that could affect the ability of the organization to design and develop the Web hosting solution, such as the web services, components, or interfaces.

Question: 36
A SOA is a software architecture that involves the modularization, standardization, and integration of the software components or services that provide the functionality or logic of the Web hosting solution. A SOA can offer various benefits, such as:
Improving the flexibility and scalability of the Web hosting solution by allowing the addition, modification, or removal of the software components or services without affecting the whole Web hosting solution
Enhancing the interoperability and compatibility of the Web hosting solution by enabling the communication and interaction of the software components or services across different platforms and technologies
Increasing the reusability and maintainability of the Web hosting solution by reducing the duplication and complexity of the software components or services
However, improper deployment of the SOA is not the most probable cause, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather its design or development.
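A service level indicator of the kind an SLA should define for a Web hosting solution, availability, can be computed as sketched below and checked against the agreed objective; the downtime figure and the objective are assumed for the example.

```python
# Assumed measurement window and SLA objective for a web hosting service.
minutes_in_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month
downtime_minutes = 90             # assumed measured outage this month
slo = 99.9                        # availability objective from the SLA, in percent

# Service level indicator: percentage of the month the site was reachable.
availability = 100 * (1 - downtime_minutes / minutes_in_month)
print(round(availability, 3), availability >= slo)  # → 99.792 False
```

Here the measured indicator (about 99.79%) misses the 99.9% objective, which is exactly the comparison an auditor cannot make when the SLA never defined the indicator or the objective in the first place.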
A. Walkthrough
B. Simulation
C. Parallel
D. White box

Explanation:
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide several benefits, such as:
Improving the confidence and competence of the organization and its staff in handling a disruption or disaster
Enhancing the performance and efficiency of the organization and its systems in recovering from a disruption or disaster
Increasing the compliance and alignment of the organization and its plans with the internal or external requirements and standards
Facilitating the monitoring and improvement of the organization and its plans by identifying and addressing any gaps, issues, or risks
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Some of the common types are:
Walkthrough: a type of business continuity test that involves reviewing and discussing the BCP and DRP with the relevant stakeholders, such as the business continuity team, the management, and the staff. A walkthrough can provide a basic and qualitative assessment of the BCP and DRP, and can help to familiarize and educate the stakeholders with the plans and their roles and responsibilities.
Simulation: a type of business continuity test that involves performing and practicing the BCP and DRP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack.
A simulation can provide a realistic and quantitative assessment of the BCP and DRP, and can help to test and train the stakeholders with the plans and their actions and reactions.
Parallel: a type of business continuity test that involves activating and operating the alternate site or system, while maintaining the normal operations at the primary site or system. A parallel test can

Answer: B
What is the PRIMARY reason for implementing change management?
A. Certify and approve releases to the environment
B. Provide version rollbacks for system changes
C. Ensure that all applications are approved
D. Ensure accountability for changes to the environment

Explanation:
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change

provide a comprehensive and comparative assessment of the BCP and DRP, and can help to verify and validate the functionality and compatibility of the alternate site or system.
Full interruption: a type of business continuity test that involves shutting down and transferring the normal operations from the primary site or system to the alternate site or system. A full interruption test can provide a conclusive and definitive assessment of the BCP and DRP, and can help to evaluate and measure the impact and effectiveness of the plans.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope with and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not the types of business continuity tests that include assessment of resilience to internal and external risks without endangering live operations, but rather types that have other objectives or effects. Walkthrough is a type of business continuity test that does not include assessment of resilience to internal and external risks, but rather a review and discussion of the BCP and DRP, without any actual testing or practice. Parallel is a type of business continuity test that does not endanger live operations, but rather maintains them, while activating and operating the alternate site or system. Full interruption is a type of business continuity test that does endanger live operations, by shutting them down and transferring them to the alternate site or system. Page 47 Question: 37 Answer: D www.certifiedumps.com
  • 48. Which of the following is a PRIMARY advantage of using a third-party identity service? A. Consolidation of multiple providers B. Directory synchronization C. Web based logon D. Automated account management Questions & Answers PDF Page 48 management can provide several benefits, such as: Improving the security and reliability of the system or network environment by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes Enhancing the performance and efficiency of the system or network environment by optimizing the resources and functions Increasing the compliance and alignment of the system or network environment with the internal or external requirements and standards Facilitating the monitoring and improvement of the system or network environment by tracking and logging the changes and their outcomes Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment. The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. 
Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment. Question: 38 www.certifiedumps.com
Answer: A

Explanation:
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer various benefits, such as:
Improving the user experience and convenience by allowing the users to access multiple applications or systems with a single sign-on (SSO) or a federated identity
Enhancing the security and compliance by applying consistent and standardized IAM policies and controls across multiple applications or systems
Increasing the scalability and flexibility by enabling the integration and interoperability of multiple applications or systems with different platforms and technologies
Reducing the cost and complexity by outsourcing the IAM functions to a third-party provider, and avoiding the duplication and maintenance of multiple IAM systems
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes by reducing the number of IdPs and IAM systems involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as inconsistency, redundancy, or conflict among IAM policies and controls, or inefficiency, vulnerability, or disruption of IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant to the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service.
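The consolidation benefit described above can be illustrated with a minimal in-memory sketch: one identity provider authenticates a user once and issues a token, and every relying application delegates its identity checks to that single provider instead of running its own login. All class and method names here (SsoSketch, IdentityProvider, Application) are invented for illustration; a real federation would use a standard such as SAML or OpenID Connect.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Minimal illustration of provider consolidation: one IdP, many relying apps.
// All names are hypothetical; real deployments use SAML or OpenID Connect.
public class SsoSketch {

    // The single third-party identity provider: authenticates once, issues tokens.
    static class IdentityProvider {
        private final Map<String, String> passwords = new HashMap<>();
        private final Map<String, String> activeTokens = new HashMap<>(); // token -> user

        void register(String user, String password) { passwords.put(user, password); }

        String signOn(String user, String password) {
            if (!password.equals(passwords.get(user))) {
                throw new SecurityException("authentication failed");
            }
            String token = UUID.randomUUID().toString();
            activeTokens.put(token, user);
            return token;
        }

        String validate(String token) { return activeTokens.get(token); } // null if unknown
    }

    // Relying applications keep no credentials; they delegate to the shared IdP.
    static class Application {
        private final String name;
        private final IdentityProvider idp;
        Application(String name, IdentityProvider idp) { this.name = name; this.idp = idp; }

        String access(String token) {
            String user = idp.validate(token);
            if (user == null) throw new SecurityException("access denied by " + name);
            return name + " serving " + user;
        }
    }

    public static void main(String[] args) {
        IdentityProvider idp = new IdentityProvider();
        idp.register("alice", "s3cret");

        Application hr = new Application("hr-portal", idp);
        Application mail = new Application("mail", idp);

        String token = idp.signOn("alice", "s3cret"); // one logon...
        System.out.println(hr.access(token));         // ...accepted by every application
        System.out.println(mail.access(token));
    }
}
```

Because both applications trust the same IdP, adding or removing an application never duplicates credential stores or IAM policy, which is exactly the consolidation the explanation describes.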
Question: 39
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
A. Continuously without exception for all security controls
B. Before and after each change of the control
C. At a rate concurrent with the volatility of the security control
D. Only during system implementation and decommissioning

Answer: C

Explanation:
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide several benefits, such as:
Improving the security and risk management of the system or network by identifying and addressing the security weaknesses and gaps
Enhancing the security and decision making of the system or network by providing the evidence and information for the security analysis, evaluation, and reporting
Increasing the security and improvement of the system or network by providing the feedback and input for the security response, remediation, and optimization
Facilitating the compliance and alignment of the system or network with the internal or external requirements and standards
A security control is a measure or mechanism that is implemented to protect the system or network from security threats or risks, by preventing, detecting, or correcting security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment. Monitoring of a control should occur at a rate concurrent with the volatility of the security control, because this ensures that the monitoring keeps pace with changes to the control and can detect and report any issues or risks that might affect the security control. Monitoring at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
The other options are not correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is incorrect, because it is not feasible or necessary to monitor all security controls at the same constant rate regardless of their volatility or importance; doing so might consume excessive or wasteful resources and overwhelm the ISCM solutions with too much or irrelevant data. Before and after each change of the control is incorrect, because it is not sufficient or timely to monitor the security control only when it changes and not during its normal operation; the ISCM solutions would miss the security status, events, and activities that occur between changes, and would be delayed in detecting and responding to security issues or incidents affecting the control. Only during system implementation and decommissioning is incorrect, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle and not during the operational or maintenance stages; the ISCM solutions would neglect the security status, events, and activities that occur during regular operation, and would be prevented from improving and optimizing the control.

Question: 40
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
A. Take the computer to a forensic lab
B. Make a copy of the hard drive
C. Start documenting
D. Turn off the computer

Answer: B
Explanation:
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as:
The identity and role of the person who collected, handled, or transferred the evidence
The date and time of the collection, handling, or transfer of the evidence
The location and condition of the evidence
The method and tool used to collect, handle, or transfer the evidence
The signature or seal of the person who collected, handled, or transferred the evidence
Making a copy of the hard drive should be the first action, because it ensures that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making the copy should also involve using a write blocker, a device or software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, a unique, fixed-length identifier that can verify the integrity and consistency of the data on the hard drive.
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab should be done after making a copy of the hard drive, because it ensures that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting should be done along with making a copy of the hard drive, because it ensures that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer should be done after making a copy of the hard drive, because it ensures that the computer is powered down and disconnected from any network or device, and protected from any further damage or tampering.

Question: 41
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
A. Disable all unnecessary services
B. Ensure chain of custody
C. Prepare another backup of the system
D. Isolate the system from the network

Answer: D

Explanation:
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user's knowledge or consent. An unknown application may have various purposes, such as:
Providing a legitimate or useful function or service for the user, such as a utility or a tool
Providing an illegitimate or malicious function or service for the attacker, such as malware or a backdoor
Providing a neutral or benign function or service for the developer, such as a trial or a demo
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. Forensic analysis can provide several benefits, such as:
Identifying and classifying the unknown application as legitimate, malicious, or neutral
Determining and assessing the purpose and function of the unknown application
Detecting and resolving any issues or risks caused by the unknown application
Preventing and mitigating any future incidents or attacks involving the unknown application
Isolating the system from the network is the most important step, because it ensures that the system is isolated and protected from any external or internal influences or interferences, and that the forensic analysis is conducted in a safe and controlled environment. Isolating the system from the network can also help to:
Prevent the unknown application from communicating or connecting with any other system or network, and potentially spreading or escalating the attack
Prevent the unknown application from receiving or sending any commands or data, and potentially altering or deleting the evidence
Prevent the unknown application from detecting or evading the forensic analysis, and potentially hiding or destroying itself
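The hash-value verification mentioned in the chain-of-evidence explanation above can be sketched in a few lines: hash the original drive (read through a write blocker) and the copy, then compare the digests. Here byte arrays stand in for disk images, so the data is purely illustrative; a matching SHA-256 digest indicates a bit-for-bit duplicate.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Sketch of integrity verification for a forensic copy: hash the original
// and the copy, then compare. The byte arrays stand in for disk images.
public class EvidenceHash {

    static String sha256Hex(byte[] image) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return HexFormat.of().formatHex(md.digest(image));
        } catch (NoSuchAlgorithmException e) { // SHA-256 is always available
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] original = "raw sectors of the seized drive".getBytes(StandardCharsets.UTF_8);
        byte[] copy = original.clone();   // bit-for-bit duplicate
        byte[] tampered = original.clone();
        tampered[0] ^= 1;                 // a single flipped bit

        System.out.println(sha256Hex(original).equals(sha256Hex(copy)));     // true
        System.out.println(sha256Hex(original).equals(sha256Hex(tampered))); // false
    }
}
```

Even a one-bit change produces a completely different digest, which is why recording the hash at acquisition time lets the copy's integrity be re-verified at any later point in the chain of custody.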
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services should be done after isolating the system from the network, because it ensures that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody should be done along with isolating the system from the network, because it ensures that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system should be done after isolating the system from the network, because it ensures that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.

Question: 42
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
A. Guaranteed recovery of all business functions
B. Minimization of the need for decision making during a crisis
C. Insurance against litigation following a disaster
D. Protection from loss of organization resources

Answer: B

Explanation:
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide several benefits, such as:
Improving the resilience and preparedness of the organization and its staff in handling a disruption or disaster
Enhancing the performance and efficiency of the organization and its systems in recovering from a disruption or disaster
Increasing the compliance and alignment of the organization and its plans with the internal or external requirements and standards
Facilitating the monitoring and improvement of the organization and its plans by identifying and addressing any gaps, issues, or risks
Minimization of the need for decision making during a crisis is the main benefit, because a BCP/DRP ensures that the organization and its staff have clear and consistent guidance and direction on how to respond and act during a disruption or disaster, avoiding any confusion, uncertainty, or inconsistency that might worsen the situation or its impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes. Guaranteed recovery of all business functions is not a benefit, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially a severe or prolonged one; a BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential ones. Insurance against litigation following a disaster is not a benefit, because a BCP/DRP does not guarantee that the organization will avoid legal or regulatory consequences or liabilities after a disruption or disaster, especially one caused by the organization's negligence or misconduct; a BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and the organization may still have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit, because a BCP/DRP does not prevent damage to or destruction of the organization's assets or resources during a disruption or disaster, especially a physical or natural one; a BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and the organization may still incur some costs or losses.

Question: 43
When is a Business Continuity Plan (BCP) considered to be valid?
A. When it has been validated by the Business Continuity (BC) manager
B. When it has been validated by the board of directors
C. When it has been validated by all threat scenarios
D. When it has been validated by realistic exercises
Answer: D

Explanation:
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is the part of a BCP/DRP that focuses on ensuring the continuous operation of the organization's critical business functions and processes during and after a disruption or disaster. A BCP should include various components, such as:
Business impact analysis: a process that identifies and prioritizes the critical business functions and processes, and assesses the potential impacts and risks of a disruption or disaster on them
Recovery strategies: a process that defines and selects the appropriate methods and resources to recover the critical business functions and processes, such as alternate sites, backup systems, or recovery teams
BCP document: a document that outlines and details the scope, purpose, and features of the BCP, such as the roles and responsibilities, the recovery procedures, and the contact information
Testing, training, and exercises: a process that evaluates and validates the effectiveness and readiness of the BCP, and educates and trains the relevant stakeholders, such as the staff, the management, and the customers, on the BCP and their roles and responsibilities
Maintenance and review: a process that monitors and updates the BCP, and addresses any changes or issues that might affect the BCP, such as the business requirements, the threat landscape, or the feedback and lessons learned
A BCP is considered to be valid when it has been validated by realistic exercises, because this ensures that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercises that involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide several benefits, such as:
Improving the confidence and competence of the organization and its staff in handling a disruption or disaster
Enhancing the performance and efficiency of the organization and its systems in recovering from a disruption or disaster
Increasing the compliance and alignment of the organization and its plans with the internal or external requirements and standards
Facilitating the monitoring and improvement of the organization and its plans by identifying and addressing any gaps, issues, or risks
The other options are not criteria for considering a BCP to be valid, but rather steps or parties involved in developing or approving a BCP. Validation by the Business Continuity (BC) manager is a step in developing a BCP. The BC manager is the person responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives; however, this is not enough to consider the BCP valid, as it does not test or demonstrate the BCP in a realistic scenario. Validation by the board of directors is an approval step. The board of directors is the group of people elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds; however, this approval is likewise not enough to consider the BCP valid, as it does not test or demonstrate the BCP in a realistic scenario. Validation by all threat scenarios is an unrealistic or impossible expectation. A threat scenario is a description or a simulation of a possible disruption or disaster that might affect the organization's critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP's performance and effectiveness in responding to and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP against all threat scenarios, as there are too many (or unknown) threat scenarios, and some are too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, not by all of them.

Question: 44
Recovery strategies of Disaster Recovery Planning (DRP) MUST be aligned with which of the following?
A. Hardware and software compatibility issues
B. Applications' criticality and downtime tolerance
C. Budget constraints and requirements
D. Cost/benefit analysis and business objectives
Answer: D

Explanation:
Recovery strategies of Disaster Recovery Planning (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of a BCP/DRP that focuses on restoring the normal operation of the organization's IT systems and infrastructure after a disruption or disaster. A DRP should include various components, such as:
Risk assessment: a process that identifies and evaluates the potential threats and vulnerabilities that might affect the IT systems and infrastructure, and estimates the likelihood and impact of a disruption or disaster
Recovery objectives: a process that defines and quantifies the acceptable levels of recovery for the IT systems and infrastructure, such as the recovery point objective (RPO), which is the maximum amount of data loss that can be tolerated, and the recovery time objective (RTO), which is the maximum amount of downtime that can be tolerated
Recovery strategies: a process that selects and implements the appropriate methods and resources to recover the IT systems and infrastructure, such as backup, replication, redundancy, or failover
DRP document: a document that outlines and details the scope, purpose, and features of the DRP, such as the roles and responsibilities, the recovery procedures, and the contact information
Testing, training, and exercises: a process that evaluates and validates the effectiveness and readiness of the DRP, and educates and trains the relevant stakeholders, such as the IT staff, the management, and the users, on the DRP and their roles and responsibilities
Maintenance and review: a process that monitors and updates the DRP, and addresses any changes or issues that might affect the DRP, such as the IT requirements, the threat landscape, or the feedback and lessons learned
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies, and determines the optimal one that provides the best value for money. A business objective is a goal or a target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the cost/benefit analysis and business objectives can help to:
Optimize the use and allocation of the IT resources and funds for the recovery
Minimize the negative impacts and risks of a disruption or disaster on the IT systems and infrastructure
Maximize the positive outcomes and benefits of the recovery for the IT systems and infrastructure
Support and enable the achievement of the organizational goals and targets through the IT systems and infrastructure
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies. Hardware and software compatibility issues should be considered when developing the recovery strategies, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve. Applications' criticality and downtime tolerance should be addressed when implementing the recovery strategies, because they determine the priority and urgency of recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements should be considered when developing the recovery strategies, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises.

Question: 45
A continuous information security-monitoring program can BEST reduce risk through which of the following?
A. Collecting security events and correlating them to identify anomalies
B. Facilitating system-wide visibility into the activities of critical user accounts
C. Encompassing people, process, and technology
D. Logging both scheduled and unscheduled system changes

Answer: C

Explanation:
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as:
Improving the security and risk management of the system or network by identifying and addressing the security weaknesses and gaps
  • 60. Questions & Answers PDF Page 60 Enhancing the security and decision making of the system or network by providing the evidence and information for the security analysis, evaluation, and reporting Increasing the security and improvement of the system or network by providing the feedback and input for the security response, remediation, and optimization Facilitating the compliance and alignment of the system or network with the internal or external requirements and standards A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the continuous information security monitoring program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program, and they represent the following: People: the human resources that are involved in the continuous information security monitoring program, such as the security analysts, the system administrators, the management, and the users. People are responsible for defining the security objectives and requirements, implementing and operating the security tools and controls, and monitoring and responding to the security events and incidents. Process: the procedures and policies that are followed in the continuous information security monitoring program, such as the security standards and guidelines, the security roles and responsibilities, the security workflows and tasks, and the security metrics and indicators. Process is responsible for establishing and maintaining the security governance and compliance, ensuring the security consistency and efficiency, and measuring and evaluating the security performance and effectiveness. 
Technology: the tools and systems used in the program, such as the security sensors and agents, the security loggers and collectors, the security analyzers and correlators, and the security dashboards and reports. Technology is responsible for supporting and enabling the security functions and capabilities, providing security visibility and awareness, and delivering the security data and information.

The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk, but it is not the best way, because it focuses on only one aspect of the security data and information, and does not address the others, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk, but it is not the best way, because it covers only one element of the system or network security, and does not cover the others, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation.
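The event collection and correlation mentioned above can be illustrated with a minimal sketch: count failed-login events per account and flag anomalies past a threshold. The log format, account names, and threshold are hypothetical, not taken from any particular tool.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FailedLoginCorrelator {
    // Correlate raw events into per-account failure counts.
    static Map<String, Integer> countFailures(List<String> events) {
        Map<String, Integer> counts = new HashMap<>();
        for (String event : events) {
            // Assumed log format: "RESULT user", e.g. "FAIL alice"
            String[] parts = event.split(" ");
            if (parts[0].equals("FAIL")) {
                counts.merge(parts[1], 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> events = List.of("FAIL alice", "OK bob", "FAIL alice", "FAIL alice");
        Map<String, Integer> counts = countFailures(events);
        final int threshold = 3; // anomaly threshold (assumption)
        counts.forEach((user, n) -> {
            if (n >= threshold) System.out.println("anomaly: " + user);
        });
    }
}
```

As the explanation notes, this kind of correlation is only one aspect of a monitoring program; a real deployment would feed such counts into broader analysis, reporting, and response workflows.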
Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it focuses on only one type of security event and activity, and does not cover the others, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.

Topic 8, Software Development Security

Question: 46

A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?

A. Least privilege
B. Privilege escalation
C. Defense in depth
D. Privilege bracketing

Answer: A

Explanation:

The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should have only the minimum access or permissions necessary to perform its function or task. Least privilege helps reduce the attack surface and the potential damage to a system or network by limiting the exposure and impact of a subject in case of a compromise or misuse. Java implements the principle of least privilege through its security model, which consists of several components, such as:

The Java Virtual Machine (JVM): a software layer that executes the Java bytecode and provides an abstraction from the underlying hardware and operating system. The JVM enforces security rules and restrictions on Java programs, such as memory protection, bytecode verification, and exception handling.

The Java Security Manager: a class that defines and controls the security policy and permissions for Java programs. The Java Security Manager can be configured and customized by the system administrator or the user, and can grant or deny the access or actions of Java programs, such as
the file I/O, the network communication, or the system properties.

The Java Security Policy: a file that specifies the security permissions for Java programs, based on the code source and the code signer. The Java Security Policy can be defined and modified by the system administrator or the user, and can assign different levels of permissions to different Java programs, such as trusted or untrusted ones.

The Java Security Sandbox: a mechanism that isolates and restricts Java programs that are downloaded or executed from untrusted sources, such as the web or the network. The Java Security Sandbox applies default or minimal security permissions to untrusted Java programs, and prevents them from accessing or modifying local resources or data, such as files, databases, or the registry.

In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs permissions to perform file I/O and network communication operations, which are considered sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with default or minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.

The other options are not security features of Java preventing the program from operating as intended, but rather concepts or techniques related to security in general or in other contexts.
Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than it is supposed to have, by exploiting a vulnerability or flaw in a system or network. Privilege escalation can help an attacker perform malicious actions or access sensitive resources or data by bypassing security controls or restrictions.

Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth helps protect a system or network from various threats and risks by using different types of security measures and controls, such as physical, technical, or administrative ones.

Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions to perform a specific function or task, and then return to its original or normal level. Privilege bracketing helps reduce the exposure and impact of a subject by minimizing the time and scope of its higher or lower access or permissions.
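The permission model discussed above can be illustrated with `java.security.Permission` objects: a grant that covers only reading one file does not imply the right to write another. This is a minimal sketch; the file paths are hypothetical, and note that the Security Manager itself has been deprecated for removal in recent Java releases, though the permission classes still behave as shown.

```java
import java.io.FilePermission;
import java.security.Permission;

public class LeastPrivilegeDemo {
    public static void main(String[] args) {
        // A policy that grants only read access to one file (hypothetical path)
        Permission granted = new FilePermission("/data/input.txt", "read");

        // Actions the program attempts
        Permission readAttempt  = new FilePermission("/data/input.txt", "read");
        Permission writeAttempt = new FilePermission("/data/output.txt", "write");

        // Under least privilege, only the explicitly granted action is implied
        System.out.println("read allowed:  " + granted.implies(readAttempt));   // true
        System.out.println("write allowed: " + granted.implies(writeAttempt));  // false
    }
}
```

This mirrors the scenario in the question: a program granted only sandbox-level permissions on computer C fails the `implies` check for its file I/O and network actions, so those operations are denied.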
Question: 47

Which of the following is the PRIMARY risk with using open source software in a commercial software construction?

A. Lack of software documentation
B. License agreements requiring release of modified code
C. Expiration of the license agreement
D. Costs associated with support of the software

Answer: B

Explanation:

The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported. One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in their terms and conditions, such as the scope, the duration, or the fees of the software. Some of the common types of license agreements for open source software are:

Permissive licenses: license agreements that allow the developers and users to freely use, modify, and distribute the open source software, with minimal or no restrictions. Examples of permissive licenses are the MIT License, the Apache License, or the BSD License.

Copyleft licenses: license agreements that require the developers and users to share and distribute the open source software and any modifications or derivatives of it, under the same or compatible license terms and conditions. Examples of copyleft licenses are the GNU General Public License (GPL), the GNU Lesser General Public License (LGPL), or the Mozilla Public License (MPL).
Mixed licenses: license agreements that combine elements of permissive and copyleft licenses, and may apply different license terms and conditions to different parts or components of the open source software. Examples of mixed licenses are the Eclipse Public License (EPL), the Common Development and Distribution License (CDDL), or the GNU Affero General Public License (AGPL).

The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any
modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.

The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing open source software that has adequate or alternative support options.

Question: 48

When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?

A. After the system preliminary design has been developed and the data security categorization has been performed
B. After the vulnerability analysis has been performed and before the system detailed design begins
C. After the system preliminary design has been developed and before the data security categorization begins
D. After the business functional analysis and the data security categorization have been performed

Answer: D

Explanation:

Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as:
System initiation: This phase involves defining the scope, purpose, and objectives of the system, identifying the stakeholders and their needs and expectations, and establishing the project plan and budget.

System acquisition and development: This phase involves designing the architecture and components of the system, selecting and procuring the hardware and software resources, developing and coding the system functionality and features, and integrating and testing the system modules and interfaces.

System implementation: This phase involves deploying and installing the system to the production environment, migrating and converting the data and applications from the legacy system, training and educating the users and staff on the system operation and maintenance, and evaluating and validating the system performance and effectiveness.

System operations and maintenance: This phase involves operating and monitoring the system functionality and availability, maintaining and updating the system hardware and software, resolving and troubleshooting any issues or problems, and enhancing and optimizing the system features and capabilities.

Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks.
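The data security categorization that feeds these requirements is often a high-water-mark exercise in the style of NIST FIPS 199: the overall category is the highest impact level across confidentiality, integrity, and availability. A minimal sketch, with hypothetical impact assignments:

```java
import java.util.List;

public class SecurityCategorization {
    enum Impact { LOW, MODERATE, HIGH }   // FIPS 199-style impact levels

    // High-water mark: the overall categorization is the highest impact
    // level among confidentiality (c), integrity (i), and availability (a).
    static Impact categorize(Impact c, Impact i, Impact a) {
        Impact max = c;
        for (Impact x : List.of(i, a)) {
            if (x.ordinal() > max.ordinal()) max = x;
        }
        return max;
    }

    public static void main(String[] args) {
        // Hypothetical system: moderate integrity impact dominates
        System.out.println(categorize(Impact.LOW, Impact.MODERATE, Impact.LOW)); // MODERATE
    }
}
```

The resulting category then drives which security controls, and hence which security functional requirements, the system must carry into design and development.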
The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because this ensures that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.

The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed
and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.

Question: 49

Which of the following is the BEST method to prevent malware from being introduced into a production environment?

A. Purchase software from a limited list of retailers
B. Verify the hash key or certificate key of all updates
C. Do not permit programs, patches, or updates from the Internet
D. Test all new software in a segregated environment

Answer: D

Explanation:

Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as:

Preventing the infection or propagation of malware to the production environment

Detecting and resolving any issues or risks caused by the software

Ensuring the compatibility and interoperability of the software with the production environment

Supporting and enabling the quality assurance and improvement of the software

The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or
distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process.

Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key.

Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.

Question: 50

The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?

A. System acquisition and development
B. System operations and maintenance
C. System initiation
D. System implementation

Answer: A

Explanation:

The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives
and activities, such as:

System initiation: This phase involves defining the scope, purpose, and objectives of the system, identifying the stakeholders and their needs and expectations, and establishing the project plan and budget.

System acquisition and development: This phase involves designing the architecture and components of the system, selecting and procuring the hardware and software resources, developing and coding the system functionality and features, and integrating and testing the system modules and interfaces.

System implementation: This phase involves deploying and installing the system to the production environment, migrating and converting the data and applications from the legacy system, training and educating the users and staff on the system operation and maintenance, and evaluating and validating the system performance and effectiveness.

System operations and maintenance: This phase involves operating and monitoring the system functionality and availability, maintaining and updating the system hardware and software, resolving and troubleshooting any issues or problems, and enhancing and optimizing the system features and capabilities.

The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as:

Security categorization: This task involves determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures.
Security planning: This task involves defining the security objectives and requirements of the system, identifying the roles and responsibilities of the security stakeholders, and developing and documenting the security plan and policy.

Security implementation: This task involves implementing and enforcing the security controls and measures for the system, according to the security plan and policy, and ensuring the security functionality and compatibility of the system.

Security assessment: This task involves evaluating and testing the security effectiveness and compliance of the system, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps.

Security authorization: This task involves reviewing and approving the security assessment results and recommendations, and granting or denying the authorization for the system operation and
  • 69. Questions & Answers PDF maintenance, based on the risk and impact analysis and the security objectives and requirements. Security monitoring: This task involves monitoring and updating the security status and activities of the system, using various methods and tools, such as logs, alerts, or reports, and addressing and resolving any security issues or changes. The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. 
Configuration management and control can provide several benefits, such as:

Improving the quality and security of the system design and development by identifying and addressing any errors or inconsistencies

Enhancing the performance and efficiency of the system design and development by optimizing the use and allocation of the system components and resources

Increasing the compliance and alignment of the system design and development with the security objectives and requirements by applying and enforcing the security controls and measures

Facilitating the monitoring and improvement of the system design and development by providing the evidence and information for the security assessment and authorization

The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that incorporate other tasks of that process. System operations and maintenance is the phase of the SDLC that incorporates the security monitoring task, because it ensures that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is the phase of the SDLC that incorporates the security categorization and security planning tasks, because it ensures that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented.
System implementation is the phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it ensures that the system deployment and installation are evaluated and verified for security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
Question: 51

What is the BEST approach to addressing security issues in legacy web applications?

A. Debug the security issues
B. Migrate to newer, supported applications where possible
C. Conduct a security assessment
D. Protect the legacy application with a web application firewall

Answer: B

Explanation:

Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with current technologies and standards. Legacy web applications may have various security issues, such as:

Vulnerabilities and bugs that are not fixed or patched by the developers or vendors

Weak or obsolete encryption and authentication mechanisms that are easily broken or bypassed by attackers

Lack of compliance with the security policies and regulations that are applicable to the web applications

Incompatibility or interoperability issues with the newer web browsers, operating systems, or platforms that are used by the users or clients

Migrating to newer, supported applications where possible is the best approach, because it can provide several benefits, such as:

Enhancing the security and performance of the web applications by using the latest technologies and standards, which are more secure and efficient

Reducing the risk and impact of web application attacks by eliminating or minimizing the vulnerabilities and bugs present in the legacy web applications

Increasing the compliance and alignment of the web applications with the applicable security policies and regulations

Improving the compatibility and interoperability of the web applications with the newer web
browsers, operating systems, or platforms that are used by the users or clients

The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible for legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible for legacy web applications that are incompatible or obsolete.

Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on predefined rules or policies, which may not be effective or efficient for legacy web applications that have weak or outdated encryption or authentication mechanisms.

Topic 9, Exam Set A

Question: 52

Which of the following methods protects Personally Identifiable Information (PII) by use of a full replacement of the data element?

A. Transparent Database Encryption (TDE)
B. Column level database encryption
C. Volume encryption
D. Data tokenization

Answer: D

Explanation:
Question: 53
Which of the following elements MUST a compliant EU-US Safe Harbor Privacy Policy contain?
A. An explanation of how long the data subject's collected information will be retained for and how it will be eventually disposed.
B. An explanation of who can be contacted at the organization collecting the information if corrections are required by the data subject.
C. An explanation of the regulatory frameworks and compliance standards the information collecting organization adheres to.
D. An explanation of all the technologies employed by the collecting organization in gathering information on the data subject.
Answer: B
Explanation:
The EU-US Safe Harbor Privacy Policy is a framework that was established in 2000 to enable the transfer of personal data from the European Union to the United States, while ensuring adequate protection of the data subject's privacy rights. The framework was invalidated by the European Court of Justice in 2015 and replaced by the EU-US Privacy Shield in 2016. However, the Safe Harbor Privacy Policy still serves as a reference for the principles and requirements of data protection across the Atlantic. One of the elements that a compliant Safe Harbor Privacy Policy must contain is an explanation of who can be contacted at the organization collecting the information if corrections are required by the data subject. This is part of the principle of access, which states that individuals must have access to their personal information and be able to correct, amend, or delete it where it is inaccurate.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 295; CISSP For Dummies, 7th Edition, Chapter 10, page 284; Official (ISC)2 CISSP CBK Reference, 5th Edition.

Data tokenization is a method of protecting PII by replacing the sensitive data element with a non-sensitive equivalent, called a token, that has no extrinsic or exploitable meaning or value. The token is then mapped back to the original data element in a secure database. This way, the PII is not exposed in data processing or storage, and only authorized parties can access the original data element. Data tokenization is different from encryption, which transforms the data element into a ciphertext that can be decrypted with a key. Data tokenization does not require a key, and the token cannot be reversed to reveal the original data element.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 281; CISSP For Dummies, 7th Edition, Chapter 10, page 289.
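The token-vault mechanics described above can be sketched in a few lines. The `TokenVault` class and `tok_` prefix are illustrative assumptions, not a production design; the key property shown is that the token is random rather than derived from the PII, so it cannot be reversed without the vault.

```python
import secrets

# Toy token vault illustrating data tokenization (sketch, not production code):
# the sensitive value is swapped for a random token with no exploitable
# relationship to the original, and the mapping lives only in a protected store.
class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original value (would be a secured DB)

    def tokenize(self, pii: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, not derived from pii
        self._vault[token] = pii
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]  # only authorized callers should reach this

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
print(token.startswith("tok_"))  # True: the token itself reveals nothing
print(vault.detokenize(token))   # original value, recoverable via the vault only
```

Note the contrast with encryption that the text draws: there is no key here, and no computation can turn the token back into the PII without the vault's mapping.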
Question: 55
The PRIMARY purpose of a security awareness program is to
A. ensure that everyone understands the organization's policies and procedures.
B. communicate that access to information will be granted on a need-to-know basis.
C. warn all users that access to all systems will be monitored on a daily basis.
D. comply with regulations related to data and information protection.
Answer: A
Explanation:
The primary purpose of a security awareness program is to ensure that everyone understands the organization's policies and procedures related to information security. A security awareness program is a set of activities, materials, or events that aims to educate and inform the employees, contractors, partners, and customers of the organization about its security goals, principles, and practices. It can help to create a security culture, improve security behavior, and reduce human errors or risks. Communicating that access to information will be granted on a need-to-know basis, warning all users that access to all systems will be monitored on a daily basis, and complying with regulations related to data and information protection are not the primary purposes of a security awareness program; they are more specific or secondary objectives that may be part of the program, but not its main goal.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 28.

Question: 56
As one component of a physical security system, an Electronic Access Control (EAC) token is BEST known for its ability to
A. overcome the problems of key assignments.
B. monitor the opening of windows and doors.
C. trigger alarms when intruders are detected.
D. lock down a facility during an emergency.
Answer: A
Explanation:
An Electronic Access Control (EAC) token is best known for its ability to overcome the problems of key assignments in a physical security system. An EAC token is a device that can be used to authenticate a user or grant access to a physical area or resource, such as a door, a gate, or a locker. It can be a smart card, a magnetic stripe card, a proximity card, a key fob, or a biometric device. An EAC token overcomes the problems of key assignments, which are the challenges of managing and distributing physical keys to authorized users, such as lost, stolen, duplicated, or unreturned keys. It provides more security, convenience, and flexibility than a physical key, as it can be easily activated, deactivated, or replaced, and it can also store additional information or perform other functions. Monitoring the opening of windows and doors, triggering alarms when intruders are detected, and locking down a facility during an emergency are not the abilities an EAC token is best known for; they are functions of other components of a physical security system, such as sensors, alarms, or locks.
Reference: CISSP For Dummies, 7th Edition, Chapter 9, page 253.

Question: 57
Which one of the following is a fundamental objective in handling an incident?
A. To restore control of the affected systems
B. To confiscate the suspect's computers
C. To prosecute the attacker
D. To perform full backups of the system
Answer: A
Explanation:
A fundamental objective in handling an incident is to restore control of the affected systems as soon as possible. An incident is an event or situation that violates or threatens the security, confidentiality, integrity, or availability of an organization's information assets or resources. Handling an incident is the process of responding to, containing, analyzing, recovering from, and reporting on the incident, with the aim of minimizing its impact and preventing its recurrence. Restoring control of the affected systems is a crucial objective, as it helps to resume the normal operations, services, and functions of the organization, and to mitigate the damage or loss caused by the incident. Confiscating the suspect's computers, prosecuting the attacker, and performing full backups of the system are not fundamental objectives in handling an incident, as they relate to the investigation, legal, or recovery aspects, which may not be as urgent or essential as restoring control of the affected systems.
References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 7, page 375; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 9, page 559.

Question: 58
In the area of disaster planning and recovery, what strategy entails the presentation of information about the plan?
A. Communication
B. Planning
C. Recovery
D. Escalation
Answer: A
Explanation:
Communication is the strategy that involves the presentation of information about the disaster recovery plan to the stakeholders, such as management, employees, customers, vendors, and regulators. Communication ensures that everyone is aware of their roles and responsibilities in the event of a disaster, and that the plan is updated and tested regularly.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1019; CISSP For Dummies, 7th Edition, Chapter 10, page 343.

Question: 59
The process of mutual authentication involves a computer system authenticating a user and authenticating the
A. user to the audit process.
B. computer system to the user.
C. user's access to all authorized objects.
D. computer system to the audit process.
Answer: B
Explanation:
Mutual authentication is the process of verifying the identity of both parties in a communication. The computer system authenticates the user by verifying their credentials, such as a username and password, biometrics, or tokens. The user authenticates the computer system by verifying its identity, such as through a digital certificate, a trusted third party, or a challenge-response mechanism.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 515; CISSP For Dummies, 7th Edition, Chapter 5, page 151.

Question: 60
What maintenance activity is responsible for defining, implementing, and testing updates to application systems?
A. Program change control
B. Regression testing
C. Export exception control
D. User acceptance testing
Answer: A
Explanation:
Program change control is the maintenance activity responsible for defining, implementing, and testing updates to application systems. It ensures that changes are authorized, documented, reviewed, tested, and approved before being deployed to the production environment, and it maintains a record of the changes and their impact on the system.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 823; CISSP For Dummies, 7th Edition, Chapter 8, page 263.

Question: 61
Which one of the following describes granularity?
A. Maximum number of entries available in an Access Control List (ACL)
B. Fineness to which a trusted system can authenticate users
C. Number of violations divided by the number of total accesses
D. Fineness to which an access control system can be adjusted
Answer: D
Explanation:
Granularity is the degree of detail or precision that an access control system can provide. A granular access control system can specify different levels of access for different users, groups, resources, or conditions. For example, a granular firewall can allow or deny traffic based on the source, destination, port, protocol, time, or other criteria.

Question: 62
An organization is selecting a service provider to assist in the consolidation of multiple computing sites including development, implementation and ongoing support of various computer systems. Which of the following MUST be verified by the Information Security Department?
A. The service provider's policies are consistent with ISO/IEC 27001 and there is evidence that the service provider is following those policies.
B. The service provider will segregate the data within its systems and ensure that each region's policies are met.
C. The service provider will impose controls and protections that meet or exceed the current systems controls and produce audit logs as verification.
D. The service provider's policies can meet the requirements imposed by the new environment even if they differ from the organization's current policies.
Answer: C
Explanation:
The Information Security Department must verify that the service provider will impose controls and protections that meet or exceed the current systems controls and produce audit logs as verification. This ensures that the service provider will maintain or improve the security posture of the organization, and that the organization will be able to monitor and audit the service provider's performance and compliance. The service provider's policies may or may not be consistent with ISO/IEC 27001, but this is not a mandatory requirement, as long as the service provider can meet the organization's security needs and expectations. The service provider may or may not … regulatory obligations. The service provider's policies may differ from the organization's current policies, as long as they can meet the requirements imposed by the new environment and are agreed upon by both parties.
References: How to Choose a Managed Security Service Provider (MSSP); 10 Questions to Ask Your Managed Security Service Provider.

Question: 63
What technique BEST describes antivirus software that detects viruses by watching anomalous behavior?
A. Signature
B. Inference
C. Induction
D. Heuristic
Answer: D
Explanation:
Heuristic is the technique that best describes antivirus software that detects viruses by watching anomalous behavior. Heuristic detection analyzes the behavior and characteristics of a program or file, rather than comparing it to a known signature or pattern, and can therefore detect unknown or new viruses that have not yet been identified or cataloged by the antivirus software. However, heuristic detection can also generate false positives, as some legitimate programs or files may exhibit suspicious or unusual behavior.
References: What is Heuristic Analysis?; Heuristic Virus Detection.

Question: 64
Which of the following is the FIRST action that a system administrator should take when it is revealed during a penetration test that everyone in an organization has unauthorized access to a server holding sensitive data?
A. Immediately document the finding and report to senior management.
B. Use system privileges to alter the permissions to secure the server
C. Continue the testing to its completion and then inform IT management
D. Terminate the penetration test and pass the finding to the server management team
Answer: A
Explanation:
If a system administrator discovers a serious security breach during a penetration test, such as unauthorized access to a server holding sensitive data, the first action to take is to immediately document the finding and report it to senior management. Senior management is ultimately responsible for the security of the organization and its assets, and needs to be aware of the situation in order to take appropriate actions to mitigate the risk and prevent further damage. Documenting the finding is also important to provide evidence and support for the report, and to comply with any legal or regulatory requirements. Using system privileges to alter the permissions to secure the server, continuing the testing to its completion, or terminating the penetration test and passing the finding to the server management team are not the first actions to take, as they may not address the root cause of the problem, may interfere with the ongoing testing, or may delay the notification of senior management.

Question: 65
Which of the following BEST represents the principle of open design?
A. Disassembly, analysis, or reverse engineering will reveal the security functionality of the computer system.
B. Algorithms must be protected to ensure the security and interoperability of the designed system.
C. A knowledgeable user should have limited privileges on the system to prevent their ability to compromise security capabilities.
D. The security of a mechanism should not depend on the secrecy of its design or implementation.
Answer: D
Explanation:
This is the principle of open design, which states that the security of a system or mechanism should rely on the strength of its key or algorithm, rather than on the obscurity of its design or implementation. The principle is based on the assumption that the adversary has full knowledge of the system or mechanism, and that security should still hold even in that case. The other options are not consistent with the principle of open design, as they either imply that security depends on hiding or protecting the design or implementation (A and B), or that the user's knowledge or privileges affect the security (C).
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 109.
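The challenge-response mechanism mentioned in the mutual-authentication question above can be sketched with a shared-secret HMAC exchange. This is a minimal illustration assuming a pre-provisioned key; it also echoes the open-design principle, since the security rests entirely on the key, not on any secret algorithm.

```python
import hashlib
import hmac
import secrets

# Sketch of mutual authentication via shared-secret challenge-response.
# The key and names are illustrative assumptions.
SHARED_KEY = b"pre-provisioned shared secret"

def respond(challenge: bytes, key: bytes) -> bytes:
    """Prove knowledge of the key by returning a MAC over the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Client authenticates the server: client sends a fresh nonce, server
# returns a correct MAC over it.
client_nonce = secrets.token_bytes(16)
server_proof = respond(client_nonce, SHARED_KEY)
assert hmac.compare_digest(server_proof, respond(client_nonce, SHARED_KEY))

# Server authenticates the client the same way with its own nonce,
# completing authentication in both directions.
server_nonce = secrets.token_bytes(16)
client_proof = respond(server_nonce, SHARED_KEY)
assert hmac.compare_digest(client_proof, respond(server_nonce, SHARED_KEY))
print("mutual authentication succeeded")
```

Fresh nonces on each side prevent a recorded proof from being replayed later, which is why each party issues its own challenge.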
Question: 66
A security consultant has been asked to research an organization's legal obligations to protect privacy-related information. What kind of reading material is MOST relevant to this project?
A. The organization's current security policies concerning privacy issues
B. Privacy-related regulations enforced by governing bodies applicable to the organization
C. Privacy best practices published by recognized security standards organizations
D. Organizational procedures designed to protect privacy information
Answer: B
Explanation:
The most relevant reading material for researching an organization's legal obligations to protect privacy-related information is the privacy-related regulations enforced by governing bodies applicable to the organization. These regulations define the legal requirements, standards, and penalties for collecting, processing, storing, and disclosing personal or sensitive information of individuals or entities. The organization must comply with these regulations to avoid legal liabilities, fines, or sanctions. The other options are not as relevant, as they either do not reflect the legal obligations of the organization (A and C) or do not apply to all types of privacy-related information (D).
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 22; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 31.

Question: 67
According to best practice, which of the following groups is the MOST effective in performing an information security compliance audit?
A. In-house security administrators
B. In-house Network Team
C. Disaster Recovery (DR) Team
D. External consultants
Answer: D
Explanation:
According to best practice, the most effective group for performing an information security compliance audit is external consultants. External consultants are independent and objective third parties that can provide an unbiased and impartial assessment of the organization's compliance with security policies, standards, and regulations. They can also bring expertise, experience, and best practices from other organizations and industries, and offer recommendations for improvement. The other options are not as effective, as they either have a conflict of interest or lack independence (A and B), or do not have the primary role or responsibility of conducting compliance audits (C).
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 240; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 302.

Topic 10, Exam Set B

Question: 68
An organization decides to implement a partial Public Key Infrastructure (PKI) with only the servers having digital certificates. What is the security benefit of this implementation?
A. Clients can authenticate themselves to the servers.
B. Mutual authentication is available between the clients and servers.
C. Servers are able to issue digital certificates to the client.
D. Servers can authenticate themselves to the client.
Answer: D
Explanation:
A Public Key Infrastructure (PKI) is a system that provides the services and mechanisms for creating, managing, distributing, using, storing, and revoking digital certificates, which are electronic documents that bind a public key to an identity. A digital certificate can be used to authenticate the identity of an entity, such as a person, a device, or a server, that possesses the corresponding private key. An organization can implement a partial PKI in which only the servers have digital certificates, which means that only the servers can prove their identity to the clients, and not vice versa. The security benefit of this implementation is that servers can authenticate themselves to the client, which can prevent impersonation, spoofing, or man-in-the-middle attacks by malicious servers. Client authentication, mutual authentication, and servers issuing certificates to clients are not benefits of this implementation, as they would require the clients to have digital certificates as well.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 615; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 631.
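The server-only authentication posture from the PKI question above is exactly what a default TLS client configuration expresses: the client verifies the server's certificate against trusted CAs, while presenting no certificate of its own. A small sketch using Python's standard-library ssl module:

```python
import ssl

# A default TLS client context verifies the server but not the client,
# matching a partial PKI where only servers hold certificates.
context = ssl.create_default_context()  # loads the system's trusted CA certs

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must verify
print(context.check_hostname)                    # True: cert must match hostname

# No client certificate is loaded, so the server cannot authenticate us;
# mutual TLS would additionally require context.load_cert_chain(...).
```

The design choice mirrors the question's answer: with certificates on the servers only, the client gains protection against impersonated servers, but the server must authenticate clients by other means (for example, passwords over the established channel).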
Question: 69
When implementing a secure wireless network, which of the following supports authentication and authorization for individual client endpoints?
A. Temporal Key Integrity Protocol (TKIP)
B. Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK)
C. Wi-Fi Protected Access 2 (WPA2) Enterprise
D. Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)
Answer: C
Explanation:
When implementing a secure wireless network, the option that supports authentication and authorization for individual client endpoints is Wi-Fi Protected Access 2 (WPA2) Enterprise. WPA2 is a security protocol that provides encryption and authentication for wireless networks, based on the IEEE 802.11i standard. WPA2 has two modes: Personal and Enterprise. WPA2 Personal uses a Pre-Shared Key (PSK) that is shared among all the devices on the network and does not require a separate authentication server. WPA2 Enterprise uses the Extensible Authentication Protocol (EAP) to authenticate each device individually, using a username and password or a certificate, and requires a Remote Authentication Dial-In User Service (RADIUS) server or another authentication server. WPA2 Enterprise provides more security and granularity than WPA2 Personal, as it can support different levels of access and permissions for different users or groups, and can prevent unauthorized or compromised devices from joining the network. Temporal Key Integrity Protocol (TKIP), Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK), and Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) do not support authentication and authorization for individual client endpoints, as they relate to the encryption or integrity of the wireless data, not the identity or access of the wireless devices.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 506; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 522.

Question: 70
A thorough review of an organization's audit logs finds that a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient. What type of attack has MOST likely occurred?
A. Spoofing
B. Eavesdropping
C. Man-in-the-middle
D. Denial of service
Answer: C
Explanation:
The attack that has most likely occurred is a man-in-the-middle (MITM) attack. A MITM attack involves an attacker intercepting, modifying, or redirecting the communication between two parties without their knowledge or consent. The attacker can alter, delete, or inject data, or impersonate one of the parties, to achieve malicious goals such as stealing information, compromising security, or disrupting service. A MITM attack can be performed against various types of networks or protocols, such as email, web, or wireless. Spoofing, eavesdropping, and denial of service are not the most likely attacks in this scenario, as they involve the falsification, observation, or prevention of the communication rather than its modification or manipulation.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.

Which of the following is the MOST effective attack against cryptographic hardware modules?
A. Plaintext
B. Brute force
C. Power analysis
D. Man-in-the-middle (MITM)
Answer: C
Explanation:
The most effective attack against cryptographic hardware modules is power analysis. Power analysis is a type of side-channel attack that exploits the physical characteristics or behavior of a cryptographic device, such as a smart card, a hardware security module, or a cryptographic processor, to extract secret information such as keys, passwords, or algorithms. It measures the power consumption or electromagnetic radiation of the device and analyzes the variations or patterns that correspond to the cryptographic operations or the data being processed. Power analysis can reveal the internal state or logic of the device, can bypass its security mechanisms or tamper resistance, can be performed with low-cost and widely available equipment, and can be very difficult to detect or prevent. Plaintext, brute force, and man-in-the-middle (MITM) attacks are less effective against cryptographic hardware modules, as they target the encryption or transmission of the data rather than the physical properties or behavior of the device.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 628; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 644.

Question: 71
Which of the following is the MOST difficult to enforce when using cloud computing?
A. Data access
B. Data backup
C. Data recovery
D. Data disposal
Answer: D
Explanation:
The most difficult thing to enforce when using cloud computing is data disposal. Data disposal is the process of permanently deleting or destroying data that is no longer needed or authorized, in a secure and compliant manner. It is challenging in cloud computing because the data may be stored or replicated across multiple locations, devices, or servers, and the cloud provider may not have the same policies, procedures, or standards as the cloud customer. Data disposal may also be affected by the legal or regulatory requirements of different jurisdictions, or by the contractual obligations of the cloud service agreement. Data access, data backup, and data recovery are not the most difficult to enforce, as they can be achieved with encryption, authentication, authorization, replication, or restoration techniques, and by specifying service level agreements and the roles and responsibilities of the cloud provider and the cloud customer.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 337; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 353.
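On the intercepted-email scenario above, a message authentication code illustrates how the recipient could have detected the administrator's modifications. This HMAC sketch stands in for the signing that S/MIME or PGP would provide in practice; the key and message text are illustrative assumptions.

```python
import hashlib
import hmac

# Plain email carries no integrity protection, which is what the MITM attack
# exploited. A MAC over the message lets the recipient detect modification.
key = b"key shared by CEO and recipient"  # illustrative assumption

def seal(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

original = b"Approve the Q3 budget as discussed."
tag = seal(original)  # sent alongside the message

tampered = b"Wire the funds to the new account."  # attacker's substitution
print(hmac.compare_digest(seal(original), tag))   # True: message intact
print(hmac.compare_digest(seal(tampered), tag))   # False: tampering detected
```

Because the attacker does not hold the key, any in-transit change to the message invalidates the tag, turning silent modification into a detectable failure.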
Question: 72
DRAG DROP
Given the various means to protect physical and logical assets, match the access management area to the technology.
Answer:
Facilities - Window
Devices - Firewall
Information Systems - Authentication
(Note: "Encryption" is not matched, as it can be applied in various areas including Devices and Information Systems.)
Explanation:
In the context of protecting physical and logical assets, the access management areas and the technologies can be matched as follows:
- Facilities are the physical buildings or locations that house the organization's assets, such as servers, computers, or documents. Facilities can be protected by using windows that are resistant to breakage, intrusion, or eavesdropping, and that can prevent the leakage of light or sound from inside the facilities.
- Devices are the hardware or software components that enable the communication or processing of data, such as routers, switches, firewalls, or applications. Devices can be protected by using firewalls that can filter, block, or allow network traffic based on predefined rules or policies, and that can prevent unauthorized or malicious access or attacks to the devices or the data.
- Information Systems are the systems that store, process, or transmit data, such as databases, servers, or applications. Information Systems can be protected by using authentication mechanisms that can verify the identity or credentials of the users or devices that request access, and that can prevent impersonation or spoofing of the users or the devices.
- Encryption is a technology that can be applied in various areas, such as Devices or Information Systems, to protect the confidentiality or integrity of the data. Encryption transforms the data into an unreadable or unrecognizable form, using a secret key or an algorithm, and can prevent the interception, disclosure, or modification of the data by unauthorized parties.

Question: 73
Refer to the information below to answer the question. A security practitioner detects client-based attacks on the organization's network. A plan will be necessary to address these concerns. What is the BEST reason for the organization to pursue a plan to mitigate client-based attacks?
A. Client privilege administration is inherently weaker than server privilege administration.
B. Client hardening and management is easier on clients than on servers.
C. Client-based attacks are more common and easier to exploit than server and network based attacks.
D. Client-based attacks have higher financial impact.
Answer: C
Explanation:
The best reason for the organization to pursue a plan to mitigate client-based attacks is that client-based attacks are more common and easier to exploit than server and network based attacks. Client-based attacks target client applications or systems, such as web browsers, email clients, or media players, and exploit vulnerabilities or weaknesses in the client software or configuration, or in user behavior or interaction. They are more common and easier to exploit because client applications are more exposed and accessible to attackers, client software and configurations are more diverse and complex to secure, and user behavior is more unpredictable and prone to error. The organization therefore needs a plan to mitigate client-based attacks, as they pose a significant threat to the organization's data, systems, and network. Client privilege administration being inherently weaker than server privilege administration, client hardening and management being easier on clients than on servers, and client-based attacks having a higher financial impact are not the best reasons, as they are not supported by the facts or evidence, or are not relevant or specific to client-side security.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1050; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1066.
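The encryption entry in the matching exercise above, transforming data into an unreadable form with a secret key, can be illustrated with a toy keystream cipher. This SHA-256-based sketch is for illustration only and is not a vetted cipher; the key and message are made-up values.

```python
import hashlib

# Toy illustration of encryption: a secret key turns readable data into an
# unreadable form and back. Not a production cipher; use a vetted library
# (e.g. an AES-GCM implementation) for real work.
def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; applying it twice restores the plaintext."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"shared secret key"
plaintext = b"confidential design notes"
ciphertext = xor_crypt(plaintext, secret)
print(ciphertext != plaintext)          # unreadable without the key
print(xor_crypt(ciphertext, secret))    # round-trips to the plaintext
```

The symmetry of XOR makes the same function serve for both directions, which keeps the sketch short while still showing the key-dependent transformation the exercise describes.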
Question: 74
Refer to the information below to answer the question. An organization has hired an information security officer to lead their security department. The officer has adequate people resources but is lacking the other necessary components to have an effective security program. There are numerous initiatives requiring security involvement. The security program can be considered effective when
A. vulnerabilities are proactively identified.
B. audits are regularly performed and reviewed.
C. backups are regularly performed and validated.
D. risk is lowered to an acceptable level.
Answer: D
Explanation:
The security program can be considered effective when risk is lowered to an acceptable level. Risk is the likelihood of a threat exploiting a vulnerability and causing a negative impact on the organization's assets, operations, or objectives. The security program is the set of activities and initiatives that protect the organization's information systems and resources from security threats and risks, and that support the organization's business needs and requirements. It can be considered effective when it achieves its goals and objectives, and when it reduces risk to a level that is acceptable or tolerable to the organization, based on its risk appetite or tolerance. Proactively identifying vulnerabilities, regularly performing and reviewing audits, and regularly performing and validating backups are not criteria for measuring the effectiveness of the security program, as they concern the methods or processes of the program rather than its outcomes or results.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 24; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 39.

Refer to the information below to answer the question. In a Multilevel Security (MLS) system, the following sensitivity labels are used in increasing levels of sensitivity: restricted, confidential, secret, top secret. Table A lists the clearance levels for four users, while Table B lists the security classes of four different files.
Question: 76
In a Bell-LaPadula system, which user cannot write to File 3?
A. User A
B. User B
C. User C
D. User D
Answer: D
Explanation:
In a Bell-LaPadula system, a subject cannot write data to an object with a lower security classification than the subject's own clearance. This is the star property (*-property) of the Bell-LaPadula model: a subject may write to an object only if the object's security level is greater than or equal to the subject's. It is also known as the "no write-down" rule, because it prevents information from leaking from a higher level to a lower one. Here, User D holds a Top Secret clearance while File 3 is classified Secret, so User D cannot write to File 3 without violating the star property. Users A, B, and C hold clearances at or below Secret, so for them writing to File 3 is a write "up" or "across" and is permitted by the star property.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 498; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 514.

Refer to the information below to answer the question. A large, multinational organization has decided to outsource a portion of its Information Technology (IT) organization to a third-party provider's facility. This provider will be responsible for the design, development, testing, and support of several critical, customer-based applications used by the organization.
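The no-write-down check described above can be sketched in a few lines. This is an illustrative model only; the level ordering mirrors the sensitivity labels from the question, and the function name is an assumption.

```python
# Sketch of the Bell-LaPadula star property (no write-down).
# Level ordering follows the question: restricted < confidential < secret < top secret.
LEVELS = {"restricted": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_write(subject_level: str, object_level: str) -> bool:
    """Star property: a subject may write only to objects at or above its own level."""
    return LEVELS[object_level] >= LEVELS[subject_level]

# User D (top secret) attempting to write to File 3 (secret):
print(can_write("top secret", "secret"))  # False - blocked by no write-down
# User B (e.g. confidential) writing "up" to File 3 (secret):
print(can_write("confidential", "secret"))  # True - writing up is allowed
```

The check captures why only User D is blocked: every other user's clearance is at or below File 3's class, so the object's level is greater than or equal to the subject's.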
Question: 77
What additional considerations are there if the third party is located in a different country?
A. The organizational structure of the third party and how it may impact timelines within the organization
B. The ability of the third party to respond to the organization in a timely manner and with accurate information
C. The effects of transborder data flows and customer expectations regarding the storage or processing of their data
D. The quantity of data that must be provided to the third party and how it is to be used
Answer: C
Explanation:
When the third party operates in a different country, the key additional considerations are transborder data flows and customer expectations about where and how customer data is stored or processed. Transborder data flows are transfers of data across national or regional borders, for example through the Internet, cloud services, or outsourcing; they are subject to the differing laws, regulations, standards, and cultures of the jurisdictions involved, which affects the security, privacy, compliance, and sovereignty of the data. Customer expectations about the handling of their data influence the organization's reputation, customer loyalty, and profitability, and may carry legal, contractual, ethical, or cultural implications. The organization should weigh these effects and communicate, negotiate, and align with both the third party and its customers accordingly. The third party's organizational structure, its responsiveness, and the quantity of data to be shared (options A, B, and D) are general outsourcing concerns that apply regardless of where the provider is located.
Question: 78
HOTSPOT
Identify the component that MOST likely lacks digital accountability related to information access. Click on the correct device in the image below.
Answer: Laptop
Explanation:
A laptop is portable and may be used outside the organization's secure network, where it is not subject to the same monitoring and logging as the other components; this makes it harder to hold users accountable for their access to information. The Storage Area Network (SAN) is designed for centralized storage, and access control mechanisms can be implemented there to track users and their activities. The backup media, backup server, database server, and web server are typically secured components with user authentication and access-logging mechanisms.

DRAG DROP
Place the following information classification steps in sequential order.
Answer:
1. Document the information assets
2. Assign a classification level
3. Apply the appropriate security markings
4. Conduct periodic classification reviews
5. Declassify information when appropriate
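The ordered steps and the label-to-marking idea behind them can be represented as simple data structures. This is an illustrative sketch only; the class names, label set, and banner-style marking format are assumptions, not part of any standard.

```python
# Illustrative model of classification levels and security markings.
from enum import IntEnum

class Classification(IntEnum):
    """Hypothetical label set in increasing sensitivity (step 2: assign a level)."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    SECRET = 3

# Step order from the drag-and-drop answer above.
STEPS = [
    "Document the information assets",
    "Assign a classification level",
    "Apply the appropriate security markings",
    "Conduct periodic classification reviews",
    "Declassify information when appropriate",
]

def marking(level: Classification) -> str:
    """Step 3: produce a banner-style marking for a labeled asset."""
    return f"// {level.name} //"

def declassify(level: Classification) -> Classification:
    """Step 5: downgrade one level, never below PUBLIC."""
    return Classification(max(level - 1, Classification.PUBLIC))

print(marking(Classification.CONFIDENTIAL))  # // CONFIDENTIAL //
print(declassify(Classification.SECRET).name)  # CONFIDENTIAL
```

Using an ordered enum makes comparisons such as "is this asset more sensitive than internal?" a simple `>` check.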
Explanation:
Information classification is the process of categorizing information assets by sensitivity, criticality, or value and applying security controls appropriate to each category. It supports the confidentiality, integrity, and availability of the assets as well as the organization's security, compliance, and business objectives. The steps are performed in the following order:
Document the information assets: identify, inventory, and describe the data, documents, records, and media the organization owns, uses, or manages. This establishes the scope, ownership, and characteristics of the assets before classification begins.
Assign a classification level: label each asset according to its sensitivity, criticality, or value and the impact of its unauthorized access, disclosure, modification, or destruction. The level indicates the degree of protection and handling the asset requires. Labels vary with organizational policy, standards, and regulations; common examples are public, internal, confidential, and secret.
Apply the appropriate security markings: add visual, physical, or electronic indicators, such as banners, headers, footers, stamps, stickers, tags, or metadata, that communicate the asset's classification level, remind users of the required handling, and reduce the risk of mishandling.
Conduct periodic classification reviews: reassess the assets so that their labels and markings remain accurate, consistent, and up to date. Reviews are triggered by changes that affect sensitivity, criticality, or value, such as new business needs, legal requirements, security incidents, or data lifecycle events.
Declassify information when appropriate: downgrade or remove protection when an asset's sensitivity, criticality, or value has decreased, for example upon expiration, disposal, release, or transfer. Declassification keeps each asset's protection proportionate to its current value and frees security resources for the information that still needs them.

Topic 11, Exam Set C

Question: 79
What does secure authentication with logging provide?
A. Data integrity
B. Access accountability
C. Encryption logging format
D. Segregation of duties
Answer: B
Explanation:
Secure authentication with logging provides access accountability: because each action is tied to an authenticated identity and recorded, user actions can be traced and audited. Logs help identify unauthorized or malicious activity, enforce policy, and support investigations.

Question: 80
Which of the following provides the minimum set of privileges required to perform a job function and restricts the user to a domain with the required privileges?
A. Access based on rules
B. Access based on user's role
C. Access determined by the system
D. Access based on data sensitivity
Answer: B
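The access accountability described for Question 79 amounts to binding an authenticated identity to every logged event. A minimal sketch using Python's standard `logging` module follows; the log format and field names are illustrative assumptions, not a prescribed audit format.

```python
# Minimal audit-trail sketch: each event records WHO (authenticated user)
# did WHAT (action), which is the basis of access accountability.
import logging

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
formatter = logging.Formatter("user=%(user)s event=%(message)s")

def log_access(user: str, action: str) -> str:
    """Emit an audit record tying the action to the user; return the formatted line."""
    record = audit.makeRecord(audit.name, logging.INFO, __file__, 0,
                              action, (), None, extra={"user": user})
    audit.handle(record)             # deliver to any configured handlers
    return formatter.format(record)  # formatted audit line, returned for illustration

line = log_access("jsmith", "read /payroll/2024.xlsx")
print(line)  # user=jsmith event=read /payroll/2024.xlsx
```

In practice the records would be written to protected, append-only storage so the trail itself cannot be tampered with, which connects to Question 87 below on read-only transaction logs.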
Explanation:
Access based on user's role, also known as role-based access control (RBAC), provides the minimum set of privileges required to perform a job function and restricts the user to a domain with the required privileges. RBAC enforces the principle of least privilege by assigning permissions to roles rather than to individual users; users are then assigned roles based on their responsibilities and qualifications.

Question: 81
Discretionary Access Control (DAC) restricts access according to
A. data classification labeling.
B. page views within an application.
C. authorizations granted to the user.
D. management accreditation.
Answer: C
Explanation:
DAC restricts access according to authorizations granted to the user. Under DAC, the owner or creator of a resource decides who can access it and at what level. DAC implementations typically use access control lists (ACLs) to assign permissions to resources, and users can pass or change their permissions to other users.

Question: 82
HOTSPOT
In the network design below, where is the MOST secure Local Area Network (LAN) segment to deploy a Wireless Access Point (WAP) that provides contractors access to the Internet and authorized enterprise services?
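The owner-managed ACL behavior that characterizes DAC can be sketched as follows. The data structures, resource names, and function names are illustrative assumptions.

```python
# Sketch of Discretionary Access Control: the resource OWNER decides who gets
# which permissions, recorded in a per-resource ACL.
acl = {
    "report.docx": {
        "owner": "alice",
        "grants": {"alice": {"read", "write"}, "bob": {"read"}},
    },
}

def check(user: str, resource: str, perm: str) -> bool:
    """Permission check against the resource's ACL."""
    return perm in acl[resource]["grants"].get(user, set())

def grant(requester: str, user: str, resource: str, perm: str) -> None:
    """Only the owner may extend the ACL - the 'discretionary' part of DAC."""
    if acl[resource]["owner"] != requester:
        raise PermissionError("only the owner may grant access")
    acl[resource]["grants"].setdefault(user, set()).add(perm)

print(check("bob", "report.docx", "write"))  # False - not yet granted
grant("alice", "bob", "report.docx", "write")  # alice, the owner, grants it
print(check("bob", "report.docx", "write"))  # True
```

The contrast with RBAC (Question 80) is visible in the data: here permissions attach to individual users at the owner's discretion, whereas RBAC attaches them to roles under central administration.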
Answer: LAN 4
Explanation:
The most secure LAN segment in which to deploy the WAP is LAN 4. A WAP lets wireless devices join a wired network over Wi-Fi, Bluetooth, or other wireless standards; it adds convenience and mobility but also introduces risks such as unauthorized access, eavesdropping, interference, and rogue access points. It should therefore be deployed in a segment that isolates wireless traffic from the rest of the network and applies appropriate security controls and policies. LAN 4 sits behind the firewall that separates it from the other LAN segments and from the Internet. That firewall provides network segmentation, filtering, and monitoring for the WAP and its wireless clients, and can enforce access rules and policies for the contractors, such as allowing them to reach the Internet and specific authorized enterprise services while blocking the other LAN segments that may contain sensitive or critical data and systems.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 317; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 437.

Question: 83
DRAG DROP
Match the objectives to the assessment questions in the governance domain of Software Assurance Maturity Model (SAMM).
Answer:
Secure Architecture -> Do you advertise shared security services with guidance for project teams?
Education & Guidance -> Are most people tested to ensure a baseline skill-set for secure development practices?
Strategy & Metrics -> Does most of the organization know about what's required based on risk ratings?
Vulnerability Management -> Are most project teams aware of their security point(s) of contact and response team(s)?
Explanation:
These matches follow the definitions and objectives of the SAMM practices listed. SAMM is a framework that helps organizations assess and improve their software security posture; its governance domain covers organizational aspects of software security such as policies, metrics, and roles.
Secure Architecture: provides a consistent, secure design for software projects along with reusable security services and components. The assessment question measures whether these shared services are advertised with guidance for project teams.
Education & Guidance: raises the awareness and skills of staff involved in software development and supplies the necessary tools and resources. The question measures whether staff are tested and verified for a baseline of secure development knowledge.
Strategy & Metrics: defines and communicates the software security strategy, goals, and priorities, and measures the progress and effectiveness of software security activities. The question measures how well the organization understands its risk-based requirements.
Vulnerability Management: identifies and remediates vulnerabilities in software products and mitigates the impact of potential incidents. The question measures whether project teams know their security points of contact and response teams.
Reference: SAMM Governance Domain; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 452.

Question: 84
What is the PRIMARY difference between security policies and security procedures?
A. Policies are used to enforce violations, and procedures create penalties
B. Policies point to guidelines, and procedures are more contractual in nature
C. Policies are included in awareness training, and procedures give guidance
D. Policies are generic in nature, and procedures contain operational details
Answer: D
Explanation:
Policies are generic in nature, and procedures contain operational details. Security policies are high-level statements that define the goals, objectives, and requirements of security for an organization; security procedures are the low-level steps that specify how to implement, enforce, and comply with those policies. Option A reverses the roles: policies define penalties (the sanctions imposed for violations), while procedures carry out enforcement. Option B misstates their nature: policies are not mere guidelines but mandatory rules binding the organization and its stakeholders, and procedures are operational, not contractual. Option C implies different audiences and functions, but both policies and procedures are covered in awareness training and both give guidance.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 17; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 13.

Question: 85
Which of the following is an advantage of on premise Credential Management Systems?
A. Improved credential interoperability
B. Control over system configuration
C. Lower infrastructure capital costs
D. Reduced administrative overhead
Answer: B
Explanation:
On-premise credential management systems store and manage credentials such as usernames, passwords, tokens, and certificates within the organization's own network and infrastructure. Their advantage is control: the organization can tailor the system configuration to its specific needs and enforce its own policies and standards for credential management. The other options describe advantages of cloud-based credential management systems, which run on a third-party provider's infrastructure: they offer broader interoperability and scalability across credential types and devices (A); they avoid capital costs for purchasing, installing, and maintaining hardware and software, replacing them with subscription or usage fees (C); and they shift administrative tasks such as backup, recovery, patching, and updating to the provider (D).
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 346; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 307.

Question: 86
In order for a security policy to be effective within an organization, it MUST include
A. strong statements that clearly define the problem.
B. a list of all standards that apply to the policy.
C. owner information and date of last revision.
D. disciplinary measures for non compliance.
Answer: D
Explanation:
To be effective, a security policy must include disciplinary measures for non-compliance: the actions and consequences the organization will impose on users who violate the policy. Such measures deter behavior that could jeopardize the organization's security and reinforce each user's accountability for complying with the policy.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 18; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 26.

Question: 87
To protect auditable information, which of the following MUST be configured to only allow read access?
A. Logging configurations
B. Transaction log files
C. User account configurations
D. Access control lists (ACL)
Answer: B
Explanation:
Transaction log files record the details and history of transactions and activities within a system or database, such as the date, time, user, action, and outcome. They provide the evidence needed for auditing and support recovery of the system or database after a failure or corruption. Configuring them as read-only means authorized users can view them but no one can modify, delete, or overwrite them, which prevents tampering and preserves the integrity, accuracy, and reliability of the auditable information. The other options are not themselves the auditable information: logging configurations (A) are the settings that control how logging is performed (frequency, format, location, retention); user account configurations (C) define and manage user identities, roles, and permissions; and access control lists (D) store the access rules for a system or resource. Each affects security, but none is the audit record that must be protected as read-only.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 197; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 354.

Question: 88
A security professional is asked to provide a solution that restricts a bank teller to only perform a savings deposit transaction but allows a supervisor to perform corrections after the transaction. Which of the following is the MOST effective solution?
A. Access is based on rules.
B. Access is determined by the system.
C. Access is based on user's role.
D. Access is based on data sensitivity.
Answer: C
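One common way to enforce read-only transaction logs, as Question 87 requires, is to strip the write bits from rotated log files on a POSIX filesystem. The sketch below is illustrative; the path, log line, and chosen mode are assumptions, and a production system would pair this with filesystem ACLs or write-once (WORM) storage.

```python
# Illustrative: make a rotated transaction log read-only so auditable records
# cannot be modified or deleted through normal file access.
import os
import stat
import tempfile

# Create a sample transaction log in a temporary directory.
path = os.path.join(tempfile.mkdtemp(), "transactions.log")
with open(path, "w") as f:
    f.write("2024-05-01T12:00:00 user=jsmith action=deposit amount=100\n")

# Drop all write/execute permission: owner and group may read, nobody may write.
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP)  # mode r--r-----

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o440 on POSIX systems
```

Note that the superuser can still change the mode back, which is why high-assurance audit trails also rely on append-only attributes, remote log collection, or WORM media rather than file permissions alone.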
Explanation:
The most effective solution is access based on user's role, that is, role-based access control (RBAC). RBAC grants or denies access to resources and transactions according to the user's role or function within the organization, such as teller or supervisor. It fits this scenario directly: the teller role is limited to the savings deposit transaction, while the supervisor role additionally carries the permission to perform corrections after the transaction. RBAC is also easier to administer, because access policies are defined once per role and applied to every user assigned that role, rather than configured separately for each individual.
A. Access based on rules grants or denies access according to conditions defined by the system or its administrator, such as time, location, or frequency. It does not take the user's organizational role into account, and defining suitable rules for each type of transaction (savings deposit, checking withdrawal, loan application, and so on) would be complex.
B. Access determined by the system relies on a decision made by the system itself, for example by an algorithm or machine-learning component. It likewise ignores organizational roles, and depending on such a mechanism for this kind of job-function restriction can be unpredictable or unreliable.
D. Access based on data sensitivity grants or denies access according to the classification of the data, such as public, confidential, or secret, and the user's clearance to handle it. It restricts who may handle data of a given classification, but it does not distinguish between job functions such as teller and supervisor, so it cannot express the required separation between performing a deposit and correcting one.
  • 104. Explanation: Questions & Answers PDF DRAG DROP Match the name of access control model with its associated restriction. Drag each access control model to its appropriate restriction access on the right. The correct matches are as follows: Mandatory Access Control -> End user cannot set controls Discretionary Access Control (DAC) -> Subject has total control over objects Role Based Access Control (RBAC) -> Dynamically assigns permissions to particular duties based on job function Rule based access control -> Dynamically assigns roles to subjects based on criteria assigned by a custodian Explanation: The image shows a table with two columns. The left column lists four different types of Access Control Models, and the right column lists their associated restrictions. The correct matches are based on the definitions and characteristics of each Access Control Model, as explained below: Mandatory Access Control (MAC) is a type of access control that grants or denies access to an object system or the service, such as the transactions or the data that are related or relevant to the different or the various types or categories of the accounts or the customers within the system or the service, such as the savings account, the checking account, or the loan account, or the personal account, the business account, or the government account. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 147; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 212 Page 104 Question: 89 Answer: www.certifiedumps.com
  • 105. Explanation: Questions & Answers PDF In the Software Development Life Cycle (SDLC), maintaining accurate hardware and software inventories is a critical part of A. systems integration. B. risk management. C. quality assurance. D. change management. based on the security labels of the subject and the object, and the security policy enforced by the system. The end user cannot set or change the security labels or the policy, as they are determined by a central authority. Discretionary Access Control (DAC) is a type of access control that grants or denies access to an object based on the identity and permissions of the subject, and the discretion of the owner of the object. The subject has total control over the objects that they own, and can grant or revoke access rights to other subjects as they wish. Role Based Access Control (RBAC) is a type of access control that grants or denies access to an object based on the role of the subject, and the permissions assigned to the role. The role is dynamically assigned to the subject based on their job function, and the permissions are determined by the business rules and policies of the organization. Rule based access control is a type of access control that grants or denies access to an object based on the rules or criteria that are defined by a custodian or an administrator. The rules or criteria are dynamically applied to the subject based on their attributes, such as location, time, or device, and the access rights are granted or revoked accordingly. According to the CISSP CBK Official Study Guide1, the Software Development Life Cycle (SDLC) phase that requires maintaining accurate hardware and software inventories is change management. SDLC is a structured process that is used to design, develop, and test good-quality software. SDLC consists of several phases or stages that cover the entire life cycle of the software, from the initial idea or concept to the final deployment or maintenance of the software. 
SDLC aims to deliver high-quality, maintainable software that meets the user's requirements and fits within the budget and schedule of the project. Change management is the process of controlling the changes made to the software or system during the SDLC, using defined policies, procedures, and tools. It protects the security, integrity, quality, and performance of the system by preventing or minimizing the risks introduced by modifications, such as errors, defects, or vulnerabilities.
Maintaining accurate hardware and software inventories is a critical part of change management because the inventory is the reliable baseline used to identify and track every hardware and software component (its name, description, version, and status) and every change made to it during the SDLC. Accurate inventories also enable the monitoring, auditing, and improvement of those components.
Systems integration is not the SDLC activity that requires maintaining accurate hardware and software inventories, although it may benefit from change management.
Systems integration is the process of combining the hardware and software components of a system using appropriate interfaces, protocols, and standards. It ensures the functionality, interoperability, and compatibility of the components by verifying that they work together, and with other systems or networks, as the user or client intends. Change management supports systems integration by controlling the changes made to those components and by maintaining accurate inventories of them, but the main objective of systems integration is to combine components, not to maintain inventories.
Risk management is the process of identifying, analyzing, evaluating, and treating the risks that may affect the software or system, such as threats, attacks, or incidents. It, too, may benefit from change management, but its main objective is to treat risk, not to maintain inventories, so it is not the SDLC activity that requires accurate hardware and software inventories.
Quality assurance is the process of verifying the quality and performance of the software or system against defined standards, criteria, and metrics, preventing or detecting errors, defects, and vulnerabilities through testing, validation, and verification. It may likewise benefit from change management, but its main objective is to verify quality, not to maintain inventories.

Topic 12, New Questions B

Question: 91

Which of the following is considered a secure coding practice?
A. Use concurrent access for shared variables and resources
B. Use checksums to verify the integrity of libraries
C. Use new code for common tasks
D. Use dynamic execution functions to pass user supplied data

Answer: B

Explanation:
A secure coding practice is a technique or guideline that prevents or mitigates common software vulnerabilities and ensures the quality, reliability, and security of an application. One example is using checksums to verify the integrity of libraries. A checksum is a value derived by applying a mathematical function or algorithm to a data set, such as a file or a message, and it can be used to detect changes or errors in the data, including corruption, modification, or tampering. Libraries are collections of precompiled code that applications reuse; they can be static or dynamic, depending on whether they are linked at compile time or run time, and they are vulnerable to attacks such as code injection, code substitution, or code reuse, in which an attacker alters or replaces the library code with malicious code. By verifying library checksums, a developer can confirm that the libraries are authentic and have not been compromised or corrupted, and can identify and resolve errors or inconsistencies in them. Other secure coding practices include strong data types, input validation, output encoding, error handling, encryption, and code review.
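The checksum practice described above can be sketched in C. A simple 32-bit rolling checksum stands in for a real cryptographic hash such as SHA-256, which production code should use; the function names and the additive scheme are illustrative, not from any particular library:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy 32-bit checksum: mixes every byte into a running value, so any
 * change to the bytes changes the result. A real build would use a
 * cryptographic hash (e.g. SHA-256) instead of this additive scheme. */
uint32_t checksum(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31u + data[i];
    return sum;
}

/* Verify a library image against the value recorded at release time. */
int library_is_intact(const uint8_t *image, size_t len, uint32_t expected) {
    return checksum(image, len) == expected;
}
```

In practice the expected value would be computed and published when the library is released, then checked before the library is linked or loaded.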
Question: 92

As part of the security assessment plan, the security professional has been asked to use a negative testing strategy on a new website. Which of the following actions would be performed?
A. Use a web scanner to scan for vulnerabilities within the website.
B. Perform a code review to ensure that the database references are properly addressed.
C. Establish a secure connection to the web server to validate that only the approved ports are open.
D. Enter only numbers in the web form and verify that the website prompts the user to enter a valid input.

Answer: D

Explanation:
A negative testing strategy is a type of software testing that verifies how a system handles invalid or unexpected inputs, errors, or conditions, and it helps identify bugs, vulnerabilities, or failures that could compromise the system's functionality, security, or usability. One example is entering only numbers in a web form that expects text, such as a name or an email address, and verifying that the website prompts the user to enter valid input. This confirms that the website has proper input validation and error handling mechanisms and does not accept or process malformed or malicious data. A web scanner, a code review, and a secure connection check are not examples of negative testing, because none of them feeds invalid or unexpected input to the system.
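The negative test in option D can be sketched as a check against a hypothetical name-field validator. The validator and its rule (reject input containing no alphabetic characters) are assumptions for illustration only:

```c
#include <assert.h>
#include <ctype.h>

/* Hypothetical name-field validator: accepts input only if it contains
 * at least one alphabetic character, so a digits-only entry (the
 * negative test case) is rejected. */
int valid_name(const char *s) {
    int has_alpha = 0;
    if (*s == '\0')
        return 0;                          /* empty input is invalid */
    for (; *s; s++)
        if (isalpha((unsigned char)*s))
            has_alpha = 1;
    return has_alpha;
}
```

The negative test then asserts that a numbers-only entry is rejected, mirroring what the tester verifies in the browser.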
Question: 94

Which of the following would MINIMIZE the ability of an attacker to exploit a buffer overflow?
A. Memory review
B. Code review
C. Message division
D. Buffer division

Answer: B

Explanation:
Code review is the technique that would minimize an attacker's ability to exploit a buffer overflow. A buffer overflow occurs when a program writes more data to a buffer than the buffer can hold, overwriting adjacent memory locations such as the return address or the stack pointer. An attacker can exploit this by injecting malicious code or data into the buffer and altering the program's execution flow to run it. Code review minimizes this risk by examining the source code to find and fix the errors and weaknesses that lead to buffer overflow vulnerabilities. In particular, it can detect unsafe functions that perform no boundary checking, such as gets, strcpy, or sprintf, and replace them with safer alternatives that limit how much data can be written to the buffer, such as fgets, strncpy, or snprintf. Code review also enforces secure coding practices such as input validation, output encoding, error handling, and memory management, which reduce the likelihood or impact of buffer overflow vulnerabilities.
Memory review, message division, and buffer division would not minimize buffer overflow exploitation, although they are related concepts. Memory review is the process of analyzing a program's memory layout or contents (the stack, the heap, or the registers) to understand or debug its behavior; it may help investigate a buffer overflow after the fact, but it does not prevent one. Message division is the concept of splitting a message into smaller or fixed-size segments, as in cryptography or networking; it may improve the security or efficiency of transmission, but it does not prevent buffer overflow. Buffer division is the concept of dividing a buffer into smaller, separate buffers, as in buffering or caching; it may optimize memory usage, but it does not prevent buffer overflow.
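The unsafe-to-safe substitution that a code review looks for can be sketched in C: replacing an unbounded strcpy with snprintf bounds the write to the destination buffer and guarantees NUL termination. The helper name is hypothetical:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Bounded copy of untrusted input into a fixed-size buffer.
 * snprintf never writes more than dst_size bytes and always
 * NUL-terminates, unlike strcpy/sprintf, which would overflow
 * the buffer on long input. */
void store_username(char *dst, size_t dst_size, const char *untrusted) {
    snprintf(dst, dst_size, "%s", untrusted);  /* truncates instead of overflowing */
}
```

Over-long input is truncated to fit the buffer rather than spilling into adjacent memory, which is exactly the failure mode a buffer overflow exploit relies on.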
Which of the following is the GREATEST benefit of implementing a Role Based Access Control (RBAC) system?
A. Integration using Lightweight Directory Access Protocol (LDAP)
B. Form-based user registration process
C. Integration with the organizations Human Resources (HR) system
D. A considerably simpler provisioning process

Answer: D

Explanation:
The greatest benefit of implementing a Role Based Access Control (RBAC) system is a considerably simpler provisioning process. Provisioning is the process of creating, modifying, or deleting user accounts and access rights on a system or network, and it can be complex and tedious in large or dynamic organizations with many users, systems, and resources. RBAC assigns permissions to users based on their roles or functions within the organization rather than on their individual identities or attributes, which reduces administrative overhead and keeps user accounts and access rights consistent and accurate. RBAC also supports security goals such as enforcing the principle of least privilege, facilitating separation of duties, and supporting audit and compliance activities.
The other options may be related or useful features, but none is a benefit of RBAC itself, and each can be used with other access control models such as discretionary access control (DAC) or mandatory access control (MAC). Integration using LDAP uses a standard protocol to exchange information with a directory service, such as Active Directory or OpenLDAP; it centralizes and standardizes accounts and access rights and supports authentication, authorization, interoperability, and scalability, but it is not a feature or requirement of RBAC. A form-based user registration process uses a web form to collect and validate user information such as name, email, password, or role; it simplifies and automates account creation and supports self-service and delegation, but it is likewise independent of RBAC. Integration with the organization's HR system synchronizes user accounts and access rights with HR data such as employee records, job titles, or organizational units; it streamlines and automates provisioning and supports identity lifecycle management, but it, too, is not a feature or requirement of RBAC.
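Why RBAC simplifies provisioning can be sketched in a few lines of C: permissions live in a single role-to-permission table, so provisioning a user reduces to assigning a role. The roles, permissions, and table contents below are illustrative (they echo the earlier bank-teller scenario), not from any standard:

```c
#include <assert.h>

/* Illustrative roles and permissions. Provisioning a new teller is just
 * "role = TELLER"; no per-user access rights need to be configured. */
enum role { TELLER, SUPERVISOR };
enum permission { SAVINGS_DEPOSIT = 1 << 0, CORRECTION = 1 << 1 };

/* Role-to-permission table: the single place access rights are defined. */
static const unsigned role_perms[] = {
    [TELLER]     = SAVINGS_DEPOSIT,
    [SUPERVISOR] = SAVINGS_DEPOSIT | CORRECTION,
};

int allowed(enum role r, enum permission p) {
    return (role_perms[r] & p) != 0;
}
```

Changing what all tellers may do is one edit to the table, rather than one edit per user account, which is the provisioning simplification the answer refers to.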
Question: 95

Which of the following combinations would MOST negatively affect availability?
A. Denial of Service (DoS) attacks and outdated hardware
B. Unauthorized transactions and outdated hardware
C. Fire and accidental changes to data
D. Unauthorized transactions and denial of service attacks

Answer: A

Explanation:
The combination that would most negatively affect availability is denial of service (DoS) attacks and outdated hardware. Availability is the property of a system or network of being accessible and usable by authorized users whenever and wherever they need it; it can be measured by metrics such as uptime, downtime, response time, and reliability, and it is affected by hardware, software, network, human, and environmental factors.
DoS attacks disrupt or degrade availability by overwhelming a system's resources (bandwidth, memory, processing power) with a large number or high frequency of requests or packets, preventing or delaying legitimate users and causing errors, failures, or crashes. Outdated hardware is old, obsolete, or unsupported and no longer meets current performance, functionality, or security requirements; it reduces availability through malfunctions, breakdowns, and incompatibilities, and it is difficult or costly to maintain, repair, or replace.
Together, the two factors have a cumulative effect: DoS attacks can exploit the weaknesses of outdated hardware to cause greater damage and disruption, while outdated hardware prolongs the system's susceptibility to DoS attacks and hinders its resilience and recovery.
The other combinations primarily affect different security properties. Unauthorized transactions and outdated hardware mainly threaten confidentiality and integrity: unauthorized transactions access, modify, or transfer data without the owner's consent (theft, fraud, sabotage), exposing, altering, or destroying it, while outdated hardware is vulnerable to attacks and errors that compromise the data. Fire and accidental changes to data mainly threaten the availability and integrity of the data itself: fire can consume or melt the physical media that store the data (hard disks, tapes, CDs) or corrupt the data on them, and accidental changes (typos, misconfigurations, overwrites) can make data inaccessible, unusable, inaccurate, or unreliable. Unauthorized transactions and denial of service attacks threaten confidentiality as well as availability, since unauthorized transactions expose data to unauthorized parties and consume or divert system resources, while DoS attacks block legitimate access; as a combination, however, they are less damaging to availability than DoS attacks against outdated hardware.

Question: 96

Which of the following is a characteristic of an internal audit?
A. An internal audit is typically shorter in duration than an external audit.
B. The internal audit schedule is published to the organization well in advance.
C. The internal auditor reports to the Information Technology (IT) department
D. Management is responsible for reading and acting upon the internal audit results

Answer: D

Explanation:
A characteristic of an internal audit is that management is responsible for reading and acting upon the internal audit results. An internal audit is an independent and objective evaluation of an organization's internal controls, processes, or activities, performed by auditors who are part of the organization, such as the internal audit department or the audit committee. Internal audits enhance the accuracy and reliability of operations, help prevent or detect fraud and errors, and support audit and compliance activities. An internal audit involves several steps and roles:
Planning: the internal auditor or audit team prepares and designs the audit, defining its objectives, scope, criteria, and methodology and identifying and analyzing its risks and stakeholders.
Execution: the auditor or audit team collects and evaluates the evidence related to the audit, using tools and techniques such as interviews, observations, tests, or surveys.
Reporting: the auditor or audit team prepares and delivers the internal audit report, containing the findings, conclusions, and recommendations of the audit, to management or the audit committee, who are the primary recipients of the report.
Follow-up: management or the audit committee read and act upon the report, and the auditor or audit team monitor and review the actions taken in response to it.
Management is responsible for reading and acting upon the internal audit results because management is the primary recipient of the report and has the authority and accountability to implement its recommendations and to report or disclose the results to external parties such as regulators, shareholders, or customers.
The other options may be true of some internal audits, but they are not defining characteristics. An internal audit is often shorter than an external audit, since internal auditors are more familiar with, and have better access to, the organization's controls and processes than outside auditors; duration, however, varies with the audit's objectives, scope, criteria, and methodology, so it is not a defining feature. Publishing the internal audit schedule well in advance is a good practice that promotes transparency, accountability, and coordination among stakeholders such as management, the audit committee, and the audit team, but it is likewise not a defining characteristic of an internal audit.

Question: 97

Proven application security principles include which of the following?
A. Minimizing attack surface area
B. Hardening the network perimeter
C. Accepting infrastructure security controls
D. Developing independent modules

Answer: A
Answer: A

Explanation: Minimizing attack surface area is a proven application security principle. It reduces an application's exposure to potential attacks by limiting or eliminating unnecessary or unused features, functions, and services, and by restricting the application's access to and interaction with other applications, systems, and networks. Minimizing attack surface area can enhance the performance and functionality of the application, prevent or mitigate some types of attacks or vulnerabilities, and support audit and compliance activities.

Hardening the network perimeter, accepting infrastructure security controls, and developing independent modules are not proven application security principles, although they may be related or useful concepts or techniques. Hardening the network perimeter is a network security technique that protects the network from external or unauthorized attacks by strengthening the security controls at the boundary of the network, such as firewalls, routers, or gateways. It is not an application security principle, because it is not specific to the application layer and does not address the internal or inherent security of the application.

Accepting infrastructure security controls is a risk management technique that accepts the residual risk of an application after applying the security controls provided by the underlying infrastructure, such as the hardware, software, network, or cloud. It can reduce the cost and complexity of the security implementation and leverage the expertise and resources of the infrastructure providers, but it is not an application security principle, because it is not a proactive or preventive measure and it may increase the application's dependency on, and vulnerability through, the infrastructure.

Developing independent modules is a software engineering technique that designs the application as a composition of discrete components, each with a specific purpose and a well-defined interface. It can enhance the usability and maintainability of the application, isolate some types of errors, and support testing and verification, but it is not an application security principle, because it is not a direct security measure and may not prevent attacks that affect the application as a whole or the interaction between the modules.
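As a rough illustration of minimizing attack surface area, the sketch below removes unnecessary endpoints so that only reviewed, required functionality remains exposed. The endpoint names and the allowlist are invented for this example:

```python
# Hypothetical sketch of attack surface reduction: an application starts
# with many registered endpoints, and only a reviewed allowlist of
# necessary ones is kept, shrinking what an attacker can reach.
ALL_ENDPOINTS = {
    "/login", "/logout", "/profile",
    "/debug/heap-dump",   # diagnostic endpoint, not needed in production
    "/admin/test-hook",   # leftover from development
    "/legacy/upload",     # unused feature
}

# Allowlist produced by a design review: the minimum needed functionality.
REQUIRED_ENDPOINTS = {"/login", "/logout", "/profile"}

def minimize_attack_surface(endpoints, allowlist):
    """Keep only the endpoints that appear on the reviewed allowlist."""
    return {e for e in endpoints if e in allowlist}

exposed = minimize_attack_surface(ALL_ENDPOINTS, REQUIRED_ENDPOINTS)
removed = ALL_ENDPOINTS - exposed
print(sorted(exposed))  # only the three required endpoints remain
```

The same idea applies to open ports, installed packages, and enabled services: every item not on a justified allowlist is a candidate for removal.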
Question: 98

When developing a business case for updating a security program, the security program owner MUST do which of the following?
A. Identify relevant metrics
B. Prepare performance test reports
C. Obtain resources for the security program
D. Interview executive management

Answer: A

Explanation: When developing a business case for updating a security program, the security program owner must identify relevant metrics that can measure and evaluate the performance and effectiveness of the security program, and that can justify and support the investment in it and the return on it. A business case is a document or presentation that provides the rationale for initiating or continuing a project or program, such as a security program, by analyzing and comparing its costs and benefits, risks and opportunities, and alternatives and recommendations. A business case can enhance the visibility and accountability of the security program, help prevent or detect unauthorized or improper activities or changes, and support audit and compliance activities. A business case typically involves the following elements:

Problem statement, which defines the problem or issue the program aims to solve, such as a security gap, threat, or requirement.

Solution proposal, which explains the solution or approach the program adopts, such as a security tool, process, or standard.

Cost-benefit analysis, which estimates the costs and benefits of the program in quantitative and qualitative terms, such as financial, operational, or strategic costs and benefits, and compares them to determine the feasibility and viability of the program.

Risk assessment, which identifies and analyzes the risks and uncertainties that may affect the program, both positive and negative, such as threats, vulnerabilities, or opportunities, estimates their likelihood and impact to determine their severity and priority, and develops risk mitigation or risk management strategies and actions.
Alternative analysis, which identifies and evaluates solutions other than the proposed one, such as existing or available approaches or the do-nothing (status quo) option, and compares their advantages and disadvantages, strengths and weaknesses.

Recommendation, which endorses the best or preferred solution based on the results of the previous elements, such as the cost-benefit analysis, risk assessment, and alternative analysis, and justifies it with evidence that validates the choice.

Identifying relevant metrics is a key element of developing a business case for updating a security program, because metrics measure and evaluate the performance and effectiveness of the program and justify and support the investment and its return. Metrics are measures or indicators that quantify or qualify the attributes or outcomes of a process or activity and provide the information and feedback needed for decision making and improvement. Identifying relevant metrics involves tasks such as:

Defining and documenting the objectives, scope, criteria, and methodology of the metrics, and ensuring they are consistent and aligned with the business case and the security program.

Selecting and collecting the related data or evidence, using tools and techniques such as surveys, interviews, tests, or audits.

Analyzing and interpreting the data or evidence, using statistical, mathematical, or graphical methods and models.

Reporting and communicating the results and findings, using formats and channels such as reports, dashboards, or presentations.

Preparing performance test reports, obtaining resources for the security program, and interviewing executive management are not tasks the security program owner must perform when developing the business case, although they may be related or possible tasks. Preparing performance test reports is a technique used by the security program owner, team, or auditor to verify and validate the functionality and quality of the security program against its standards and criteria, and to detect and report errors, bugs, or vulnerabilities. Obtaining resources for the security program is a task or a technique that can
be used by the security program owner, sponsor, or manager to acquire and allocate the necessary and sufficient resources for the program, such as financial, human, or technical resources, and to manage and optimize their use and distribution. Interviewing executive management is a technique used by the security program owner, team, or auditor to collect and analyze information and feedback about the security program from executive management, who are the primary users and recipients of the program and who have the authority and accountability to implement and execute it.

Question: 99

Transport Layer Security (TLS) provides which of the following capabilities for a remote access server?
A. Transport layer handshake compression
B. Application layer negotiation
C. Peer identity authentication
D. Digital certificate revocation

Answer: C

Explanation: Transport Layer Security (TLS) provides peer identity authentication for a remote access server. TLS is a cryptographic protocol that provides secure communication over a network. It operates at the transport layer of the OSI model, between the application layer and the network layer. TLS uses asymmetric encryption to establish a secure session key between the client and the server, and then uses symmetric encryption to protect the data exchanged during the session. TLS also uses digital certificates to verify the identities of the client and the server and to prevent impersonation or spoofing attacks. This process, known as peer identity authentication, ensures that the client and the server are communicating with the intended parties and not with an attacker. TLS also provides other capabilities for a remote access server, such as data integrity, confidentiality, and forward secrecy.

Reference: Enable TLS 1.2 on servers - Configuration Manager; How to Secure Remote Desktop Connection with TLS 1.2 - Microsoft Q&A; Enable remote access from intranet with TLS/SSL certificate (Advanced …

Question: 100

A chemical plant wants to upgrade the Industrial Control System (ICS) to transmit data using Ethernet instead of RS422. The project manager wants to simplify administration and maintenance by utilizing the office network infrastructure and staff to implement this upgrade.
Which of the following is the GREATEST impact on security for the network?
A. The network administrators have no knowledge of ICS
B. The ICS is now accessible from the office network
C. The ICS does not support the office password policy
D. RS422 is more reliable than Ethernet

Answer: B

Explanation: The greatest impact on security for the network is that the ICS is now accessible from the office network. The ICS is therefore exposed to more potential threats and vulnerabilities from the internet and the office network, such as malware, unauthorized access, data leakage, or denial-of-service attacks. The ICS may also have different security requirements and standards than the office network, such as availability, reliability, and safety. Connecting the ICS to the office network thus increases the risk of compromising the confidentiality, integrity, and availability of the ICS and the critical infrastructure it controls. The other options are not as significant as the increased attack surface and complexity of the network.

Reference: Guide to Industrial Control Systems (ICS) Security | NIST, page 2-1; Industrial Control Systems | Cybersecurity and Infrastructure Security Agency, page 1.

Question: 101

What does a Synchronous (SYN) flood attack do?
A. Forces Transmission Control Protocol/Internet Protocol (TCP/IP) connections into a reset state
B. Establishes many new Transmission Control Protocol/Internet Protocol (TCP/IP) connections
C. Empties the queue of pending Transmission Control Protocol/Internet Protocol (TCP/IP) requests
D. Exceeds the limits for new Transmission Control Protocol/Internet Protocol (TCP/IP) connections

Answer: D

Explanation: A SYN flood attack exceeds the limits for new TCP/IP connections. A SYN flood is a type of denial-of-service attack that sends a large number of SYN packets to a server without completing the TCP three-way handshake. The server allocates resources for each SYN packet and waits for the final ACK packet, which never arrives. This consumes the server's memory and processing power and prevents it from accepting new legitimate connections. The other options are not accurate descriptions of what a SYN flood attack does.

Reference: SYN flood - Wikipedia; SYN flood DDoS attack | Cloudflare.
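The half-open connection behavior described above can be modeled with a small conceptual simulation. This is not a real TCP stack; the backlog size and client names are assumptions chosen for illustration:

```python
from collections import OrderedDict

# Conceptual model of a server's SYN backlog: each half-open connection
# occupies a slot until the final ACK arrives, and new SYNs are rejected
# once the backlog is full.
class SynBacklog:
    def __init__(self, max_half_open: int):
        self.max_half_open = max_half_open
        self.half_open = OrderedDict()  # client -> state awaiting final ACK

    def receive_syn(self, client: str) -> bool:
        """Return True if the SYN is queued, False if the backlog is full."""
        if len(self.half_open) >= self.max_half_open:
            return False  # limit for new connections exceeded
        self.half_open[client] = "SYN-RECEIVED"
        return True

    def receive_ack(self, client: str) -> bool:
        """The final ACK completes the handshake and frees the slot."""
        return self.half_open.pop(client, None) is not None

backlog = SynBacklog(max_half_open=128)

# The attacker sends many SYNs from spoofed sources and never ACKs.
for i in range(200):
    backlog.receive_syn(f"spoofed-{i}")

# A legitimate client now cannot even begin a handshake.
legit_accepted = backlog.receive_syn("legitimate-client")
print(legit_accepted)  # False: the half-open queue is exhausted
```

Real mitigations such as SYN cookies avoid allocating backlog state until the handshake completes, which is exactly the resource this model shows being exhausted.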
Question: 102

Access to which of the following is required to validate web session management?
A. Log timestamp
B. Live session traffic
C. Session state variables
D. Test scripts

Answer: C

Explanation: Access to session state variables is required to validate web session management. Web session management is the process of maintaining the state and information of a user across multiple requests and interactions with a web application. It relies on session state variables, which are data elements that store the user's preferences, settings, authentication status, and other relevant information for the duration of the session. Session state variables can be stored on the client side (such as cookies or local storage) or on the server side (such as databases or files). To validate web session management, it is necessary to access the session state variables and verify that they are properly generated, maintained, and destroyed by the web application. This helps ensure the security, functionality, and performance of the web application and the user experience. The other options are not required to validate web session management. A log timestamp records the date and time of a user's activity or event on the web application, but it does not store the user's state or information. Live session traffic is the network data exchanged between the user and the web application during the session, but it does not reflect the session state variables stored on the client or server side. Test scripts are code segments used to automate the testing of the web application's features and functions, but they do not access the session state variables directly.

Reference: Session Management - OWASP Cheat Sheet Series; Session Management: An Overview | SecureCoding.com; Session Management in HTTP - GeeksforGeeks.

Question: 103

Which of the following is the BEST metric to obtain when gaining support for an Identity and Access Management (IAM) solution?
A. Application connection successes resulting in data leakage
B. Administrative costs for restoring systems after connection failure
C. Employee system timeouts from implementing wrong limits
D. Help desk costs required to support password reset requests

Answer: D

Explanation: Identity and Access Management (IAM) is the process of managing the identities and access rights of users and devices in an organization. IAM solutions can provide various benefits, such as improving security, compliance, productivity, and user experience. However, implementing an IAM solution may require significant investment and resources, so it is important to obtain support from stakeholders and decision-makers. One of the best metrics to obtain when gaining support for an IAM solution is the help desk costs required to support password reset requests. This metric can demonstrate the following advantages of an IAM solution:

Reducing the workload and expenses of the help desk staff, who often spend a large amount of time and money handling password reset requests from users who forget or lose their passwords.

Enhancing the security and compliance of the organization, by reducing the risks of unauthorized access, identity theft, phishing, and credential compromise, which can result from weak or shared passwords, or passwords that are not changed frequently or securely.

Improving the productivity and user experience of the users, by enabling them to reset their own passwords quickly and easily, without having to contact the help desk or wait for a response. This can also reduce user downtime and frustration, and increase their satisfaction and loyalty.

Question: 104

What is the second step in the identity and access provisioning lifecycle?
A. Provisioning
B. Review
C. Approval
D. Revocation

Answer: C

Explanation: The identity and access provisioning lifecycle is the process of managing the creation, modification, and termination of user accounts and access rights in an organization. The second step in this lifecycle is approval, which means that identity and access requests must be authorized by the appropriate managers or administrators before they are implemented. Approval ensures that the principle of least privilege is followed and that only authorized users have access to the required resources.
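The session state validation described under Question 102 can be sketched in code. The sketch below is a hypothetical server-side session store (the store layout, function names, and the 30-minute lifetime are assumptions for illustration): it checks that session state variables are generated unpredictably, maintained with an expiry, and destroyed on logout.

```python
import secrets
import time

# Hypothetical server-side session store: session IDs map to state
# variables (user, authentication status, expiry time).
SESSION_LIFETIME = 1800  # seconds; an assumed 30-minute idle timeout

sessions = {}

def create_session(user: str) -> str:
    """Generate an unpredictable session ID and store its state variables."""
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = {
        "user": user,
        "authenticated": True,
        "expires_at": time.time() + SESSION_LIFETIME,
    }
    return session_id

def validate_session(session_id: str) -> bool:
    """A valid session exists, is authenticated, and has not expired."""
    state = sessions.get(session_id)
    if state is None or not state["authenticated"]:
        return False
    if time.time() >= state["expires_at"]:
        destroy_session(session_id)  # expired state must be destroyed
        return False
    return True

def destroy_session(session_id: str) -> None:
    """Logout: remove the session state entirely rather than flagging it."""
    sessions.pop(session_id, None)

sid = create_session("alice")
assert validate_session(sid)       # properly generated and maintained
destroy_session(sid)
assert not validate_session(sid)   # properly destroyed
```

A tester validating session management would inspect exactly these state variables: ID randomness, expiry enforcement, and complete removal on logout.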
Question: 105

Which of the following would BEST support effective testing of patch compatibility when patches are applied to an organization's systems?
A. Standardized configurations for devices
B. Standardized patch testing equipment
C. Automated system patching
D. Management support for patching

Answer: A

Explanation: Standardized configurations for devices reduce the complexity and variability of the systems that need to be patched, and thus facilitate the testing of patch compatibility. Standardized configurations also help ensure that patches are applied consistently and correctly across the organization. Standardized patch testing equipment, automated system patching, and management support for patching are also important factors for effective patch management, but they are not directly related to testing patch compatibility.

Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 605; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 386.

Question: 106

An international medical organization with headquarters in the United States (US) and branches in France wants to test a drug in both countries. What is the organization allowed to do with the test subjects' data?
A. Aggregate it into one database in the US
B. Process it in the US, but store the information in France
C. Share it with a third party
D. Anonymize it and process it in the US

Answer: D

Explanation: Anonymizing the test subjects' data means removing or masking any personally identifiable information (PII) that could be used to identify or trace the individuals. This protects the privacy and confidentiality of the test subjects and helps the organization comply with the data protection laws and regulations of both countries. Processing the anonymized data in the US can also help to reduce the
costs and risks of transferring the data across borders. Aggregating the data into one database in the US, processing it in the US but storing it in France, or sharing it with a third party could all pose privacy and security risks, as well as legal and ethical issues, for the organization and the test subjects.

Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 67; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 62.

Question: 107

It is MOST important to perform which of the following to minimize potential impact when implementing a new vulnerability scanning tool in a production environment?
A. Negotiate schedule with the Information Technology (IT) operation's team
B. Log vulnerability summary reports to a secured server
C. Enable scanning during off-peak hours
D. Establish access for Information Technology (IT) management

Answer: A

Explanation: It is most important to negotiate the schedule with the IT operation's team to minimize the potential impact when implementing a new vulnerability scanning tool in a production environment. A vulnerability scan can cause network congestion, performance degradation, or system instability, which can affect the availability and functionality of the production systems. It is therefore essential to coordinate with the IT operation's team to determine the best time and frequency for the scan, as well as its scope and intensity. Logging vulnerability summary reports, enabling scanning during off-peak hours, and establishing access for IT management are also good practices for vulnerability scanning, but they are not as important as negotiating the schedule with the IT operation's team.

Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 858; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 794.

Question: 108

Due to system constraints, a group of system administrators must share a high-level access set of credentials. Which of the following would be MOST appropriate to implement?
A. Increased console lockout times for failed logon attempts
B. Reduce the group in size
C. A credential check-out process for a per-use basis
D. Full logging on affected systems
Answer: C

Explanation: The most appropriate measure to implement when a group of system administrators must share a high-level access set of credentials due to system constraints is a credential check-out process on a per-use basis. The system administrators must request and obtain the credentials from a secure source each time they need them, and return them when they finish their tasks. This reduces the risk of unauthorized access, misuse, or compromise of the credentials, and enforces accountability and traceability of the system administrators' actions. Increasing console lockout times, reducing the group size, and enabling full logging are not as effective as a credential check-out process, as they do not address the root cause of the problem, which is the sharing of the credentials.

Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 633; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 412.

Question: 109

Which of the following is the MOST efficient mechanism to account for all staff during a speedy non-emergency evacuation from a large security facility?
A. Large mantrap where groups of individuals leaving are identified using facial recognition technology
B. Radio Frequency Identification (RFID) sensors worn by each employee scanned by sensors at each exit door
C. Emergency exits with push bars with coordinators at each exit checking off the individual against a predefined list
D. Card-activated turnstile where individuals are validated upon exit

Answer: B

Explanation: Using RFID sensors worn by employees is the most efficient mechanism for accounting for staff during a speedy non-emergency evacuation in a large facility. RFID systems allow automatic, real-time tracking without manual intervention. The sensors can quickly and accurately identify when employees pass through exit points, ensuring that everyone is accounted for without delays.
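The per-use credential check-out process recommended in Question 108 can be sketched as a minimal ledger. The class and method names below are invented for illustration; a production implementation would sit behind a privileged access management tool rather than application code:

```python
import time

class CredentialCheckout:
    """Minimal check-out ledger for a shared high-level credential."""

    def __init__(self, credential_name: str):
        self.credential_name = credential_name
        self.holder = None      # administrator currently holding it
        self.audit_log = []     # (timestamp, action, admin) records

    def check_out(self, admin: str) -> bool:
        """Issue the credential on a per-use basis; refuse if already out."""
        if self.holder is not None:
            return False
        self.holder = admin
        self.audit_log.append((time.time(), "check_out", admin))
        return True

    def check_in(self, admin: str) -> bool:
        """Return the credential; only the current holder may check it in."""
        if self.holder != admin:
            return False
        self.audit_log.append((time.time(), "check_in", admin))
        self.holder = None
        return True

ledger = CredentialCheckout("shared-emergency-admin")
assert ledger.check_out("admin-1")      # issued for a single use
assert not ledger.check_out("admin-2")  # refused while checked out
assert ledger.check_in("admin-1")       # returned; every action is logged
```

Because every issue and return is recorded against a named administrator, the shared credential regains the accountability and traceability that plain sharing loses.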
Question: 110

Which of the following is the MOST challenging issue in apprehending cyber criminals?
A. They often use sophisticated methods to commit a crime.
B. It is often hard to collect and maintain integrity of digital evidence.
C. The crime is often committed from a different jurisdiction.
D. There is often no physical evidence involved.

Answer: C

Explanation: The most challenging issue in apprehending cyber criminals is that the crime is often committed from a different jurisdiction. The cyber criminals may operate from a different country or region than the victim or the target, and thus may be subject to different laws, regulations, and enforcement agencies. This can create difficulties and delays in identifying, locating, and prosecuting the cyber criminals, as well as in obtaining and preserving the digital evidence. The other issues, such as the sophistication of the methods, the integrity of the evidence, and the lack of physical evidence, are also challenges in apprehending cyber criminals, but they are not as significant as the jurisdiction issue.

Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Operations, page 475; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 544.

Question: 111

Which of the following are important criteria when designing procedures and acceptance criteria for acquired software?
A. Code quality, security, and origin
B. Architecture, hardware, and firmware
C. Data quality, provenance, and scaling
D. Distributed, agile, and bench testing

Answer: A

Explanation: Code quality, security, and origin are important criteria when designing procedures and acceptance criteria for acquired software. Code quality refers to the degree to which the software meets the functional and nonfunctional requirements, as well as the standards and best practices for coding. Security refers to the degree to which the software protects the confidentiality, integrity, and availability of the data and the system. Origin refers to the source and ownership of the software, as well as the licensing and warranty terms. Architecture, hardware, and firmware are not criteria for
acquired software, but for the system that hosts the software. Data quality, provenance, and scaling are not criteria for acquired software, but for the data that the software processes. Distributed, agile, and bench testing are not criteria for acquired software, but for the software development and testing methods.

Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 947; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 869.

Question: 112

Which of the following steps should be performed FIRST when purchasing Commercial Off-The-Shelf (COTS) software?
A. Undergo a security assessment as part of the authorization process
B. Establish a risk management strategy
C. Harden the hosting server, and perform hosting and application vulnerability scans
D. Establish policies and procedures on system and services acquisition

Answer: D

Explanation: The first step when purchasing Commercial Off-The-Shelf (COTS) software is to establish policies and procedures on system and services acquisition. This involves defining the objectives, scope, and criteria for acquiring the software, as well as the roles and responsibilities of the stakeholders involved in the acquisition process. The policies and procedures should also address the legal, contractual, and regulatory aspects of the acquisition, such as the terms and conditions, the service level agreements, and the compliance requirements. Undergoing a security assessment, establishing a risk management strategy, and hardening the hosting server are not the first steps when purchasing COTS software, but they may be part of subsequent steps, such as the evaluation, selection, and implementation of the software.

Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 64; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 56.

Question: 113

What is the PRIMARY role of a scrum master in agile development?
A. To choose the primary development language
  • 127. Questions & Answers PDF B. To choose the integrated development environment C. To match the software requirements to the delivery plan D. To project manage the software delivery Which of the following techniques is known to be effective in spotting resource exhaustion problems, especially with resources such as processes, memory, and connections? A. Automated dynamic analysis B. Automated static analysis C. Manual code review D. Fuzzing Explanation: The primary role of a scrum master in agile development is to match the software requirements to the delivery plan. A scrum master is a facilitator who helps the development team and the product owner to collaborate and deliver the software product incrementally and iteratively, following the agile principles and practices. A scrum master is responsible for ensuring that the team follows the scrum framework, which includes defining the product backlog, planning the sprints, conducting the daily stand-ups, reviewing the deliverables, and reflecting on the process. A scrum master is not responsible for choosing the primary development language, the integrated development environment, or project managing the software delivery, although they may provide guidance and support to the team on these aspects. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 933; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 855. Explanation: Fuzzing is a technique that is known to be effective in spotting resource exhaustion problems, especially with resources such as processes, memory, and connections. Fuzzing is a type of testing that involves sending random, malformed, or unexpected input to the system or application, and observing its behavior and response. 
Fuzzing can help to identify resource exhaustion problems, such as memory leaks, buffer overflows, or connection timeouts, which can affect the availability, functionality, or security of the system or application. Fuzzing can also help to discover other types of vulnerabilities, such as logic errors, input validation errors, or exception handling errors. Automated dynamic analysis, automated static analysis, and manual code review are not techniques that are known to be effective in spotting resource exhaustion problems, although they may be used for other types of testing or analysis.
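The fuzzing idea above can be sketched in a few lines: mutate a well-formed input at random, feed it to a toy parser, and treat any exception other than a graceful rejection as a fault worth investigating. The record format and the planted length-handling bug are invented purely for illustration.

```python
import random

def parse_record(data: bytes) -> int:
    """Toy parser for b'LEN:<n>:<payload>' records.
    BUG (planted for the demo): it trusts the declared length, so a record
    whose payload is shorter than <n> indexes past the end -- an unhandled
    IndexError that a fuzzer will surface."""
    header, n, payload = data.split(b":", 2)
    if header != b"LEN":
        raise ValueError("bad header")
    length = int(n)
    checksum = payload[length - 1]      # crashes if payload is too short
    return checksum

def mutate(rng: random.Random, data: bytes) -> bytes:
    """Apply one random byte-level mutation: flip, delete, or duplicate."""
    pos = rng.randrange(len(data))
    op = rng.choice(("flip", "delete", "duplicate"))
    if op == "flip":
        return data[:pos] + bytes([rng.randrange(256)]) + data[pos + 1:]
    if op == "delete":
        return data[:pos] + data[pos + 1:]
    return data[:pos] + data[pos:pos + 1] + data[pos:]

def fuzz(template: bytes, iterations: int = 500, seed: int = 7) -> dict:
    """Throw mutated inputs at the parser and tally the outcomes."""
    rng = random.Random(seed)
    outcomes = {"ok": 0, "rejected": 0, "crash": 0}
    for _ in range(iterations):
        try:
            parse_record(mutate(rng, template))
            outcomes["ok"] += 1
        except ValueError:
            outcomes["rejected"] += 1   # gracefully handled bad input
        except Exception:
            outcomes["crash"] += 1      # unhandled fault found by fuzzing
    return outcomes

print(fuzz(b"LEN:5:hello"))
```

Real fuzzers (e.g. coverage-guided ones) add feedback loops and corpus management, but the core loop of mutate, execute, and observe is the same.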
Question: 115

Which one of the following is an advantage of an effective release control strategy from a configuration control standpoint?

A. Ensures that a trace for all deliverables is maintained and auditable
B. Enforces backward compatibility between releases
C. Ensures that there is no loss of functionality between releases
D. Allows for future enhancements to existing features

Answer: A

Explanation: An advantage of an effective release control strategy from a configuration control standpoint is that it ensures that a trace for all deliverables is maintained and auditable. Release control is a process that manages the distribution and installation of software releases into the operational environment. Configuration control is a process that maintains the integrity and consistency of the software configuration items throughout the software development life cycle. An effective release control strategy can help to ensure that a trace for all deliverables is maintained and auditable, which means that the origin, history, and status of each software release can be tracked and verified. This can help to prevent unauthorized or incompatible changes, as well as to facilitate troubleshooting and recovery. Enforcing backward compatibility, ensuring no loss of functionality, and allowing for future enhancements are not advantages of release control from a configuration control standpoint, but from a functionality or performance standpoint. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 969; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 895.
Reference (Question 114): CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 1001; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 923.

Question: 116

The design review for an application has been completed and is ready for release. What technique should an organization use to assure application integrity?

A. Application authentication
B. Input validation
C. Digital signing
D. Device encryption

Answer: C

Explanation: The technique that an organization should use to assure application integrity is digital signing. Digital signing is a technique that uses cryptography to generate a digital signature for a message or a document, such as an application. The digital signature is a value that is derived from the message and the sender's private key, and it can be verified by the receiver using the sender's public key.

Question: 117

What is the BEST location in a network to place Virtual Private Network (VPN) devices when an internal review reveals network design flaws in remote access?

A. In a dedicated Demilitarized Zone (DMZ)
B. In its own separate Virtual Local Area Network (VLAN)
C. At the Internet Service Provider (ISP)
D. Outside the external firewall

Answer: A

Explanation: The best location in a network to place Virtual Private Network (VPN) devices when an internal review reveals network design flaws in remote access is in a dedicated Demilitarized Zone (DMZ). A DMZ is a network segment that is located between the internal network and the external network, such as the internet. A DMZ is used to host the services or devices that need to be accessed by both the internal and external users, such as web servers, email servers, or VPN devices. A VPN device is a device that enables the establishment of a VPN, which is a secure and encrypted connection between two networks or endpoints over a public network, such as the internet. Placing the VPN devices in a dedicated DMZ can help to improve the security and performance of the remote access, as well as to isolate the VPN devices from the internal network and the external network. Placing the VPN devices in its own separate VLAN, at the ISP, or outside the external firewall are not the best locations, as they may expose the VPN devices to more risks, reduce the control over the VPN devices, or create a single point of failure for the remote access.
Explanation (Question 116, continued): Digital signing can help to assure application integrity, which means that the application has not been altered or tampered with during the transmission or storage. Digital signing can also help to assure application authenticity, which means that the application originates from the legitimate source. Application authentication, input validation, and device encryption are not techniques that can assure application integrity, but they can help to assure application security, usability, or confidentiality, respectively. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 607; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 388.
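The sign-then-verify flow for release integrity (Question 116) can be sketched as follows. Note the hedge: a real digital signature uses an asymmetric key pair (sign with a private key, verify with the public key, e.g. RSA or ECDSA); the Python standard library has no asymmetric primitives, so this sketch substitutes an HMAC purely to show the signing, verification, and tamper-detection flow. The key material and payload names are invented for the demo.

```python
import hashlib
import hmac

# Stand-in for a digital signature: HMAC-SHA256 over the release bytes.
# In production this would be an asymmetric signature (private key signs,
# public key verifies), so the verifier never holds signing capability.
def sign(application: bytes, key: bytes) -> bytes:
    return hmac.new(key, application, hashlib.sha256).digest()

def verify(application: bytes, signature: bytes, key: bytes) -> bool:
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(sign(application, key), signature)

key = b"release-signing-key"                  # hypothetical key material
app = b"installer-v1.0 bytes..."              # hypothetical release artifact
sig = sign(app, key)

print(verify(app, sig, key))                  # True: release is intact
print(verify(app + b"backdoor", sig, key))    # False: tampering detected
```

Any modification of the signed bytes changes the digest, so verification fails, which is exactly the integrity property the answer relies on.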
Question: 118

What is the FIRST step in establishing an information security program?

A. Establish an information security policy.
B. Identify factors affecting information security.
C. Establish baseline security controls.
D. Identify critical security infrastructure.

Answer: A

Explanation: The first step in establishing an information security program is to establish an information security policy. An information security policy is a document that defines the objectives, scope, principles, and responsibilities of the information security program. An information security policy provides the foundation and direction for the information security program, as well as the basis for the development and implementation of the information security standards, procedures, and guidelines. An information security policy should be approved and supported by the senior management, and communicated and enforced across the organization. Identifying factors affecting information security, establishing baseline security controls, and identifying critical security infrastructure are not the first steps in establishing an information security program, but they may be part of the subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.

Question: 119

Which of the following is MOST effective in detecting information hiding in Transmission Control Protocol/Internet Protocol (TCP/IP) traffic?

A. Stateful inspection firewall
B. Application-level firewall
C. Content-filtering proxy
D. Packet-filter firewall

Answer: B
Reference (Question 117): CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 509.
Question: 120

Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?

A. Have the service provider block the source address.
B. Have the source service provider block the address.
C. Block the source address at the firewall.
D. Block all inbound traffic until the flood ends.

Answer: A

Explanation: The best way to reduce the impact of an externally sourced flood attack is to have the service provider block the source address. A flood attack is a type of denial-of-service attack that aims to overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from outside the target's network, such as from the internet. Having the service provider block the source address can help to reduce the impact of an externally sourced flood attack, as it can prevent the malicious traffic from reaching the target's network, and thus conserve the network bandwidth and resources. Having the source service provider block the address, blocking the source address at the firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact of an externally sourced flood attack, as they may not be feasible, effective, or efficient, respectively. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 525.

Explanation (Question 119): An application-level firewall is the most effective in detecting information hiding in TCP/IP traffic. Information hiding is a technique that conceals data or messages within other data or messages, such as using steganography, covert channels, or encryption.
An application-level firewall is a type of firewall that operates at the application layer of the OSI model, and inspects the content and context of the network packets, such as the headers, payloads, or protocols. An application-level firewall can help to detect information hiding in TCP/IP traffic, as it can analyze the data for any anomalies, inconsistencies, or violations of the expected format or behavior. A stateful inspection firewall, a content-filtering proxy, and a packet-filter firewall are not as effective in detecting information hiding in TCP/IP traffic, as they operate at lower layers of the OSI model, and only inspect the state, content, or header of the network packets, respectively. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 731; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 511.

Topic 13, NEW Questions C
Question: 121

Which of the following is used to support the principle of defense in depth during the development phase of a software product?

A. Security auditing
B. Polyinstantiation
C. Maintenance
D. Known vulnerability list

Answer: B

Explanation: Polyinstantiation is a technique that creates multiple versions of the same data with different security labels. This can prevent unauthorized users from inferring sensitive information from aggregated data or queries. Polyinstantiation can support the principle of defense in depth during the development phase of a software product by providing an additional layer of protection for data confidentiality and integrity. Reference: 1: CISSP CBK, 4th edition, page 352; 2: CISSP Official (ISC)2 Practice Tests, 3rd edition, page 123.

Question: 122

Company A is evaluating new software to replace an in-house developed application. During the acquisition process, Company A specified the security requirements, as well as the functional requirements. Company B responded to the acquisition request with their flagship product that runs on an Operating System (OS) that Company A has never used nor evaluated. The flagship product meets all security and functional requirements as defined by Company A. Based upon Company B's response, what step should Company A take?

A. Move ahead with the acquisition process, and purchase the flagship software
B. Conduct a security review of the OS
C. Perform functionality testing
D. Enter into contract negotiations ensuring Service Level Agreements (SLA) are established to include security patching

Answer: B
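The polyinstantiation technique from Question 121 can be sketched as a relation where the same primary key holds different tuples at different classification levels, so a low-cleared query cannot infer the classified row. The table contents, labels, and key names here are hypothetical.

```python
# Hypothetical polyinstantiated relation: "flight-12" has one tuple per
# classification level. A subject sees the highest instance it is cleared
# for, and never learns that a higher-labelled instance even exists.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1}

table = {
    ("flight-12", "UNCLASSIFIED"): {"cargo": "food supplies"},
    ("flight-12", "SECRET"): {"cargo": "weapons"},
}

def query(flight: str, clearance: str) -> dict:
    """Return the highest-labelled instance the subject is cleared to read."""
    best = None
    for (key, label), row in table.items():
        if key == flight and LEVELS[label] <= LEVELS[clearance]:
            if best is None or LEVELS[label] > LEVELS[best[0]]:
                best = (label, row)
    return best[1] if best else {}

print(query("flight-12", "UNCLASSIFIED"))   # {'cargo': 'food supplies'}
print(query("flight-12", "SECRET"))         # {'cargo': 'weapons'}
```

Because the unclassified query still returns a plausible row, it avoids the inference channel that an "access denied" response would create.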
Explanation (Question 122): Company A should conduct a security review of the OS that Company B's flagship product runs on, since it is unfamiliar to them and may introduce new risks or vulnerabilities. The security review should evaluate the OS's security features, patches, updates, configuration, and compatibility with Company A's environment. Moving ahead with the acquisition process without reviewing the OS, performing functionality testing, or entering into contract negotiations are premature steps that may compromise Company A's security posture. Reference: 1, p. 1019; 3, p. 15.

Question: 123

What is maintained by using write blocking devices when forensic evidence is examined?

A. Inventory
B. Integrity
C. Confidentiality
D. Availability

Answer: B

Explanation: Write blocking devices are used to prevent any modification of the forensic evidence when it is examined. This preserves the integrity of the evidence and ensures its admissibility in court. Write blocking devices do not affect the inventory, confidentiality, or availability of the evidence. Reference: 1, p. 1030; [4], p. 17.

Question: 124

DRAG DROP

Match the level of evaluation to the correct Common Criteria (CC) assurance level. Drag each level of evaluation on the left to its corresponding CC assurance level on the right.
Answer:

Functionally tested -> Assurance Level 1
Structurally tested -> Assurance Level 2
Methodically tested and checked -> Assurance Level 3
Methodically designed, tested, and reviewed -> Assurance Level 4
Semiformally designed and tested -> Assurance Level 5
Semiformally verified design and tested -> Assurance Level 6
Formally verified design and tested -> Assurance Level 7

Explanation: The Common Criteria (CC) is an international standard for evaluating the security and assurance of information technology products and systems. The CC defines seven evaluation assurance levels (EALs), ranging from EAL1 (the lowest) to EAL7 (the highest), that indicate the degree of confidence and rigor in the evaluation process. Each EAL consists of a set of assurance components that specify the requirements for the security functions, development, guidance, testing, vulnerability analysis, and life cycle support of the product or system. The CC also defines several levels of evaluation that correspond to the EALs, based on the methods and techniques used to evaluate the product or system.
The levels of evaluation are:

Functionally tested: The product or system is tested against its functional specification and provides a basic level of assurance. This level corresponds to EAL1.

Structurally tested: The product or system is tested against its functional and high-level design specifications and provides a low level of assurance. This level corresponds to EAL2.

Methodically tested and checked: The product or system is tested against its functional, high-level, and low-level design specifications and provides a moderate level of assurance. This level corresponds to EAL3.

Methodically designed, tested, and reviewed: The product or system is tested against its functional, high-level, low-level, and implementation specifications and provides a moderate to high level of assurance. This level corresponds to EAL4.

Semiformally designed and tested: The product or system is tested against its functional, high-level, low-level, and implementation specifications, using a semiformal notation and methods. This level provides a high level of assurance. This level corresponds to EAL5.

Semiformally verified design and tested: The product or system is tested against its functional, high-level, low-level, and implementation specifications, using a semiformal notation and methods, and verified against a formal security model. This level provides a higher level of assurance. This level corresponds to EAL6.

Formally verified design and tested: The product or system is tested against its functional, high-level, low-level, and implementation specifications, using a formal notation and methods, and verified against a formal security model. This level provides the highest level of assurance. This level corresponds to EAL7.

Question: 125

Which is the second phase of public key infrastructure (PKI) key/certificate life-cycle management?

A. Issued Phase
B. Cancellation Phase
C. Implementation Phase
D. Initialization Phase
Reference (Question 124): Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3: Security Engineering, Section: Security Evaluation Models, Subsection: Common Criteria; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Engineering, Section: Evaluation Criteria.
Answer: A

Explanation: The second phase of public key infrastructure (PKI) key/certificate life-cycle management is the issued phase, where the certificate authority (CA) issues a digital certificate to the requester after verifying their identity and public key. The certificate contains the public key, the identity of the owner, the validity period, the serial number, and the digital signature of the CA. The certificate is then published in a repository or directory for others to access and validate. Reference: CISSP Study Guide: Key Management Life Cycle; Key Management - OWASP Cheat Sheet Series; CISSP 2021: Software Development Lifecycles & Ecosystems.

Question: 127

Which of the following is MOST important when determining appropriate countermeasures for an identified risk?

A. Interaction with existing controls
B. Cost
C. Organizational risk tolerance
D. Patch availability

Answer: C

Explanation: The most important factor when determining appropriate countermeasures for an identified risk is the organizational risk tolerance, which is the level of risk that the organization is willing to accept or reject. The risk tolerance reflects the organization's mission, objectives, culture, and values, and influences the selection and implementation of security controls. The risk tolerance also helps to balance the cost and benefit of the countermeasures, as well as the interaction with existing controls and the availability of patches. Reference: CISSP domain 1: Security and risk management; Risk management concepts and the CISSP (part 1); Learn About the Different Types of Risk Analysis in CISSP; Risk Response, countermeasures, considerations and controls; The 8 CISSP Domains Explained.

Question: 126

Limiting the processor, memory, and Input/Output (I/O) capabilities of mobile code is known as

A. code restriction.
B. on-demand compile.
C. sandboxing.
D. compartmentalization.

Answer: C

Explanation: Mobile code is a term that refers to any code that can be transferred from one system to another and executed on the target system, such as Java applets, ActiveX controls, or JavaScript scripts. Limiting the processor, memory, and input/output (I/O) capabilities of mobile code is known as sandboxing. Sandboxing is a security technique that isolates the mobile code from the rest of the system and restricts its access to the system resources, such as files, network, or registry. Sandboxing can prevent the mobile code from causing harm or damage to the system, such as installing malware, stealing data, or modifying settings. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 431; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8: Software Development Security, page 571.

Question: 128

Which of the following security testing strategies is BEST suited for companies with low to moderate security maturity?

A. Load Testing
B. White-box testing
C. Black-box testing
D. Performance testing

Answer: C

Explanation: Black-box testing is a security testing strategy that simulates an external attack on a system or application, without any prior knowledge of its internal structure, design, or implementation. Black-box testing is best suited for companies with low to moderate security maturity, as it can reveal the most obvious and common vulnerabilities, such as misconfigurations, default credentials, or unpatched software. Black-box testing can also provide a realistic assessment of the system's security posture from an attacker's perspective. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 287; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6: Security Assessment and Testing, page 413.
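The resource-limiting idea behind sandboxing (Question 126) can be sketched at the OS level: cap a child process's address space before it runs, so a runaway allocation fails inside the sandbox instead of exhausting the host. This is a minimal sketch assuming a Linux/POSIX host (`resource.setrlimit` is POSIX-only, and `RLIMIT_AS` enforcement varies on other platforms); the limits and allocation sizes are arbitrary demo values.

```python
import subprocess
import sys
import textwrap

# Child script: install a 1 GiB address-space cap on itself, then try to
# allocate 2 GiB. Under the cap the allocation fails with MemoryError
# rather than consuming host memory.
child = textwrap.dedent("""
    import resource
    cap = 1 * 1024 * 1024 * 1024                   # 1 GiB address space
    resource.setrlimit(resource.RLIMIT_AS, (cap, cap))
    try:
        block = bytearray(2 * 1024 * 1024 * 1024)  # try to grab 2 GiB
        print("allocated")
    except MemoryError:
        print("blocked")
""")

result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())    # expected "blocked" on Linux
```

Production sandboxes layer many such restrictions (seccomp filters, namespaces, cgroups, language-level managers), but the principle is the same: deny the resource before the untrusted code asks for it.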
Question: 129

Which of the following are core categories of malicious attack against Internet of Things (IoT) devices?

A. Packet capture and false data injection
B. Packet capture and brute force attack
C. Node capture and Structured Query Language (SQL) injection
D. Node capture and false data injection

Answer: D

Explanation: Node capture and false data injection are core categories of malicious attack against Internet of Things (IoT) devices. Node capture is an attack that compromises a physical IoT device and gains access to its data, configuration, or functionality. False data injection is an attack that alters or fabricates the data transmitted or received by an IoT device, which can affect the integrity, availability, or reliability of the IoT system. These attacks can have serious consequences for IoT applications that involve critical infrastructure, health care, or smart cities. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 195; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 269.

Question: 130

What is the document that describes the measures that have been implemented or planned to correct any deficiencies noted during the assessment of the security controls?

A. Business Impact Analysis (BIA)
B. Security Assessment Report (SAR)
C. Plan of Action and Milestones (POA&M)
D. Security Assessment Plan (SAP)

Answer: C

Explanation: The document that describes the measures that have been implemented or planned to correct any deficiencies noted during the assessment of the security controls is the Plan of Action and Milestones (POA&M). A POA&M is a tool that helps to track and manage the remediation actions for the identified weaknesses or gaps in the security controls. A POA&M typically includes the following elements: the description of the weakness, the source of the weakness, the risk level of the weakness, the proposed corrective action, the responsible party, the estimated completion date, and the status of the action.
Question: 131

Which of the following is a characteristic of a challenge/response authentication process?

A. Presenting distorted graphics of text for authentication
B. Transmitting a hash based on the user's password
C. Using a password history blacklist
D. Requiring the use of non-consecutive numeric characters

Answer: B

Explanation: A characteristic of a challenge/response authentication process is transmitting a hash based on the user's password. A challenge/response authentication process is a type of authentication method that involves the exchange of a challenge and a response between the authenticator and the authenticatee. The challenge is usually a random or unpredictable value, such as a nonce or a timestamp, that is sent by the authenticator to the authenticatee. The response is usually a value that is derived from the challenge and the user's password, such as a hash or a message authentication code (MAC), that is sent by the authenticatee to the authenticator. The authenticator then verifies the response by applying the same algorithm and password to the challenge, and comparing the results. If the response matches the expected value, the authentication is successful. Transmitting a hash based on the user's password can provide a secure and efficient way of proving the user's identity, without revealing the password in plaintext or requiring the storage of the password on the authenticator.

Question: 132

DRAG DROP

Given a file containing an ordered number, i.e. "123456789," match each of the following Redundant Array of Independent Disks (RAID) levels to the corresponding visual representation. Note: P() = parity. Drag each level to the appropriate place on the diagram.
Reference (Question 131): CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 208; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Identity and Access Management, page 297.

Explanation (Question 130, continued): A POA&M can help to prioritize the remediation efforts, monitor the progress, and report the results. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 295; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6: Security Assessment and Testing, page 421.
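The challenge/response exchange described for Question 131 can be sketched as follows: the server sends a random nonce, and the client answers with a MAC keyed by a password-derived key over that nonce, so the password itself never crosses the wire. The password, key-derivation choice, and nonce size here are illustrative only (real protocols use salted, slow KDFs and store verifiers rather than raw passwords).

```python
import hashlib
import hmac
import secrets

def derive_key(password: str) -> bytes:
    # Demo-only derivation; production code would use a salted KDF
    # such as PBKDF2 or scrypt.
    return hashlib.sha256(password.encode()).digest()

def respond(password: str, challenge: bytes) -> bytes:
    """Client's answer: HMAC(password-derived key, challenge)."""
    return hmac.new(derive_key(password), challenge, hashlib.sha256).digest()

# Server side: issue an unpredictable nonce and compute the expected answer.
challenge = secrets.token_bytes(16)
expected = respond("correct horse", challenge)

# Client side: two attempts, one with the right password, one without.
good = respond("correct horse", challenge)
bad = respond("wrong password", challenge)

print(hmac.compare_digest(expected, good))   # True  -> authenticated
print(hmac.compare_digest(expected, bad))    # False -> rejected
```

Because each challenge is fresh, a captured response cannot be replayed against a later challenge, which is the main advantage over sending a static password hash.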
Answer:

RAID 1 -> Top left
RAID 0 -> Top right
RAID 5 -> Bottom left
RAID 10 -> Bottom right

Explanation: The rationale for the answer is based on the definition and characteristics of each RAID level and the given file containing ordered numbers. RAID stands for Redundant Array of Independent Disks, and it is a technology that combines multiple physical disks into a logical unit that provides improved performance, reliability, or capacity. RAID levels are the different ways of organizing and distributing data across the disks, using techniques such as mirroring, striping, or parity. Mirroring means creating an exact copy of the data on another disk, which provides fault tolerance and redundancy. Striping means dividing the data into blocks and spreading them across multiple disks, which provides speed and performance. Parity means calculating and storing an extra bit of information that can be used to reconstruct the data in case of a disk failure, which provides error correction and fault tolerance.
RAID 1 is a RAID level that uses mirroring to create an exact copy of the data on another disk. RAID 1 requires at least two disks, and it provides high reliability and availability, as the data can be accessed from either disk if one fails. However, RAID 1 does not provide any performance improvement, and it has a high storage overhead, as it duplicates the data. In the diagram, RAID 1 is represented by two disks with identical data (123456789).

RAID 0 is a RAID level that uses striping to divide the data into blocks and spread them across multiple disks. RAID 0 requires at least two disks, and it provides high performance and speed, as the data can be read or written in parallel from multiple disks. However, RAID 0 does not provide any fault tolerance or redundancy, and it has a high risk of data loss, as the failure of any disk will result in the loss of the entire data. In the diagram, RAID 0 is represented by two disks with data split between them (123 and 456789).

RAID 5 is a RAID level that uses striping with parity to distribute the data and the parity information across multiple disks. RAID 5 requires at least three disks, and it provides a balance of performance, reliability, and capacity, as the data can be read or written in parallel from multiple disks, and the data can be recovered from the parity information if one disk fails. However, RAID 5 has a performance penalty for write operations, as it requires extra calculations and disk operations to update the parity information. In the diagram, RAID 5 is represented by three disks where data is striped across two disks (123 and 789), and the third disk contains parity information (P(456+789) and P(123+456)).

RAID 10 is a RAID level that combines RAID 1 and RAID 0, meaning that it uses mirroring and striping to create a nested array of disks.
RAID 10 requires at least four disks, and it provides high performance, reliability, and availability, as the data can be read or written in parallel from multiple mirrored disks, and the data can be accessed from either disk if one fails. However, RAID 10 has a high storage overhead, as it duplicates the data, and it requires more disks and controllers to implement. In the diagram, RAID 10 is represented by four disks combining both mirroring and striping techniques (123 and 123, 456789 and 456789). Reference: [RAID]; [RAID Levels Explained]; [RAID 0, RAID 1, RAID 5, RAID 10 Explained with Diagrams].
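The parity reconstruction that makes RAID 5 fault-tolerant can be sketched with XOR: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. The three-block stripe below reuses the "123456789" file from the question; block sizes are illustrative.

```python
# XOR parity as used by RAID 5: parity = d0 ^ d1 ^ d2, so any one lost
# block equals the XOR of the remaining blocks and the parity.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

stripe = [b"123", b"456", b"789"]       # data blocks in one stripe
parity = xor_blocks(xor_blocks(stripe[0], stripe[1]), stripe[2])

# Simulate losing the middle disk and rebuilding its block from parity.
lost = stripe[1]
rebuilt = xor_blocks(xor_blocks(stripe[0], stripe[2]), parity)

print(rebuilt == lost)                  # True: the stripe is recoverable
```

This also shows why RAID 5 writes are slower: every data-block update must recompute and rewrite the parity block as well.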
Question: 133

Which of the following media is LEAST problematic with data remanence?

A. Dynamic Random Access Memory (DRAM)
B. Electrically Erasable Programmable Read-Only Memory (EEPROM)
C. Flash memory
D. Magnetic disk

Answer: A

Explanation: Dynamic Random Access Memory (DRAM) is the least problematic with data remanence. Data remanence is the residual representation of data that remains on a storage medium after it has been erased or overwritten. Data remanence poses a security risk, as it may allow unauthorized access or recovery of sensitive data. DRAM is a type of volatile memory that requires constant power to retain data. Once the power is turned off, the data stored in DRAM is quickly lost, making it difficult to recover or analyze. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 160; CISSP Testking ISC Exam Questions, Question 10.
Question: 134

Which of the following needs to be taken into account when assessing vulnerability?

A. Risk identification and validation
B. Threat mapping
C. Risk acceptance criteria
D. Safeguard selection

Answer: A

Explanation: Risk identification and validation are the factors that need to be taken into account when assessing vulnerability. A vulnerability is a weakness or a flaw in a system or an application that can be exploited by an attacker to compromise the security or the functionality of the system or the application. Vulnerability assessment is the process of identifying, analyzing, and evaluating the vulnerabilities that may affect the system or the application. Vulnerability assessment is part of the risk management process, which is the process of identifying, assessing, and mitigating the risks that may affect the organization's information systems and assets. Risk identification and validation are the steps in the risk management process that involve identifying the potential sources and causes of risk, such as threats, vulnerabilities, and impacts, and validating the accuracy and the relevance of the risk information. Risk identification and validation can help determine the scope and the priority of the vulnerability assessment, and ensure that the vulnerability assessment results are consistent and reliable.

Question: 135

Organization A is adding a large collection of confidential data records that it received when it acquired Organization B to its data store. Many of the users and staff from Organization B are no longer available. Which of the following MUST Organization A do to properly classify and secure the acquired data?

A. Assign data owners from Organization A to the acquired data.
B. Create placeholder accounts that represent former users from Organization B.
C. Archive audit records that refer to users from Organization A.
D. Change the data classification for data acquired from Organization B.

Answer: A
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 5. CISSP Practice Exam – FREE 20 Questions and Answers, Question 16. Page 143 Question: 135 Answer: A www.certifiedumps.com
  • 144. Questions & Answers PDF Which of the following is considered the PRIMARY security issue associated with encrypted e-mail messages? A. Key distribution B. Storing attachments in centralized repositories C. Scanning for viruses and other malware D. Greater costs associated for backups and restores Explanation: Encrypted e-mail messages are e-mail messages that are protected by encryption, which is a method of transforming the plaintext into ciphertext, using a secret key and an algorithm. Encryption ensures the confidentiality, integrity, and authenticity of the e-mail messages, as only the authorized parties can decrypt and read the messages, and any modification or forgery of the messages can be detected. The primary security issue associated with encrypted e-mail messages is key distribution, which is the process of securely exchanging the secret keys between the sender and the receiver of the e-mail messages. Key distribution is challenging, as it requires a secure and reliable channel, a trusted third party, or a public key infrastructure (PKI) to ensure that the keys are not compromised, intercepted, or tampered with. If the keys are not distributed properly, the encrypted e-mail messages may not be decrypted or verified by the intended parties, or may be decrypted or forged by the unauthorized parties. Storing attachments in centralized repositories is not a security issue associated with encrypted e-mail messages, as it is a method of reducing the size and the bandwidth of the e-mail messages, by storing the attachments in a cloud service or a file server, and sending only the links to the attachments in the e-mail messages. Scanning for viruses and other malware is not a security issue associated with encrypted e-mail messages, as it is a method of detecting and Explanation: Data ownership is a key concept in data security and classification. 
Data owners are responsible for defining the value, sensitivity, and classification of the data, as well as the access rights and controls for the data. When Organization A acquires data from Organization B, it should assign data owners from its own organization to the acquired data, so that they can properly classify and secure the data according to Organization A’s policies and standards. Creating placeholder accounts, archiving audit records, or changing the data classification are not sufficient or necessary steps to ensure the security of the acquired data. Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, page 67; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 2: Asset Security, Question 2.4, page 76. Page 144 Question: 136 Answer: A Answer: A www.certifiedumps.com
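One classical answer to the key-distribution problem described above is the Diffie-Hellman exchange, in which two parties derive a shared secret without ever transmitting it. A toy Python sketch follows; the 32-bit prime is for illustration only, and real deployments use authenticated 2048-bit or larger groups to resist both brute force and man-in-the-middle attacks.

```python
import secrets

# Public parameters (illustrative sizes only; real deployments use
# authenticated 2048-bit+ groups such as those in RFC 3526).
P = 0xFFFFFFFB  # a small public prime
G = 5           # public generator

a = secrets.randbelow(P - 2) + 1   # Alice's private value, never sent
b = secrets.randbelow(P - 2) + 1   # Bob's private value, never sent

A = pow(G, a, P)   # Alice transmits A in the clear
B = pow(G, b, P)   # Bob transmits B in the clear

# Each side combines its own private value with the other's public
# value; both arrive at the same shared secret, which never crossed
# the wire and can now key a symmetric cipher for the e-mail body.
shared_alice = pow(B, a, P)
shared_bob = pow(A, b, P)
assert shared_alice == shared_bob
```

In practice, e-mail systems solve the same problem with PKI (S/MIME) or webs of trust (OpenPGP) rather than an interactive exchange.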
Question: 137
The Rivest-Shamir-Adleman (RSA) algorithm is BEST suited for which of the following operations?
A. Bulk data encryption and decryption
B. One-way secure hashing for user and message authentication
C. Secure key exchange for symmetric cryptography
D. Creating digital checksums for message integrity
Answer: C
Explanation:
RSA is an asymmetric algorithm whose modular-exponentiation operations are computationally expensive, which makes it a poor choice for bulk data encryption and decryption; symmetric ciphers handle that role far more efficiently. RSA is not a hashing or checksum algorithm, so it does not provide one-way hashing or digital checksums. Its strength is that data encrypted with a recipient's public key can be decrypted only with the matching private key, which makes it best suited for securely exchanging the symmetric session keys that are then used for bulk encryption.

A security professional has been requested by the Board of Directors and Chief Information Security Officer (CISO) to perform an internal and external penetration test. What is the BEST course of action?
A. Review corporate security policies and procedures.
B. Review data localization requirements and regulations.
C. With notice to the organization, configure a Wireless Access Point (WAP) with the same Service Set Identifier (SSID) for the external test.
D. With notice to the organization, perform an external penetration test first, then an internal test.
Answer: A
Explanation:
A penetration test is a security assessment that simulates a real-world attack on a system or network to identify and exploit vulnerabilities that may compromise security. An internal penetration test is performed from within the network, assessing security from the perspective of an authorized user or insider; an external test is performed from outside, from the perspective of an unauthorized outsider. The best first step is to review the corporate security policies and procedures: the documents that define the organization's security goals, objectives, standards, and guidelines, and that specify the roles, responsibilities, and expectations of security personnel and stakeholders. This review helps the security professional understand the scope, objectives, and methodology of the test, ensures the test is aligned with the organization's security requirements and compliance obligations, and allows the professional to obtain the authorization, approval, and consent needed to perform the test legally and ethically. Reviewing data localization requirements and regulations is important, but it concerns data protection and privacy, not the first step of a penetration test. Configuring a WAP with the same SSID is a wireless networking task that has nothing to do with performing a penetration test. Giving notice to the organization and performing an external test first, then an internal test, is one possible way of conducting the engagement, but the order and method vary with the objectives, scope, and methodology, and notice alone is not the first step; the policies and procedures must be reviewed first.
Reference: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 291; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 353.

Question: 138
The disaster recovery (DR) process should always include
A. plan maintenance.
B. periodic vendor review.
C. financial data analysis.
D. periodic inventory review.
Answer: A
Explanation:
The DR process should always include plan maintenance: updating, reviewing, testing, and improving the DR plan to ensure its effectiveness and efficiency. Plan maintenance keeps the DR plan aligned with current business needs, objectives, and environment, as well as best practices and standards; it identifies and resolves gaps, issues, and weaknesses in the plan, and incorporates feedback, lessons learned, and changes from previous DR tests or events. It should be performed regularly and after any significant organizational change, such as new systems, applications, processes, or personnel. Periodic vendor review evaluates the performance, quality, and reliability of vendors that provide services such as backup, recovery, or cloud hosting; it is important, but depends on the organization's reliance on external vendors for its DR strategy. Financial data analysis helps determine the budget, resources, and priorities for the DR plan and measure its financial impact and return on investment (ROI), but it depends on the organization's financial goals and constraints. Periodic inventory review verifies, updates, and documents hardware, software, data, and supplies, but it depends on the organization's inventory management and control practices. None of these three is a mandatory element of every DR process.
Reference: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, page 734; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 696.

Question: 139
A developer begins employment with an information technology (IT) organization. On the first day, the developer works through the list of assigned projects and finds that some files within those projects are not accessible. Other developers working on the same projects have no trouble locating and working on them. What is the MOST likely explanation for the discrepancy in access?
A. The IT administrator had failed to grant the developer privileged access to the servers.
B. The project files were inadvertently deleted.
C. The new developer's computer had not been added to an access control list (ACL).
D. The new developer's user account was not associated with the right roles needed for the projects.
Answer: D
Explanation:
The most likely explanation is that the new developer's user account was not assigned the roles that correspond to the access rights for the project files. Roles group users by their functions or responsibilities within an organization and simplify the administration of access control policies. If the new developer's account lacks the right roles, the developer cannot access the files that other developers with those roles can access.
Reference: CISSP Exam Outline, Domain 5: Identity and Access Management (IAM), 5.1 Control physical and logical access to assets.

Question: 140
A technician is troubleshooting a client's report about poor wireless performance. Using a client monitor, the technician notes the following information:
[Monitor output: four WAPs with the same SSID (Corporate) operating on channels 9, 10, 11, and 6]
Which of the following is MOST likely the cause of the issue?
A. Channel overlap
B. Poor signal
C. Incorrect power settings
D. Wrong antenna type
Answer: A
Explanation:
The most likely cause of the issue is channel overlap. Channel overlap occurs when multiple wireless access points (WAPs) use the same or adjacent frequency channels, causing interference and degradation of the wireless signal. The monitor output shows four WAPs with the same SSID (Corporate) using channels 9, 10, 11, and 6. In the 2.4 GHz band, channel centers are only 5 MHz apart while each channel is roughly 22 MHz wide, so channels 9, 10, and 11 overlap one another, resulting in poor wireless performance. The issue can be resolved by moving the WAPs to non-overlapping channels, such as 1, 6, and 11.
Reference: CISSP Exam Outline, Domain 4: Communication and Network Security, 4.1 Implement secure design principles in network architectures, 4.1.3 Secure network components, 4.1.3.1 Wireless access points.

Question: 141
Why is authentication by ownership stronger than authentication by knowledge?
A. It is easier to change.
B. It can be kept on the user's person.
C. It is more difficult to duplicate.
D. It is simpler to control.
Answer: C
Explanation:
Authentication by ownership is stronger than authentication by knowledge because it is more difficult to duplicate. Authentication by ownership relies on something the user possesses, such as a smart card or a token, while authentication by knowledge relies on something the user knows, such as a password, a PIN, or a security question. An ownership factor is harder to duplicate because copying or forging it requires physical access, specialized equipment, or sophisticated techniques, whereas a knowledge factor can be guessed, cracked, or stolen through brute force, social engineering, or phishing. Ownership factors are not necessarily easier to change, simpler to control, or more convenient to keep on the user's person than knowledge factors, as these properties depend on the specific implementation.
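The key-exchange role of RSA discussed above can be illustrated with the classic textbook parameters (p = 61, q = 53). This is a sketch only: real RSA uses 2048-bit or larger moduli with padding such as OAEP, and raw ("textbook") RSA must never be used in practice.

```python
# Textbook RSA with the classic tiny parameters p = 61, q = 53.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: 2753 (Python 3.8+)

# Transport a short symmetric session key (must be < n at this toy
# size); the bulk data is then encrypted with that symmetric key,
# which is exactly why RSA suits key exchange rather than bulk work.
session_key = 65
ciphertext = pow(session_key, e, n)   # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)     # only the private-key holder decrypts
assert recovered == session_key
```

The asymmetry is the point: the slow RSA operation runs once per session to move a small key, and a fast symmetric cipher does the rest.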
Question: 142
An organization's retail website provides its only source of revenue, so the disaster recovery plan (DRP) must document an estimated time for each step in the plan. Which of the following steps in the DRP will list the GREATEST duration of time for the service to be fully operational?
A. Update the Network Address Translation (NAT) table.
B. Update Domain Name System (DNS) server addresses with the domain registrar.
C. Update the Border Gateway Protocol (BGP) autonomous system number.
D. Update the web server network adapter configuration.
Answer: B
Explanation:
The step that will list the greatest duration is updating the Domain Name System (DNS) server addresses with the domain registrar. DNS translates domain names, such as www.example.com, into IP addresses, such as 192.168.1.1, and vice versa, enabling users to reach websites and services by human-readable names rather than numerical addresses. A domain registrar manages the registration and reservation of domain names and maintains the records of each domain's DNS servers; a DNS server stores and serves the DNS records for a domain, such as its IP address, mail server, and name servers. In a disaster recovery scenario where the primary website is unavailable or inaccessible (for example, after a fire, a flood, or a cyberattack), the DRP may involve switching to a backup or alternate site hosted at a different location or with a different provider. To do that, the DRP must update the DNS server addresses with the registrar so that the domain name points to the new IP address of the backup site. This step can take the longest because it depends on the propagation of the updated DNS records across the internet, which varies with the time-to-live (TTL) values of cached records and can range from minutes to a day or more. BGP is a protocol that exchanges routing information between autonomous systems on the internet, such as ISPs, cloud providers, and enterprises, enabling efficient routing of network traffic. A web server network adapter is the hardware device that connects the web server to the network so it can send and receive packets such as HTTP requests and responses. Updating the NAT table, the BGP autonomous system number, or the web server network adapter configuration may be part of the DRP, but these steps can be done quickly and locally, and they do not depend on the propagation of DNS records across the internet.
Reference: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 19: Security Operations, page 1869.

Question: 143
A cybersecurity engineer has been tasked to research and implement an ultra-secure communications channel to protect the organization's most valuable intellectual property (IP). The primary directive in this initiative is to ensure there is no possible way the communications can be intercepted without detection. Which of the following is the only way to ensure this outcome?
A. Diffie-Hellman key exchange
B. Symmetric key cryptography
C. Public key infrastructure (PKI)
D. Quantum Key Distribution (QKD)
Answer: D
Explanation:
The only way to ensure a communications channel that cannot be intercepted without detection is Quantum Key Distribution (QKD). QKD uses the principles of quantum mechanics to generate and exchange cryptographic keys between two parties, encoding the keys on quantum particles such as photons. QKD offers the following advantages for securing communications:
It provides unconditional security, as the keys are generated and exchanged in a random, unpredictable manner and cannot be computed or guessed by any algorithm or attacker.
It ensures perfect secrecy, as the keys are used only once and then discarded, and cannot be reused or replayed.
It enables detection of intrusion, as any attempt to observe or measure the quantum particles alters their state and introduces errors or anomalies into the transmission, which the legitimate parties can notice and report.
QKD is currently limited by the distance, speed, and cost of quantum communication channels, but it is expected to become more feasible and widespread in the future, especially with the development of quantum networks and quantum computers.
Reference: CISSP All-in-One Exam Guide, Chapter 4: Communication and Network Security, Section: Quantum Cryptography, pp. 252-253.
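The eavesdropping-detection property of QKD can be sketched with a toy simulation of the BB84 protocol. The function name and the modeling are illustrative only (bases and bits reduced to coin flips): when an eavesdropper measures photons in randomly chosen bases, roughly a quarter of the sifted key bits are disturbed, which the legitimate parties detect by comparing a sample of the key.

```python
import secrets

def bb84_sift(n_photons: int, eavesdrop: bool):
    """Toy BB84 sketch (illustrative; bases and bits are coin flips).

    Returns the sifted key and the error rate the parties would see
    when comparing it; ~25% errors indicate an eavesdropper.
    """
    sifted, errors = [], 0
    for _ in range(n_photons):
        bit = secrets.randbelow(2)          # Alice's key bit
        a_basis = secrets.randbelow(2)      # Alice's encoding basis
        value, basis = bit, a_basis
        if eavesdrop:                       # Eve measures in a random basis
            e_basis = secrets.randbelow(2)
            if e_basis != basis:            # wrong basis: outcome is random
                value = secrets.randbelow(2)
            basis = e_basis                 # photon is re-sent in Eve's basis
        b_basis = secrets.randbelow(2)      # Bob's measurement basis
        measured = value if b_basis == basis else secrets.randbelow(2)
        if b_basis == a_basis:              # sifting: keep matching bases
            sifted.append(measured)
            errors += int(measured != bit)
    return sifted, (errors / len(sifted) if sifted else 0.0)
```

Without an eavesdropper the sifted bits agree perfectly; with one, the error rate jumps to about 25%, which is exactly the "interception cannot go undetected" guarantee the question describes.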
Question: 144
Which of the following BEST obtains an objective audit of security controls?
A. The security audit is measured against a known standard.
B. The security audit is performed by a certified internal auditor.
C. The security audit is performed by an independent third party.
D. The security audit produces reporting metrics for senior leadership.
Answer: C
Explanation:
The best way to obtain an objective audit of security controls is to have the audit performed by an independent third party: an entity that is not affiliated with, or influenced by, the organization or system being audited, and that has the expertise and credibility to conduct the audit. An independent third party can provide an unbiased, impartial assessment of the security controls, identify the strengths and weaknesses of the system or network, and recommend best practices for improving its security posture. The other options do not guarantee objectivity, consistency, or comprehensiveness in the audit.
Reference: CISSP CBK, Fifth Edition, Chapter 1, page 49; CISSP Practice Exam – FREE 20 Questions and Answers, Question 19.

Question: 145
At what stage of the Software Development Life Cycle (SDLC) does software vulnerability remediation MOST likely cost the least to implement?
A. Development
B. Testing
C. Deployment
D. Design
Answer: D
Explanation:
Software vulnerability remediation is the process of identifying and fixing weaknesses or flaws in a software application or system that attackers could exploit. Remediation is most likely to cost the least at the design stage of the SDLC, the phase where the requirements and specifications are defined and the architecture and components are designed. At this stage, developers can apply security principles and best practices, such as secure by design, secure by default, and secure coding, to prevent or minimize the introduction of vulnerabilities. Remediation at design is also easier and cheaper than at later stages (development, testing, or deployment), because it does not require modifying or rewriting existing code, which could introduce new errors or affect the functionality or performance of the software.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, pp. 2021-2022; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, pp. 1395-1396.

Question: 146
Which of the following is the MOST effective way to ensure the endpoint devices used by remote users are compliant with an organization's approved policies before being allowed on the network?
A. Group Policy Object (GPO)
B. Network Access Control (NAC)
C. Mobile Device Management (MDM)
D. Privileged Access Management (PAM)
Answer: B
Explanation:
The most effective way to ensure that endpoint devices used by remote users are compliant with an organization's approved policies before being allowed on the network is Network Access Control (NAC). NAC verifies and enforces the compliance of endpoint devices with the organization's security policies and standards before granting them access to the network. NAC can check attributes and characteristics of the endpoint devices, such as device type, operating system, IP address, MAC address, or user identity, and compare them with predefined criteria and rules. NAC also performs network access authentication and authorization, as well as health and compliance checks such as antivirus, firewall, or patch status. By preventing or restricting access for non-compliant or unauthorized devices, NAC reduces the security risks and vulnerabilities of the network.
Reference: CISSP CBK, Fifth Edition, Chapter 4, page 378; CISSP Practice Exam – FREE 20 Questions and Answers, Question 19.
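The posture evaluation a NAC system performs before admitting an endpoint can be sketched as a policy check over device attributes. All attribute names and policy values below are invented for illustration; they are not taken from any specific NAC product.

```python
# Hypothetical policy: attribute names and thresholds are illustrative.
POLICY = {
    "os_min_version": (10, 0),
    "antivirus_enabled": True,
    "firewall_enabled": True,
    "disk_encrypted": True,
}

def admit(endpoint: dict) -> bool:
    """Grant network access only if every posture check passes."""
    return (
        tuple(endpoint.get("os_version", (0, 0))) >= POLICY["os_min_version"]
        and endpoint.get("antivirus_enabled") is True
        and endpoint.get("firewall_enabled") is True
        and endpoint.get("disk_encrypted") is True
    )

compliant = {"os_version": (10, 3), "antivirus_enabled": True,
             "firewall_enabled": True, "disk_encrypted": True}
rogue = {"os_version": (8, 1), "antivirus_enabled": False}
```

A real NAC deployment gathers these attributes via an agent or 802.1X exchange and typically quarantines, rather than simply rejects, non-compliant devices.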
Question: 147
Which of the following is the MAIN benefit of off-site storage?
A. Cost effectiveness
B. Backup simplicity
C. Fast recovery
D. Data availability
Answer: D
Explanation:
The main benefit of off-site storage is data availability. Off-site storage keeps backup data, or copies of data, in a different location from the primary data source, such as a remote data center, a cloud storage service, or a tape vault. It improves data availability (the ability to access or use the data when needed) by providing an alternative source of data after a disaster or outage that affects the primary location, and it protects the data from theft, fire, flood, and other physical threats at that location. The other options are not the main benefits: off-site storage may incur additional costs for the transportation, maintenance, and security of the data; it may require more planning, coordination, and synchronization of the data rather than simplifying backups; and recovery speed depends on the distance, bandwidth, and format of the stored data.
Reference: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 1013; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations, page 1019.

Question: 148
Which of the following would an information security professional use to recognize changes to content, particularly unauthorized changes?
A. File Integrity Checker
B. Security information and event management (SIEM) system
C. Audit logs
D. Intrusion detection system (IDS)
Answer: A
Explanation:
The tool an information security professional would use to recognize changes to content, particularly unauthorized changes, is a File Integrity Checker. A File Integrity Checker is a security tool that monitors and verifies the integrity and authenticity of files or content by comparing their current state or version with a known, trusted baseline, using methods such as checksums, hashes, or signatures. It recognizes changes, particularly unauthorized ones, by detecting and reporting any discrepancies or anomalies between the current state and the baseline, such as the addition, deletion, modification, or corruption of files. It can also help mitigate unauthorized changes by alerting the security professional and supporting restoration of the files to the original or desired state.
Reference: CISSP CBK, Fifth Edition, Chapter 3, page 245; 100 CISSP Questions, Answers and Explanations, Question 18.

Question: 149
Which of the following attack types can be used to compromise the integrity of data during transmission?
A. Keylogging
B. Packet sniffing
C. Synchronization flooding
D. Session hijacking
Answer: B
Explanation:
Packet sniffing involves intercepting and analyzing the network traffic transmitted between hosts. It can be used to compromise the integrity of data during transmission, as the attacker can modify, delete, or inject packets into the network stream; it can also compromise confidentiality and availability, as the attacker can read, copy, or block packets. Keylogging, synchronization flooding, and session hijacking are all attacks, but they do not directly affect the integrity of data during transmission. Keylogging captures and records a user's keystrokes on a device. Synchronization (SYN) flooding sends a large number of SYN packets to a target host, exhausting its resources and denying service to legitimate requests. Session hijacking takes over an existing session between a user and a web service to impersonate the user or the service.
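The baseline-comparison approach a File Integrity Checker uses can be sketched in a few lines with SHA-256 hashes. The function names are illustrative; real tools also protect the baseline itself (e.g., signing it or storing it off-host), since an attacker who can rewrite the baseline defeats the check.

```python
import hashlib
import os

def snapshot(paths):
    """Record a SHA-256 baseline digest for each file."""
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def changed_files(baseline):
    """Compare current hashes against the baseline and report deviations."""
    report = []
    for path, digest in baseline.items():
        if not os.path.exists(path):
            report.append((path, "missing"))
            continue
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                report.append((path, "modified"))
    return report
```

Run `snapshot` over the protected files once, store the result, and schedule `changed_files` to flag any unauthorized modification or deletion.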
Question: 150
Spyware is BEST described as
A. data mining for advertising.
B. a form of cyber-terrorism.
C. an information gathering technique.
D. a web-based attack.
Answer: A
Explanation:
Spyware is malicious software that covertly collects and transmits information about a user's activities, preferences, or behavior without the user's knowledge or consent. It is best described as data mining for advertising, as the main purpose of spyware is to gather data that can be used for targeted marketing or advertising campaigns. Spyware can also compromise the user's security and privacy by exposing sensitive or personal data, consuming network bandwidth, or degrading system performance. Spyware is not a form of cyber-terrorism, as it does not intend to cause physical harm, violence, or fear. It is not a legitimate or ethical information gathering technique. It is not a web-based attack, as it exploits vulnerabilities in the user's system or browser rather than in web applications or protocols.
www.certifiedumps.com
[Limited Time Offer] Use coupon "cert20" for an extra 20% discount on the purchase of the PDF file.
Test your CISSP preparation with actual exam questions:
https://www.certifiedumps.com/isc2/cissp-dumps.html
Thank you for trying the CISSP PDF demo. Start your CISSP preparation.