When using Generic Routing Encapsulation (GRE) tunneling over Internet Protocol version 4 (IPv4), where is the GRE header inserted?
Into the options field
Between the delivery header and payload
Between the source and destination addresses
Into the destination address
Generic Routing Encapsulation (GRE) is a protocol that encapsulates a packet of one protocol type within another protocol type. When using GRE tunneling over IPv4, the GRE header is inserted between the delivery header and the payload. The delivery header contains the new source and destination IP addresses of the tunnel endpoints, while the payload contains the original IP packet. The GRE header contains information such as the protocol type, checksum, and key.
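To make the layer ordering concrete, here is a minimal sketch using the scapy packet library (an assumption: scapy is installed; all addresses are illustrative placeholders):

```python
# A minimal sketch of GRE-over-IPv4 encapsulation using scapy.
from scapy.all import IP, GRE, ICMP

# Delivery header: the outer IPv4 header addressed to the tunnel endpoints.
delivery = IP(src="198.51.100.1", dst="203.0.113.2")

# Payload: the original IP packet being tunneled.
payload = IP(src="10.0.0.1", dst="10.0.1.1") / ICMP()

# The GRE header sits between the delivery header and the payload.
pkt = delivery / GRE() / payload
pkt.show()  # layer order: IP (delivery) -> GRE -> IP (payload) -> ICMP
```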
Software Code signing is used as a method of verifying what security concept?
Integrity
Confidentiality
Availability
Access Control
Software code signing is used as a method of verifying the integrity of the software code. Integrity is the security concept that ensures that the data or code is not modified, corrupted, or tampered with by unauthorized parties. Software code signing is the process of attaching a digital signature to the software code, which is generated by applying a cryptographic hash function to the code and encrypting the hash value with the private key of the software developer or publisher. The digital signature can be verified by the software user or recipient by decrypting the signature with the public key of the developer or publisher and comparing the hash value with the hash value of the code.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 207; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 174
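The sign-then-verify flow can be illustrated with a short sketch using the Python `cryptography` package (assumptions: the package is installed, and real code signing adds certificate chains and timestamping that are omitted here):

```python
# A minimal code-signing sketch: hash-and-sign on the publisher side,
# verify on the user side. Key management is deliberately simplified.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

code = b"print('hello, world')"  # the software "code" being signed

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Publisher side: hash the code and sign the digest with the private key.
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# User side: verify the signature with the publisher's public key.
# Any modification of `code` makes verify() raise InvalidSignature,
# which is how an integrity violation is detected.
public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
print("signature valid: code unmodified")
```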
Which of the following is a remote access protocol that uses a static authentication?
Point-to-Point Tunneling Protocol (PPTP)
Routing Information Protocol (RIP)
Password Authentication Protocol (PAP)
Challenge Handshake Authentication Protocol (CHAP)
Password Authentication Protocol (PAP) is a remote access protocol that uses a static authentication method, which means that the username and password are sent in clear text over the network. PAP is considered insecure and vulnerable to eavesdropping and replay attacks, as anyone who can capture the network traffic can obtain the credentials. PAP is supported by Point-to-Point Protocol (PPP), which is a common protocol for establishing remote connections over dial-up, broadband, or wireless networks. PAP is usually used as a fallback option when more secure protocols, such as Challenge Handshake Authentication Protocol (CHAP) or Extensible Authentication Protocol (EAP), are not available or compatible.
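A simplified sketch of a PAP Authenticate-Request frame (the RFC 1334 layout, trimmed for illustration; the credentials are made-up) shows why the authentication is static and trivially readable from a capture:

```python
# Simplified PAP Authenticate-Request: code(1)=1, identifier(1),
# length(2), then length-prefixed peer-id and password fields.
import struct

username, password = b"alice", b"s3cret"
body = bytes([len(username)]) + username + bytes([len(password)]) + password
frame = struct.pack("!BBH", 1, 1, 4 + len(body)) + body  # code=1 (Auth-Req)

# An eavesdropper capturing this frame recovers the password directly:
print(password in frame)  # True -- no hashing, no challenge, no encryption
```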
Which of the following is MOST important when deploying digital certificates?
Validate compliance with X.509 digital certificate standards
Establish a certificate life cycle management framework
Use a third-party Certificate Authority (CA)
Use no less than 256-bit strength encryption when creating a certificate
According to the CISSP All-in-One Exam Guide, the most important thing when deploying digital certificates is to establish a certificate life cycle management framework. A digital certificate is a digital document that binds the identity and public key of an entity, such as a person, device, or organization, and is issued and signed by a trusted authority, such as a Certificate Authority (CA). A certificate life cycle management framework is the set of policies, processes, and procedures that define how certificates are created, issued, distributed, stored, used, renewed, revoked, and retired. Such a framework ensures that certificates remain valid, current, and trustworthy, that they meet the security and operational requirements of the organization and its users, and that risks such as certificate expiration, compromise, misuse, or fraud are prevented or mitigated. Validating compliance with X.509 standards is good practice: X.509 defines the format and structure of certificates and the protocols for certificate management, such as the Certificate Revocation List (CRL) and the Online Certificate Status Protocol (OCSP), and compliance ensures interoperability, but it does not address the entire certificate life cycle. Using a third-party CA can be a convenient and cost-effective option that reduces the complexity of maintaining an internal CA, but it introduces challenges of dependency, trust, liability, and compliance, and is not the most important consideration. Using no less than 256-bit strength encryption when creating a certificate helps the certificate resist brute-force and other cryptographic attacks, but it, too, does not address the entire certificate life cycle.
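One small piece of such a framework is automated expiry monitoring. The following sketch assumes the Python `cryptography` package and an illustrative file path; `not_valid_after_utc` requires a recent version of the package (older versions expose a naive `not_valid_after`):

```python
# A minimal lifecycle-monitoring sketch: flag certificates nearing expiry.
from datetime import datetime, timedelta, timezone
from cryptography import x509

def expiring_soon(pem_path, days=30):
    """Return True if the certificate expires within `days` days."""
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    cutoff = datetime.now(timezone.utc) + timedelta(days=days)
    return cert.not_valid_after_utc <= cutoff

# Example (hypothetical path):
# print(expiring_soon("/etc/pki/server.pem"))
```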
Which of the following analyses is performed to protect information assets?
Business impact analysis
Feasibility analysis
Cost benefit analysis
Data analysis
The analysis that is performed to protect information assets is the cost benefit analysis, which is a method of comparing the costs and benefits of different security solutions or alternatives. The cost benefit analysis helps to justify the investment in security controls and measures by evaluating the trade-offs between the security costs and the security benefits. The security costs include the direct and indirect expenses of acquiring, implementing, operating, and maintaining the security controls and measures. The security benefits include the reduction of risks, losses, and liabilities, as well as the enhancement of productivity, performance, and reputation. The other options are not the analysis that is performed to protect information assets, but rather different types of analyses. A business impact analysis is a method of identifying and quantifying the potential impacts of disruptive events on the organization’s critical business functions and processes. A feasibility analysis is a method of assessing the technical, operational, and economic viability of a proposed project or solution. A data analysis is a method of processing, transforming, and modeling data to extract useful information and insights. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 28; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, p. 21; CISSP practice exam questions and answers, Question 10.
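A worked example using the standard annualized loss expectancy formula (ALE = SLE x ARO), with hypothetical figures, shows how the costs and benefits of a control are compared:

```python
# Cost-benefit calculation for a security control; all figures hypothetical.
sle = 50_000          # single loss expectancy ($ per incident)
aro_before = 0.4      # annualized rate of occurrence without the control
aro_after = 0.1       # annualized rate with the control in place
control_cost = 8_000  # annual cost of the control

ale_before = sle * aro_before                          # $20,000
ale_after = sle * aro_after                            # $5,000
net_benefit = (ale_before - ale_after) - control_cost  # $7,000
print(f"annual net benefit of control: ${net_benefit:,.0f}")
```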
A software security engineer is developing a black box-based test plan that will measure the system's reaction to incorrect or illegal inputs or unexpected operational errors and situations. Match the functional testing techniques on the left with the correct input parameters on the right.
The matching diagram for this question is not reproduced here; the correct pairings follow the standard definitions of each functional testing technique and its input parameter selection criteria.
References: CISSP Official (ISC)2 Practice Tests, Chapter 8, page 220; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 389
Which of the following roles has the obligation to ensure that a third party provider is capable of processing and handling data in a secure manner and meeting the standards set by the organization?
Data Custodian
Data Owner
Data Creator
Data User
The role obligated to ensure that a third-party provider can process and handle data securely and meet the organization's standards is the data owner. A data owner is the person or entity with authority and responsibility for data within an organization, and determines its classification, usage, protection, and retention. The data owner remains ultimately accountable for the security and quality of the data regardless of who processes or handles it, and therefore must ensure that any third-party provider is capable of meeting the organization's standards. The data owner can do so by conducting due diligence, establishing service level agreements, defining security requirements, monitoring performance, and auditing compliance. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 61; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 67
How does Encapsulating Security Payload (ESP) in transport mode affect the Internet Protocol (IP)?
Encrypts and optionally authenticates the IP header, but not the IP payload
Encrypts and optionally authenticates the IP payload, but not the IP header
Authenticates the IP payload and selected portions of the IP header
Encrypts and optionally authenticates the complete IP packet
Encapsulating Security Payload (ESP) in transport mode affects the Internet Protocol (IP) by encrypting and optionally authenticating the IP payload, but not the IP header. ESP is a protocol that provides confidentiality, integrity, and authentication for data transmitted over a network. ESP can operate in two modes: transport mode and tunnel mode. In transport mode, ESP only protects the data or payload of the IP packet, while leaving the IP header intact and visible. This mode is suitable for end-to-end communication between two hosts. In tunnel mode, ESP protects the entire IP packet, including the header and the payload, by encapsulating it within another IP packet. This mode is suitable for gateway-to-gateway or host-to-gateway communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 345; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 464.
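The difference between the two modes can be sketched with a non-cryptographic illustration (encrypt() below is a placeholder stand-in, not real encryption):

```python
# Illustrative layout of what ESP protects in each mode.
def encrypt(data):                     # placeholder for the ESP cipher
    return b"<enc:" + data + b">"

ip_header, payload = b"[IP hdr]", b"[TCP seg + data]"

# Transport mode: the original IP header stays in the clear; only the
# payload (plus ESP trailer) is encrypted.
transport = ip_header + b"[ESP hdr]" + encrypt(payload + b"[ESP trl]")

# Tunnel mode: the entire original packet is encrypted and wrapped in a
# new outer IP header addressed to the gateways.
tunnel = (b"[new IP hdr]" + b"[ESP hdr]"
          + encrypt(ip_header + payload + b"[ESP trl]"))

print(transport)
print(tunnel)
```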
The application of a security patch to a product previously validated at Common Criteria (CC) Evaluation Assurance Level (EAL) 4 would
require an update of the Protection Profile (PP).
require recertification.
retain its current EAL rating.
reduce the product to EAL 3.
Common Criteria (CC) is an international standard for evaluating the security of IT products and systems. Evaluation Assurance Level (EAL) is a numerical grade that indicates the level of assurance and rigor of the evaluation process. EAL ranges from 1 (lowest) to 7 (highest). A product that has been validated at EAL 4 has been methodically designed, tested, and reviewed, and provides a moderate level of independently assured security. The application of a security patch to a product previously validated at EAL 4 would require recertification, as the patch may introduce new vulnerabilities or affect the security functionality of the product. The recertification process would ensure that the patched product still meets the EAL 4 requirements and does not compromise the security claims of the original evaluation. Updating the Protection Profile (PP), retaining the current EAL rating, or reducing the product to EAL 3 are not valid options, as they do not reflect the impact of the security patch on the product’s security assurance.
Which of the following BEST represents the concept of least privilege?
Access to an object is denied unless access is specifically allowed.
Access to an object is only available to the owner.
Access to an object is allowed unless it is protected by the information security policy.
Access to an object is only allowed to authenticated users via an Access Control List (ACL).
According to the CISSP CBK Official Study Guide, the concept of least privilege means that users and processes should only have the minimum access required to perform their tasks, and no more. This reduces the risk of unauthorized or malicious actions, as well as the impact of potential incidents. One way to implement the principle of least privilege is a default-deny policy, in which access to an object is denied unless access is specifically allowed; this is also known as a whitelist approach, which grants access only to predefined, authorized entities. Restricting access to the owner alone is not a good representation of least privilege, as it may prevent legitimate access by other authorized users or processes. Allowing access unless it is protected by the information security policy is not a good representation either, as it permits unnecessary or excessive access by default; this is a blacklist approach, which denies access only to predefined, unauthorized entities. Allowing access only to authenticated users via an Access Control List (ACL) does not by itself capture least privilege: authentication verifies the identity of a user or process, while authorization grants or denies access based on identity and policy, and an ACL defines permissions and restrictions for an object without necessarily enforcing the principle of least privilege.
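A minimal default-deny check looks like the following sketch (subjects and actions are illustrative):

```python
# Default-deny access check: access is refused unless an explicit
# (subject, action) grant exists -- whitelist semantics.
ACL = {("alice", "read"), ("alice", "write"), ("bob", "read")}

def is_allowed(subject, action):
    # Anything not explicitly granted is denied by default.
    return (subject, action) in ACL

print(is_allowed("bob", "read"))   # True  -- explicitly allowed
print(is_allowed("bob", "write"))  # False -- denied by default
```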
What does an organization FIRST review to assure compliance with privacy requirements?
Best practices
Business objectives
Legal and regulatory mandates
Employees' compliance with policies and standards
The first thing an organization reviews to assure compliance with privacy requirements is the legal and regulatory mandates that apply to its business operations and data processing activities. Legal and regulatory mandates are the laws, regulations, standards, and contracts that govern how an organization must protect the privacy of personal information and the rights of data subjects. An organization must identify and understand the mandates relevant to its jurisdiction, industry, and data types, and implement the appropriate controls and measures to comply with them. The other options are not the first thing reviewed, but rather parts of the privacy compliance program: best practices are recommended methods for achieving privacy objectives but are not mandatory or binding; business objectives are the goals and strategies an organization pursues to create value, which may not align with privacy requirements; and employees' compliance with policies and standards is the degree to which staff adhere to internal privacy rules, which is a measurement and enforcement activity rather than a review activity. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, p. 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 287.
Which of the following is the PRIMARY concern when using an Internet browser to access a cloud-based service?
Insecure implementation of Application Programming Interfaces (API)
Improper use and storage of management keys
Misconfiguration of infrastructure allowing for unauthorized access
Vulnerabilities within protocols that can expose confidential data
The primary concern when using an Internet browser to access a cloud-based service is the vulnerabilities within protocols that can expose confidential data. Protocols are the rules and formats that govern the communication and exchange of data between systems or applications. Protocols can have vulnerabilities or flaws that can be exploited by attackers to intercept, modify, or steal the data. For example, some protocols may not provide adequate encryption, authentication, or integrity for the data, or they may have weak or outdated algorithms, keys, or certificates. When using an Internet browser to access a cloud-based service, the data may be transmitted over various protocols, such as HTTP, HTTPS, SSL, TLS, etc. If any of these protocols are vulnerable, the data may be compromised, especially if the data is sensitive or confidential. Therefore, it is important to use secure and updated protocols, as well as to monitor and patch any vulnerabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 338; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 456.
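One practical mitigation is to enforce a modern protocol floor in the client. The following sketch uses Python's standard ssl module (the hostname is a placeholder):

```python
# Refuse legacy protocol versions before talking to a cloud service.
import socket
import ssl

ctx = ssl.create_default_context()            # verifies certificates by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3 / TLS 1.0 / TLS 1.1

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
```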
Which of the following has the GREATEST impact on an organization's security posture?
International and country-specific compliance requirements
Security violations by employees and contractors
Resource constraints due to increasing costs of supporting security
Audit findings related to employee access and permissions process
The factor with the greatest impact on an organization's security posture is international and country-specific compliance requirements. Compliance requirements are the rules and regulations an organization must follow to meet the standards and expectations of authorities and stakeholders such as governments, customers, and auditors. They vary by location, industry, and type of organization, and they shape the organization's security policies, controls, and practices. Compliance requirements have a significant impact on the security posture because they influence the organization's security objectives, risks, and resources, and because non-compliance or violations can result in penalties or sanctions. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 23; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 31
How should an organization determine the priority of its remediation efforts after a vulnerability assessment has been conducted?
Use an impact-based approach.
Use a risk-based approach.
Use a criticality-based approach.
Use a threat-based approach.
According to CISSP For Dummies, the best way to prioritize remediation efforts after a vulnerability assessment is a risk-based approach. A vulnerability assessment identifies and measures the weaknesses and exposures in a system, network, or application that threats may exploit to harm the organization or its assets. A risk-based approach prioritizes remediation by the level of risk associated with each vulnerability, calculated from both the impact and the likelihood of the threat exploiting it, so that resources and effort go to the most critical and urgent vulnerabilities first and the overall risk is reduced to an acceptable level. The other approaches each consider only one dimension and can therefore overestimate or underestimate the risk of some vulnerabilities: an impact-based approach looks only at the potential consequences, not the probability of occurrence; a criticality-based approach looks only at the importance or value of the affected asset, not the threat or the vulnerability itself; and a threat-based approach looks only at the characteristics and capabilities of the threat, not the vulnerability or its impact.
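A minimal sketch of risk-based triage, with hypothetical findings and scores, computes risk as likelihood x impact and remediates in descending order:

```python
# Risk-based remediation ordering; findings and scores are hypothetical.
findings = [
    {"id": "CVE-A", "likelihood": 0.9, "impact": 4},
    {"id": "CVE-B", "likelihood": 0.2, "impact": 9},
    {"id": "CVE-C", "likelihood": 0.7, "impact": 7},
]

for f in findings:
    f["risk"] = f["likelihood"] * f["impact"]

# Highest risk first: this is the remediation queue.
for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
    print(f'{f["id"]}: risk={f["risk"]:.1f}')
```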
Which of the following methods can be used to achieve confidentiality and integrity for data in transit?
Multiprotocol Label Switching (MPLS)
Internet Protocol Security (IPSec)
Federated identity management
Multi-factor authentication
IPSec provides confidentiality and integrity for data in transit by encrypting and authenticating all IP packets exchanged between participating devices.
A Simple Power Analysis (SPA) attack against a device directly observes which of the following?
Static discharge
Consumption
Generation
Magnetism
A Simple Power Analysis (SPA) attack against a device directly observes the consumption of power by the device. SPA is a type of side channel attack that exploits the variations in the power consumption of a device, such as a smart card or a cryptographic module, to infer information about the operations or data processed by the device. SPA can reveal the type, length, or sequence of instructions executed by the device, or the value of the secret key or data used by the device. The other options are not directly observed by SPA, but rather different aspects or effects of power. Static discharge is the sudden flow of electricity between two objects with different electric potentials. Generation is the process of producing electric power from other sources of energy. Magnetism is the physical phenomenon of attraction or repulsion between magnetic materials or fields. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, p. 525; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, p. 163.
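A toy illustration of the principle: in a naive square-and-multiply modular exponentiation, each 1-bit of the secret exponent triggers an extra multiply, so a recorded trace of operations (a stand-in for a power trace; all values are arbitrary) reveals the key bits:

```python
# Why naive square-and-multiply leaks to SPA: the operation sequence
# mirrors the exponent bits, just as its power consumption would.
def modexp_leaky(base, exponent, modulus, trace):
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus
        trace.append("S")                    # square: happens every bit
        if bit == "1":
            result = (result * base) % modulus
            trace.append("M")                # multiply: only for 1-bits
    return result

trace = []
modexp_leaky(7, 0b101101, 2**16 + 1, trace)
# An 'SM' pair marks a 1-bit, a lone 'S' marks a 0-bit -> exponent recovered.
print("".join(trace))  # SMSSMSMSSM
```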
What is the MOST important element when considering the effectiveness of a training program for Business Continuity (BC) and Disaster Recovery (DR)?
Management support
Consideration of organizational need
Technology used for delivery
Target audience
The effectiveness of a BC/DR training program largely depends on management support, because it ensures that adequate resources, prioritization, and enforcement of policies are in place to make the training effective across the organization.
The restoration priorities of a Disaster Recovery Plan (DRP) are based on which of the following documents?
Service Level Agreement (SLA)
Business Continuity Plan (BCP)
Business Impact Analysis (BIA)
Crisis management plan
According to the CISSP All-in-One Exam Guide, the restoration priorities of a Disaster Recovery Plan (DRP) are based on the Business Impact Analysis (BIA). A DRP defines the procedures and actions to be taken when a disaster disrupts normal operations, and a restoration priority is the order in which critical business processes and functions, together with their supporting resources (data, systems, personnel, and facilities), are restored afterward. A BIA assesses the potential impact of a disaster on those processes, functions, and resources; it identifies and prioritizes the critical business functions, establishes their recovery objectives and time frames, and maps their dependencies and interdependencies. The restoration priorities of a DRP are therefore based on the BIA, because it supplies the information and analysis needed to plan and execute the recovery strategy. A Service Level Agreement (SLA), which defines the expected quality and performance of a service, may influence restoration priorities but does not provide that analysis. A Business Continuity Plan (BCP) is often aligned or integrated with a DRP, but it focuses on the continuity of essential operations during and after a disaster rather than on their recovery, and it also covers aspects such as prevention, mitigation, and response. A crisis management plan may likewise be aligned with a DRP, but it focuses on managing and resolving the crisis itself, including communication, coordination, and escalation.
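A minimal sketch of turning BIA output into a restoration order, with hypothetical functions and Recovery Time Objectives (RTOs), restores the shortest-RTO functions first:

```python
# Derive DRP restoration order from BIA results; data is hypothetical.
bia = [
    {"function": "order processing", "rto_hours": 4},
    {"function": "email",            "rto_hours": 24},
    {"function": "payroll",          "rto_hours": 72},
]

# Shortest RTO first: these functions must come back soonest.
for rank, item in enumerate(sorted(bia, key=lambda x: x["rto_hours"]), 1):
    print(f'{rank}. {item["function"]} (RTO {item["rto_hours"]}h)')
```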
An organization regularly conducts its own penetration tests. Which of the following scenarios MUST be covered for the test to be effective?
Third-party vendor with access to the system
System administrator access compromised
Internal attacker with access to the system
Internal user accidentally accessing data
The scenario that must be covered for the penetration test to be effective is the third-party vendor with access to the system. A third-party vendor is an external entity that provides a service or product to the organization, such as a software developer, cloud provider, or payment processor. A vendor with access to the system is a significant source of risk: it may introduce or expose weaknesses in the system's configuration, authentication, or encryption, and it can itself be compromised and used as an attack vector to gain unauthorized access or to steal, modify, or delete data. Because this access originates outside the organization's direct control, the test must cover it so that the resulting security gaps can be identified, assessed, and mitigated with appropriate safeguards.
The other scenarios are worth covering in a more comprehensive test program, but they are not essential to this test's effectiveness. A compromised system administrator account, in which an administrator's credentials are stolen or misused to act with elevated privileges, and an internal attacker with access to the system, in which an employee, contractor, or partner uses legitimate or illegitimate access to perform malicious actions, both describe insider-style threats that should be addressed elsewhere in the security program. An internal user accidentally accessing data is an unintentional event rather than a deliberate attack scenario, and is better addressed through access controls and awareness training than through penetration testing.
A security professional has been asked to evaluate the options for the location of a new data center within a multifloor building. Concerns for the data center include emanations and physical access controls.
Which of the following is the BEST location?
On the top floor
In the basement
In the core of the building
In an exterior room with windows
The best location for a new data center within a multifloor building is in the core of the building. This location can minimize the emanations and enhance the physical access controls. Emanations are the electromagnetic signals or radiation that are emitted by electronic devices, such as computers, servers, or network equipment. Emanations can be intercepted or captured by attackers to obtain sensitive or confidential information. Physical access controls are the measures that prevent or restrict unauthorized or malicious access to physical assets, such as data centers, servers, or network devices. Physical access controls can include locks, doors, gates, fences, guards, cameras, alarms, etc. The core of the building is the central part of the building that is usually surrounded by other rooms or walls. This location can reduce the emanations by creating a shielding effect and increasing the distance from the potential attackers. The core of the building can also improve the physical access controls by limiting the entry points and visibility of the data center. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Engineering, p. 133; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3: Security Engineering, p. 295.
Which of the following BEST describes a rogue Access Point (AP)?
An AP that is not protected by a firewall
An AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES)
An AP connected to the wired infrastructure but not under the management of authorized network administrators
An AP infected by any kind of Trojan or Malware
A rogue Access Point (AP) is an AP connected to the wired infrastructure but not under the management of authorized network administrators. A rogue AP can pose a serious security threat, as it can allow unauthorized access to the network, bypass security controls, and expose sensitive data. The other options are not correct descriptions of a rogue AP. Option A is a description of an unsecured AP, which is an AP that is not protected by a firewall or other security measures. Option B is a description of an outdated AP, which is an AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES), which are weak encryption methods that can be easily cracked. Option D is a description of a compromised AP, which is an AP infected by any kind of Trojan or Malware, which can cause malicious behavior or damage to the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, p. 325; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, p. 241.
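A minimal detection sketch compares observed BSSIDs against the list of managed APs (the observed data is hardcoded here; in practice it would come from a wireless scan or a controller export):

```python
# Flag APs seen on the air that are not under authorized management.
AUTHORIZED = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

observed = {
    "aa:bb:cc:00:00:01": "CorpWiFi",
    "de:ad:be:ef:00:99": "CorpWiFi",   # same SSID, unmanaged radio
}

for bssid, ssid in observed.items():
    if bssid not in AUTHORIZED:
        print(f"possible rogue AP: {ssid} ({bssid})")
```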
In the Software Development Life Cycle (SDLC), maintaining accurate hardware and software inventories is a critical part of
systems integration.
risk management.
quality assurance.
change management.
According to the CISSP CBK Official Study Guide, the part of the Software Development Life Cycle (SDLC) that requires maintaining accurate hardware and software inventories is change management. The SDLC is a structured process for designing, developing, and testing quality software across its whole life, from initial concept to deployment and maintenance, with the aim of delivering maintainable software that meets requirements within budget and schedule. Change management is the process of controlling the changes made to the software or system during the SDLC through policies, procedures, and tools, in order to prevent or minimize the risks of changes that could introduce errors, defects, or vulnerabilities. Accurate hardware and software inventories are a critical part of change management because they provide a reliable record of the components involved in the system and of the changes made to them (name, description, version, status), which enables changes to be tracked, evaluated, audited, and, where necessary, rolled back.
Systems integration, by contrast, is concerned with combining hardware and software components through interfaces, protocols, and standards so that they operate together as intended; it may benefit from change management, but maintaining inventories is not its purpose. Risk management identifies, analyzes, evaluates, and treats the risks that may affect the software or system; it is supported by change management and by accurate inventories, but they are not its defining requirement. Quality assurance verifies the quality and performance of the software through testing, validation, and verification; it, too, benefits from change management, but maintaining accurate hardware and software inventories is not its primary objective.
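A minimal sketch of how an accurate inventory supports change management: diff the current software inventory against the approved baseline so that unapproved changes surface immediately (package names and versions are hypothetical):

```python
# Compare current inventory to the change-approved baseline.
baseline = {"openssl": "3.0.13", "nginx": "1.24.0"}
current  = {"openssl": "3.0.14", "nginx": "1.24.0", "netcat": "1.10"}

changed = {p: (baseline[p], v) for p, v in current.items()
           if p in baseline and baseline[p] != v}
added = set(current) - set(baseline)

print("modified without approval?", changed)  # {'openssl': ('3.0.13', '3.0.14')}
print("installed without approval?", added)   # {'netcat'}
```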
Which of the following are Systems Engineering Life Cycle (SELC) Technical Processes?
Concept, Development, Production, Utilization, Support, Retirement
Stakeholder Requirements Definition, Architectural Design, Implementation, Verification, Operation
Acquisition, Measurement, Configuration Management, Production, Operation, Support
Concept, Requirements, Design, Implementation, Production, Maintenance, Support, Disposal
The Systems Engineering Life Cycle (SELC) Technical Processes are the activities that transform stakeholder needs into a system solution. They include the following five processes: Stakeholder Requirements Definition, Architectural Design, Implementation, Verification, and Operation.
Which of the following is BEST suited for exchanging authentication and authorization messages in a multi-party decentralized environment?
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Internet Mail Access Protocol
Transport Layer Security (TLS)
Security Assertion Markup Language (SAML) is best suited for exchanging authentication and authorization messages in a multi-party decentralized environment. SAML is an XML-based standard that enables single sign-on (SSO) and federated identity management (FIM) between different domains and organizations. SAML allows a user to authenticate once at an identity provider (IdP) and access multiple service providers (SPs) without re-authenticating, by using assertions that contain information about the user’s identity, attributes, and privileges. SAML also allows SPs to request and receive authorization decisions from the IdP, based on the user’s access rights and policies. SAML is designed to support a decentralized and distributed environment, where multiple parties can exchange and verify the user’s identity and authorization information in a secure and interoperable manner. Lightweight Directory Access Protocol (LDAP) is not best suited for exchanging authentication and authorization messages in a multi-party decentralized environment, as it is a protocol that enables access and management of directory services, such as Active Directory or OpenLDAP. LDAP is used to store and retrieve information about users, groups, devices, and other objects in a hierarchical and structured manner, but it does not provide a mechanism for SSO or FIM across different domains and organizations. Internet Mail Access Protocol is not best suited for exchanging authentication and authorization messages in a multi-party decentralized environment, as it is a protocol that enables access and management of email messages stored on a remote server. IMAP is used to retrieve and manipulate email messages from multiple devices and clients, but it does not provide a mechanism for SSO or FIM across different domains and organizations. Transport Layer Security (TLS) is not best suited for exchanging authentication and authorization messages in a multi-party decentralized environment, as it is a protocol that provides security and encryption for data transmission over a network, such as the internet. TLS is used to establish a secure and authenticated channel between two parties, such as a web browser and a web server, but it does not provide a mechanism for SSO or FIM across different domains and organizations.
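A heavily simplified assertion skeleton, built with Python's standard library only to show the shape of the message an IdP asserts to an SP (real assertions are XML-signed and carry conditions, audiences, and validity windows; all identifiers are placeholders):

```python
# Minimal SAML 2.0 assertion skeleton -- structure only, no signature.
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"
assertion = ET.Element(f"{{{NS}}}Assertion", {"ID": "_demo1", "Version": "2.0"})
ET.SubElement(assertion, f"{{{NS}}}Issuer").text = "https://idp.example.com"
subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
ET.SubElement(subject, f"{{{NS}}}NameID").text = "alice@example.com"

print(ET.tostring(assertion, encoding="unicode"))
```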
In the network design below, where is the MOST secure Local Area Network (LAN) segment to deploy a Wireless Access Point (WAP) that provides contractors access to the Internet and authorized enterprise services?
LAN 4
The most secure LAN segment to deploy a WAP that provides contractors access to the Internet and authorized enterprise services is LAN 4. A WAP is a device that enables wireless devices to connect to a wired network using Wi-Fi, Bluetooth, or other wireless standards. A WAP can provide convenience and mobility for the users, but it can also introduce security risks, such as unauthorized access, eavesdropping, interference, or rogue access points. Therefore, a WAP should be deployed in a secure LAN segment that can isolate the wireless traffic from the rest of the network and apply appropriate security controls and policies. LAN 4 is connected to the firewall that separates it from the other LAN segments and the Internet. This firewall can provide network segmentation, filtering, and monitoring for the WAP and the wireless devices. The firewall can also enforce the access rules and policies for the contractors, such as allowing them to access the Internet and some authorized enterprise services, but not the other LAN segments that may contain sensitive or critical data or systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 317; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 437.
A network scan found 50% of the systems with one or more critical vulnerabilities. Which of the following represents the BEST action?
Assess vulnerability risk and program effectiveness.
Assess vulnerability risk and business impact.
Disconnect all systems with critical vulnerabilities.
Disconnect systems with the most number of vulnerabilities.
The best action after finding 50% of the systems with one or more critical vulnerabilities is to assess the vulnerability risk and business impact. This means to evaluate the likelihood and severity of the vulnerabilities being exploited, as well as the potential consequences and costs for the business operations and objectives. This assessment can help prioritize the remediation efforts, allocate the resources, and justify the investments.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 343; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 304
What is the difference between media marking and media labeling?
Media marking refers to the use of human-readable security attributes, while media labeling refers to the use of security attributes in internal data structures.
Media labeling refers to the use of human-readable security attributes, while media marking refers to the use of security attributes in internal data structures.
Media labeling refers to security attributes required by public policy/law, while media marking refers to security required by internal organizational policy.
Media marking refers to security attributes required by public policy/law, while media labeling refers to security attributes required by internal organizational policy.
According to the CISSP CBK Official Study Guide, the difference is that media labeling refers to the use of human-readable security attributes, while media marking refers to the use of security attributes in internal data structures. Both are methods of applying security attributes to media, the physical devices or materials that store data, such as disks, tapes, or paper. Security attributes are the tags or markers that indicate the classification, sensitivity, or clearance of the media or the information it holds, such as top secret, secret, or confidential; they help protect against unauthorized access, disclosure, modification, or loss, and they support access control and audit mechanisms. Media labeling applies the attributes in a human-readable form, such as words, symbols, or colors printed, stamped, or affixed on the media, so that users and handlers can identify the media and handle and dispose of it properly. Media marking applies the attributes in an internal data structure form, such as bits, bytes, or fields embedded or encoded in the media, so that systems can verify and validate the media and enforce access control and auditing. The options that distinguish marking from labeling by whether public policy/law or internal organizational policy requires the attributes are incorrect: both marking and labeling may be required by either source, for example Controlled Unclassified Information (CUI), Personally Identifiable Information (PII), or internal proprietary classifications. The difference lies in how the attributes are represented on the media, not in who requires them.
Which methodology is recommended for penetration testing to be effective in the development phase of the life-cycle process?
White-box testing
Software fuzz testing
Black-box testing
Visual testing
White-box testing is recommended during the development phase as it involves the examination of the application’s source code and design documents to identify vulnerabilities, ensuring that security is integrated into the development lifecycle. References: CISSP Official (ISC)2 Practice Tests, Chapter 8, page 219
What is the PRIMARY goal for using Domain Name System Security Extensions (DNSSEC) to sign records?
Integrity
Confidentiality
Accountability
Availability
The primary goal of using Domain Name System Security Extensions (DNSSEC) to sign records is integrity. DNSSEC is a set of extensions to the Domain Name System (DNS), the protocol that resolves domain names to IP addresses and vice versa. DNSSEC protects DNS by using digital signatures and cryptographic keys to sign and verify DNS records, such as A, AAAA, and MX records. Its goal is integrity: ensuring that DNS data is authentic, accurate, and reliable, and that it has not been modified, altered, or corrupted by attackers who intercept or manipulate DNS queries and responses on the network. DNSSEC achieves this with public key (asymmetric) cryptography, generating and validating the digital signatures attached to DNS records so that resolvers can prove the origin, identity, and validity of the data they receive. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 113; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 170
Which of the following is a reason to use manual patch installation instead of automated patch management?
The cost required to install patches will be reduced.
The time during which systems will remain vulnerable to an exploit will be decreased.
The likelihood of system or application incompatibilities will be decreased.
The ability to cover large geographic areas is increased.
Manual patch installation allows for thorough testing before deployment to ensure that the patch does not introduce new vulnerabilities or incompatibilities. Automated patch management can sometimes lead to unexpected issues if patches are not fully compatible with all systems and applications. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 452; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 863.
The MAIN reason an organization conducts a security authorization process is to
force the organization to make conscious risk decisions.
assure the effectiveness of security controls.
assure the correct security organization exists.
force the organization to enlist management support.
The main reason an organization conducts a security authorization process is to force the organization to make conscious risk decisions. A security authorization process is a process that evaluates and approves the security of an information system or a product before it is deployed or used. A security authorization process involves three steps: security categorization, security assessment, and security authorization. Security categorization is the step of determining the impact level of the information system or product on the confidentiality, integrity, and availability of the information and assets. Security assessment is the step of testing and verifying the security controls and measures implemented on the information system or product. Security authorization is the step of granting or denying the permission to operate or use the information system or product based on the security assessment results and the risk acceptance criteria. The security authorization process forces the organization to make conscious risk decisions, as it requires the organization to identify, analyze, and evaluate the risks associated with the information system or product, and to decide whether to accept, reject, mitigate, or transfer the risks. The other options are not the main reasons, but rather the benefits or outcomes of a security authorization process. Assuring the effectiveness of security controls is a benefit of a security authorization process, as it provides an objective and independent evaluation of the security controls and measures. Assuring the correct security organization exists is an outcome of a security authorization process, as it establishes the roles and responsibilities of the security personnel and stakeholders. Forcing the organization to enlist management support is an outcome of a security authorization process, as it involves the management in the risk decision making and approval process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, p. 419; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, p. 150.
Which of the following is a function of Security Assertion Markup Language (SAML)?
File allocation
Redundancy check
Extended validation
Policy enforcement
A function of Security Assertion Markup Language (SAML) is policy enforcement. SAML is an XML-based standard for exchanging authentication and authorization information between different entities, such as service providers and identity providers. SAML enables policy enforcement by allowing the service provider to specify the security requirements and conditions for accessing its resources, and allowing the identity provider to assert the identity and attributes of the user who requests access. The other options are not functions of SAML, but rather different concepts or technologies. File allocation is the process of assigning disk space to files. Redundancy check is a method of detecting errors in data transmission or storage. Extended validation is a type of certificate that provides a higher level of assurance for the identity of the website owner. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, p. 283; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, p. 361.
The PRIMARY security concern for handheld devices is the
strength of the encryption algorithm.
spread of malware during synchronization.
ability to bypass the authentication mechanism.
strength of the Personal Identification Number (PIN).
The primary security concern for handheld devices is the spread of malware during synchronization. Handheld devices are often synchronized with other devices, such as desktops or laptops, to exchange data and update applications. This process can introduce malware from one device to another, or vice versa, if proper security controls are not in place.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 635; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 557
Which of the following is a weakness of Wired Equivalent Privacy (WEP)?
Length of Initialization Vector (IV)
Protection against message replay
Detection of message tampering
Built-in provision to rotate keys
According to the CISSP All-in-One Exam Guide, a weakness of Wired Equivalent Privacy (WEP) is the length of the Initialization Vector (IV). WEP is a security protocol that was designed to provide confidentiality and integrity for wireless networks, using the RC4 stream cipher to encrypt the data and the CRC-32 checksum to verify it. However, WEP has several flaws that make it vulnerable to attacks such as the IV attack, the key recovery attack, the bit-flipping attack, and the replay attack. One of these flaws is the length of the IV, which is only 24 bits. The IV space is therefore very small, and IVs are likely to repeat after a short period of time, especially on a busy network, allowing an attacker to capture enough IVs and ciphertexts to perform a statistical analysis and recover the encryption key. Protection against message replay, detection of message tampering, and a built-in provision to rotate keys are not weaknesses of WEP; they are protections that WEP simply fails to provide. References: CISSP All-in-One Exam Guide, Eighth Edition.
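To see why a 24-bit IV is a weakness, a back-of-the-envelope birthday-bound calculation shows how quickly IV reuse becomes likely; the frame counts below are illustrative.

```python
import math

IV_SPACE = 2 ** 24  # WEP's 24-bit IV allows only ~16.7 million values

def collision_probability(frames: int) -> float:
    """Approximate probability that at least one IV repeats after
    `frames` transmitted frames (birthday-bound approximation)."""
    return 1.0 - math.exp(-frames * (frames - 1) / (2.0 * IV_SPACE))

# A busy access point can transmit thousands of frames per second.
for frames in (1_000, 5_000, 10_000, 50_000):
    print(f"{frames:>6} frames -> P(IV reuse) ~ "
          f"{collision_probability(frames):.2%}")
```

With only a few tens of thousands of frames, IV reuse is nearly certain, which is exactly the repetition an attacker exploits to recover the key.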
What is the GREATEST challenge to identifying data leaks?
Available technical tools that enable user activity monitoring.
Documented asset classification policy and clear labeling of assets.
Senior management cooperation in investigating suspicious behavior.
Law enforcement participation to apprehend and interrogate suspects.
The greatest challenge to identifying data leaks is establishing and maintaining a documented asset classification policy and clear labeling of assets. Data leaks are the unauthorized or accidental disclosure or exposure of sensitive or confidential data, such as personal information, trade secrets, or intellectual property, and they can cause serious harm to the data owner, including reputation loss, legal liability, or competitive disadvantage. Detecting a leak presupposes that the organization knows which data is sensitive and can recognize it in motion; achieving that requires defined rules for categorizing and marking data according to its sensitivity, value, or criticality, which is difficult to do consistently across an organization. Without such classification and labeling, monitoring tools and analysts cannot reliably distinguish a leak of sensitive data from routine data movement. The other options are not challenges but enablers of identifying data leaks. Available technical tools that enable user activity monitoring provide the means for collecting, analyzing, and auditing user and device behavior. Senior management cooperation in investigating suspicious behavior provides the support and authority for conducting a data leak investigation. Law enforcement participation to apprehend and interrogate suspects provides assistance in pursuing and prosecuting the perpetrators. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 29; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 287.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. This approach provides several benefits: it can reveal malware or malicious behavior before the software ever reaches production, it verifies the functionality and compatibility of the software without risking live systems, and it confines the impact of any infection to the isolated test environment.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
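As a sketch of the hash-verification step described above, the following uses Python's standard hashlib; the file name and published digest are hypothetical.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical value: the digest the vendor published for this update.
PUBLISHED_DIGEST = (
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")

if sha256_of("update.bin") == PUBLISHED_DIGEST:
    print("Digest matches: the update arrived intact")
else:
    print("Digest mismatch: do not install")
```

Note that a matching digest proves only that the file is the one the publisher released; as the paragraph above points out, it does not prove the file is malware-free.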
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software whose source code is publicly available and can be viewed, modified, and distributed by anyone. Open source software has advantages, such as being affordable and flexible, but it also has disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
The common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, or Apache licenses, which impose few restrictions on reuse, and copyleft licenses, such as the GNU General Public License (GPL), which require that derivative works be distributed under the same license terms.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities: system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components: the class loader, which controls how classes are located and loaded; the bytecode verifier, which checks that code obeys the language's safety rules; the security manager and access controller, which enforce runtime permission checks; and the security policy, which grants permissions based on the code's source or signer.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly, and they can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment prevents exploitation of OS bugs because it verifies that the OS hosting the web application is patched against known, exploitable flaws, identifies missing or outdated patches before attackers can exploit them, and reduces the attack surface the web application inherits from its host environment.
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities: system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities: initiation, security certification, security accreditation, and continuous monitoring.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to them, using techniques and tools such as version control, change control, or configuration audits. Configuration management and control provides several benefits: it maintains the integrity and traceability of the system baseline, prevents unauthorized or undocumented changes, and supports analysis of the security impact of proposed changes.
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, outdated encryption and authentication mechanisms, loss of vendor support and security updates, and incompatibility with modern security controls.
Migrating to newer, supported applications where possible is the best approach because it eliminates the vulnerabilities inherent in the old code base, restores vendor support and patch availability, and enables the use of modern security features and standards.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
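For reference, every LCP packet begins with a one-byte Code, a one-byte Identifier, and a two-byte Length field (RFC 1661). The sketch below parses that fixed header from raw bytes; the sample frame is hypothetical.

```python
import struct

# LCP code values defined by RFC 1661 (subset shown).
LCP_CODES = {1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
             4: "Configure-Reject", 5: "Terminate-Request",
             6: "Terminate-Ack", 9: "Echo-Request", 10: "Echo-Reply"}

def parse_lcp(packet: bytes) -> dict:
    """Parse the 4-byte LCP header: Code (1), Identifier (1), Length (2)."""
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    return {"code": LCP_CODES.get(code, f"Unknown({code})"),
            "identifier": identifier,
            "length": length,
            "options": packet[4:length]}

# Hypothetical Configure-Request carrying one option (MRU = 1500).
sample = bytes([0x01, 0x2A, 0x00, 0x08, 0x01, 0x04, 0x05, 0xDC])
print(parse_lcp(sample))
```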
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability occurs when a web-based system does not properly check, filter, or sanitize the input data received from users or other sources, or does not properly handle the errors or exceptions generated by the system. Such a vulnerability can lead to various attacks, such as injection attacks (for example, SQL injection or command injection), cross-site scripting (XSS), buffer overflows, and denial-of-service conditions triggered by malformed input.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as filtering traffic based on application-specific rules, validating or sanitizing input data, blocking known attack signatures, and logging suspicious requests.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploit the vulnerability. For example, a new rule can be added to block requests containing SQL injection or script patterns, reject input that exceeds the expected length or format, or suppress detailed error messages returned to the client.
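A minimal sketch of such rules, assuming a simple pattern-matching filter placed in front of the application; the patterns shown are illustrative, not a complete rule set.

```python
import re

# Illustrative deny-list patterns for common injection payloads.
BLOCKED_PATTERNS = [
    re.compile(r"('|--|;)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # SQLi
    re.compile(r"<\s*script\b", re.IGNORECASE),                         # XSS
]

MAX_PARAM_LENGTH = 256  # reject oversized input aimed at buffer overflows

def allow_request(params: dict) -> bool:
    """Return False if any parameter matches a blocked pattern or
    exceeds the expected length; otherwise let the request through."""
    for value in params.values():
        if len(value) > MAX_PARAM_LENGTH:
            return False
        if any(p.search(value) for p in BLOCKED_PATTERNS):
            return False
    return True

print(allow_request({"user": "alice"}))        # True
print(allow_request({"user": "' OR 1=1 --"}))  # False
```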
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on MAC addresses. Logical network segmentation can provide several benefits, such as smaller broadcast domains, reduced congestion, containment of attacks and eavesdropping to a single segment, and easier enforcement of access policies between segments.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: the client sends a SYN segment to the server; the server responds with a SYN-ACK segment; and the client replies with an ACK segment, after which the connection is established and data transfer can begin.
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
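The handshake itself is carried out by the operating system's TCP stack; in the sketch below, using Python's standard socket module, the connect step triggers the SYN / SYN-ACK / ACK exchange, while the UDP send involves no handshake at all (host names and ports are illustrative).

```python
import socket

# create_connection() makes the OS perform the three-way handshake:
#   1. client sends SYN
#   2. server replies with SYN-ACK
#   3. client sends ACK -- the connection is now established
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    print("TCP handshake complete with", conn.getpeername())

# UDP, by contrast, is connectionless: sendto() transmits immediately,
# with no handshake and no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("example.com", 9))  # discard port; illustrative only
udp.close()
```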
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior and to identify any outliers or indicators of compromise. NBA tools can provide several benefits, such as detecting zero-day or signature-less attacks, identifying slow or stealthy activity that rule-based tools miss, and providing visibility into lateral movement and data exfiltration inside the network.
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
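A minimal sketch of the statistical baselining such tools perform, assuming per-interval connection counts as the monitored metric; the sample data and threshold are illustrative.

```python
from statistics import mean, stdev

# Connections per minute observed during a normal training window.
baseline = [120, 131, 118, 125, 122, 129, 117, 124, 126, 121]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: int, z_threshold: float = 3.0) -> bool:
    """Flag any interval whose count deviates more than z_threshold
    standard deviations from the learned baseline."""
    return abs(observed - mu) > z_threshold * sigma

for count in (123, 128, 410):  # 410 might indicate a scan or exfiltration
    print(count, "anomalous" if is_anomalous(count) else "normal")
```

Because the detection is based on deviation from learned behavior rather than known signatures, this approach can flag attacks that have never been seen before, which is exactly the property the answer relies on.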
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
The factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol is that WEP uses a small range Initialization Vector (IV). WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities, such as the small 24-bit IV space, which causes keystream reuse on busy networks; the weak RC4 key scheduling, which leaks information about the key; the linear CRC-32 checksum, which allows undetected bit-flipping; and the static shared key, which is rarely changed in practice.
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as bypassing IP address-based authentication or filtering, hiding the attacker's identity during denial-of-service attacks, and hijacking or injecting traffic into sessions between trusted hosts.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
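For lab illustration only, a minimal sketch of how a forged source address is set, assuming the third-party scapy library and root privileges; all addresses are from reserved documentation ranges, and sending spoofed packets on networks you do not own is illegal.

```python
# Lab illustration only. Requires scapy (pip install scapy) and root
# privileges, since forging the IP header needs raw socket access.
from scapy.all import IP, ICMP, send

spoofed = IP(src="203.0.113.7",   # forged "trusted" source address
             dst="192.0.2.10")    # target
send(spoofed / ICMP())            # any reply goes to the forged source
```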
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
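A minimal sketch of the rule evaluation a network layer packet filter performs, reduced to a simplified first-match rule table; the rules and addresses are illustrative.

```python
import ipaddress

# First matching rule wins; anything unmatched is denied (illustrative).
RULES = [
    # (source network,                    protocol, dest port, action)
    (ipaddress.ip_network("10.0.0.0/8"),   "tcp",    443,      "allow"),
    (ipaddress.ip_network("192.0.2.0/24"), "any",    None,     "deny"),
]

def filter_packet(src_ip: str, proto: str, dst_port: int) -> str:
    """Evaluate a packet's header fields against the rule table."""
    src = ipaddress.ip_address(src_ip)
    for network, rule_proto, rule_port, action in RULES:
        if (src in network
                and rule_proto in ("any", proto)
                and rule_port in (None, dst_port)):
            return action
    return "deny"  # implicit deny-all, as in most firewall designs

print(filter_packet("10.1.2.3", "tcp", 443))  # allow
print(filter_packet("192.0.2.9", "udp", 53))  # deny
```

Note that the filter sees only header fields such as addresses, protocol, and port; it never inspects the payload, which is the limitation the paragraph above describes.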
Why is it important that senior management clearly communicates the formal Maximum Tolerable Downtime (MTD) decision?
To provide each manager with precise direction on selecting an appropriate recovery alternative
To demonstrate to the regulatory bodies that the company takes business continuity seriously
To demonstrate to the board of directors that senior management is committed to continuity recovery efforts
To provide a formal declaration from senior management as required by internal audit to demonstrate sound business practices
The reason it is important that senior management clearly communicates the formal Maximum Tolerable Downtime (MTD) decision is to provide each manager with precise direction on selecting an appropriate recovery alternative. MTD is a metric that defines the maximum amount of time a system or process can be unavailable or disrupted before causing unacceptable consequences or losses to the organization. Senior management determines the MTD based on the business impact analysis, the risk assessment, and the organizational objectives and policies, and communicates it to each manager as part of the disaster recovery plan (DRP), the plan that defines the procedures and actions to restore critical systems or processes after a disaster or disruption. The MTD tells each manager which recovery alternative is appropriate: if the MTD for a system is 24 hours, the manager may select a backup site or cloud service that can be activated within 24 hours; if the MTD is 4 hours, the manager may select a redundant or mirrored site that can be switched over within 4 hours. The other options are possible benefits or outcomes of communicating the formal MTD decision, not the reason for doing so. Communicating the decision may demonstrate to regulatory bodies that the company takes business continuity seriously, may demonstrate to the board of directors that senior management is committed to continuity recovery efforts, and may provide the formal declaration internal audit requires as evidence of sound business practices, but none of these relates to the selection of an appropriate recovery alternative. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 19: Security Operations, page 1870.
At which phase of the software assurance life cycle should risks associated with software acquisition strategies be identified?
Follow-on phase
Planning phase
Monitoring and acceptance phase
Contracting phase
The planning phase of the software assurance life cycle is the stage where the objectives, requirements, scope, and constraints of the software project are defined and analyzed. This is the phase where the risks associated with software acquisition strategies should be identified, as this will help to select the most appropriate and secure software solution, as well as to plan for the mitigation and management of the risks. The follow-on phase is the stage where the software product is maintained and updated after its deployment. The monitoring and acceptance phase is the stage where the software product is tested and verified against the requirements and specifications. The contracting phase is the stage where the software product is procured and delivered by the vendor or supplier.
Which of the following practices provides the development team with a definition of security and identification of threats in designing software?
Penetration testing
Stakeholder review
Threat modeling
Requirements review
Threat modeling is a practice that provides the development team with a definition of security and identification of threats in designing software. Threat modeling is a process of analyzing the software system or application from the perspective of an attacker, and identifying the potential threats, vulnerabilities, and risks that may affect the security of the software system or application. Threat modeling can help to improve the security awareness and mindset of the development team, as well as to guide the security design and implementation decisions of the software system or application. Penetration testing, stakeholder review, or requirements review are not the best practices to provide the development team with a definition of security and identification of threats in designing software, as they are more related to the testing, evaluation, or specification aspects of software development. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, page 1165; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.7, page 303.
Which of the following types of devices can provide content filtering and threat protection, and manage multiple IPSec site-to-site connections?
Layer 3 switch
VPN headend
Next-generation firewall
Proxy server
Intrusion prevention
A next-generation firewall (NGFW) is a type of device that can provide content filtering and threat protection, and manage multiple IPSec site-to-site connections. A NGFW can inspect and block malicious or unwanted traffic at the application, user, or content level. A NGFW can also establish and maintain secure tunnels between different networks using IPSec, a protocol suite that encrypts and authenticates data packets. A NGFW is therefore the best option to provide content filtering, threat protection, and IPSec site-to-site connections in a single device; the other options either are not devices of this kind or do not provide all of the required functionality. References: CISSP Exam Outline, Domain 4: Communication and Network Security, 4.2 Secure network components, 4.2.1 Establish secure communication channels.
A Distributed Denial of Service (DDoS) attack was carried out using malware called Mirai to create a large-scale command and control system to launch a botnet. Which of the following devices were the PRIMARY sources used to generate the attack traffic?
Internet of Things (IoT) devices
Microsoft Windows hosts
Web servers running open source operating systems (OS)
Mobile devices running Android
The primary sources used to generate the attack traffic in the Mirai DDoS attack were IoT devices. A DDoS attack aims to disrupt or degrade the availability or performance of a system or service by overwhelming it with a large volume of traffic or requests from multiple sources, causing it to slow down, crash, or become inaccessible to legitimate users. Mirai is malware that infects and hijacks IoT devices, such as cameras, routers, and printers, and turns them into a botnet: a network of compromised devices controlled by a central command and control server. Mirai scans the internet for IoT devices that use default or weak credentials and infects them with code that allows the attacker to control them remotely. In 2016, Mirai was used to launch a massive DDoS attack that disrupted several high-profile websites and services, including Twitter and Netflix, and caused widespread internet disruption. IoT devices were the primary traffic sources because Mirai was built specifically to compromise poorly secured, always-on, internet-connected devices, and the hundreds of thousands of devices it enslaved generated the flood of traffic that overwhelmed the targets.
Which of the following technologies can be used to monitor and dynamically respond to potential threats on web applications?
Security Assertion Markup Language (SAML)
Web application vulnerability scanners
Runtime application self-protection (RASP)
Field-level tokenization
Runtime application self-protection (RASP) is a technology that can be used to monitor and dynamically respond to potential threats on web applications. RASP is a software component that is integrated into the web application or the runtime environment, and it analyzes the behavior and the context of the application and the requests. RASP can detect and prevent attacks such as SQL injection, cross-site scripting, or buffer overflow, by blocking or modifying the malicious requests or responses. RASP can also provide alerts and logs for the security team or the developers. The other options are not correct. Security Assertion Markup Language (SAML) is a standard that enables single sign-on (SSO) and federated identity management for web applications, but it does not monitor or respond to threats. Web application vulnerability scanners are tools that scan web applications for common vulnerabilities and misconfigurations, but they do not provide real-time protection or response. Field-level tokenization is a technique that replaces sensitive data fields with random tokens, and it can reduce the exposure or the impact of a data breach, but it does not monitor or respond to threats. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Architecture and Engineering, page 512. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Security Architecture and Engineering, page 513.
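To make the idea concrete, here is a heavily simplified Python sketch of RASP-style instrumentation: a decorator wrapping a request handler that blocks inputs matching a naive injection signature. The pattern and handler are illustrative inventions; real RASP products hook the language runtime itself and use far richer, context-aware analysis than a single regex.

```python
import re
from functools import wraps

# Naive injection signature, purely for illustration; real RASP engines hook
# the runtime and use context-aware analysis, not a single regex.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

def rasp_guard(handler):
    """Block requests whose string parameters match the injection signature."""
    @wraps(handler)
    def wrapped(params: dict):
        for value in params.values():
            if isinstance(value, str) and SQLI_PATTERN.search(value):
                # Reject the request and raise an alert instead of calling the handler.
                return {"status": 403, "body": "blocked by RASP guard"}
        return handler(params)
    return wrapped

@rasp_guard
def lookup_user(params: dict):
    # Hypothetical request handler for a web application endpoint.
    return {"status": 200, "body": f"user={params.get('name')}"}

print(lookup_user({"name": "alice"}))        # passes through
print(lookup_user({"name": "' OR 1=1 --"}))  # blocked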
A large human resources organization wants to integrate their identity management with a trusted partner organization. The human resources organization wants to maintain the creation and management of the identities and may want to share with other partners in the future. Which of the following options BEST serves their needs?
Federated identity
Cloud Active Directory (AD)
Security Assertion Markup Language (SAML)
Single sign-on (SSO)
Federated identity is a mechanism that allows users to use a single identity across multiple systems or organizations, without requiring the creation or management of separate accounts for each system or organization. Federated identity relies on trust relationships between the identity providers (IdPs) and the service providers (SPs) that participate in the federation. The IdPs are responsible for authenticating the users and issuing security tokens that contain identity attributes or claims. The SPs are responsible for validating the security tokens and granting access to the users based on the identity attributes or claims. Federated identity enables users to have a seamless and consistent user experience, while reducing the administrative overhead and security risks associated with multiple accounts. Federated identity also supports the principle of data minimization, as the IdPs only share the necessary identity attributes or claims with the SPs, and the SPs do not store any user identity information. Federated identity is often implemented using standards such as Security Assertion Markup Language (SAML), OpenID Connect, or OAuth. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 295. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management (IAM), page 609.
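The token exchange at the heart of federation can be sketched with PyJWT (an assumed dependency): the identity provider signs a set of claims, and the service provider validates the signature and trusts the claims without ever managing the account itself. A shared HMAC secret is used here only for brevity; real federations use the IdP's asymmetric signing key, published in its metadata, and the claim values below are invented.

```python
import jwt  # PyJWT; illustrates the signed-assertion exchange in a federation

SHARED_TRUST_KEY = "demo-secret"  # placeholder; real IdPs sign with an asymmetric key

# Identity provider (the HR organization) issues a signed assertion of identity claims.
assertion = jwt.encode(
    {"sub": "employee-4711", "role": "recruiter", "iss": "hr-org"},
    SHARED_TRUST_KEY,
    algorithm="HS256",
)

# Service provider (the partner) validates the signature and trusts the claims;
# it never stores credentials or manages the account itself.
claims = jwt.decode(assertion, SHARED_TRUST_KEY, algorithms=["HS256"])
print(claims["sub"], claims["role"])
```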
Which of the following is applicable to a publicly held company concerned about information handling and storage requirements specific to financial reporting?
Privacy Act of 1974
Clinger-Cohen Act of 1996
Sarbanes-Oxley (SOX) Act of 2002
International Organization for Standardization (ISO) 27001
The Sarbanes-Oxley (SOX) Act of 2002 is applicable to a publicly held company concerned about information handling and storage requirements specific to the financial reporting. SOX is a federal law that aims to protect investors from fraudulent accounting activities by corporations. SOX requires public companies to establish and maintain internal controls over their financial reporting processes, and to have their financial statements audited by an independent auditor. SOX also mandates that public companies retain their financial records and related audit documents for at least five years, and that they implement proper security measures to protect the confidentiality, integrity, and availability of their financial information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 19. CISSP Practice Exam | Boson, Question 8.
Which of the following technologies would provide the BEST alternative to anti-malware software?
Host-based Intrusion Detection Systems (HIDS)
Application whitelisting
Host-based firewalls
Application sandboxing
The technology that would provide the best alternative to anti-malware software is application whitelisting. Anti-malware software detects, prevents, and removes malware such as viruses, worms, trojans, ransomware, and spyware, typically by comparing files or processes against a database of known malware signatures and blocking or deleting whatever matches. Signature-based detection has well-known limitations: it cannot detect new or unknown malware, it can be evaded or tampered with, it consumes system resources and bandwidth, and it requires frequent updates and maintenance. Application whitelisting inverts the model: only authorized or trusted applications are allowed to run on a computer or network, and everything else is blocked by default. This prevents malware from executing regardless of whether it is known or unknown, and regardless of the evasion techniques it uses, while also reducing system overhead and network traffic. Whitelisting has its own challenges and risks, such as the effort of implementation and management, incompatibility with some applications or systems, and susceptibility to bypass or exploitation methods, but it remains the strongest alternative. Host-based Intrusion Detection Systems (HIDS), host-based firewalls, and application sandboxing are not the best alternatives, as they respectively detect rather than prevent malicious activity, control network traffic rather than code execution, and merely contain rather than block untrusted applications.
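A minimal sketch of the deny-by-default idea, assuming a hash-based allowlist; the digest shown is a placeholder (the SHA-256 of an empty file), and real products such as Windows AppLocker also evaluate publisher signatures and file paths.

```python
import hashlib
from pathlib import Path

# Placeholder allowlist keyed by SHA-256 digest; the entry below is the hash
# of an empty file, standing in for a real approved binary.
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(executable: Path) -> bool:
    """Deny by default: permit execution only if the file's hash is allowlisted."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in ALLOWED_HASHES
```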
Which of the following was developed to support multiple protocols as well as provide login, password, and error correction capabilities?
Challenge Handshake Authentication Protocol (CHAP)
Point-to-Point Protocol (PPP)
Password Authentication Protocol (PAP)
Post Office Protocol (POP)
Point-to-Point Protocol (PPP) is the protocol that was developed to support multiple protocols as well as provide login, password, and error correction capabilities. PPP is a data link layer protocol that is used to establish a direct connection between two nodes over a serial link, such as a phone line, cable, or fiber. PPP can support multiple network layer protocols, such as IP, IPX, or AppleTalk, by using the Network Control Protocol (NCP) for each protocol. PPP can also provide authentication, encryption, and compression features, by using the Link Control Protocol (LCP) and its extensions, such as Password Authentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP), or Microsoft Challenge Handshake Authentication Protocol (MS-CHAP). PPP can also detect and correct errors on the link, by using the Frame Check Sequence (FCS) field in the PPP frame. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 177; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 251]
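For illustration, the FCS mentioned above can be computed with the reflected CRC-16 algorithm that RFC 1662 specifies for PPP; the sample frame bytes below are invented.

```python
def ppp_fcs16(data: bytes) -> int:
    """FCS-16 as specified for PPP in RFC 1662 (reflected CRC-16, polynomial 0x8408)."""
    fcs = 0xFFFF                                  # initial value per RFC 1662
    for byte in data:
        fcs ^= byte
        for _ in range(8):                        # process one bit at a time
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF                           # ones-complement the result

# Invented sample: address, control, and protocol fields of a PPP frame.
frame = bytes([0xFF, 0x03, 0x00, 0x21])
print(f"FCS: 0x{ppp_fcs16(frame):04X}")
```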
The personal laptop of an organization executive is stolen from the office, complete with personnel and project records. Which of the following should be done FIRST to mitigate future occurrences?
Encrypt disks on personal laptops.
Issue cable locks for use on personal laptops.
Create policies addressing critical information on personal laptops.
Monitor personal laptops for critical information.
The first step to mitigate future occurrences of personal laptops being stolen from the office with critical information is to create policies addressing this issue. Policies are high-level statements that define the goals and objectives of an organization and provide guidance for decision making. Policies can specify the roles and responsibilities of the users, the acceptable use of personal laptops, the security controls and requirements for protecting critical information, the reporting and response procedures in case of theft or loss, and the sanctions for non-compliance. The other options are possible actions to implement the policies, but they are not the first step. Encrypting disks, issuing cable locks, and monitoring personal laptops are examples of technical, physical, and administrative controls, respectively, that can help prevent or detect unauthorized access to critical information on personal laptops. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 51-52; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 29-30.
Which of the following protocols will allow the encrypted transfer of content on the Internet?
Server Message Block (SMB)
Secure copy
Hypertext Transfer Protocol (HTTP)
Remote copy
Secure copy (SCP) is a protocol that allows the encrypted transfer of content on the Internet. SCP uses Secure Shell (SSH) to provide authentication and encryption for the data transfer. SCP can be used to copy files between local and remote hosts, or between two remote hosts.
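A minimal usage sketch, assuming the OpenSSH scp client is installed; the hostname, username, and paths are invented placeholders.

```python
import subprocess

# Copy a local file to a remote host over an SSH-encrypted channel using the
# OpenSSH scp client. Hostname, username, and paths are illustrative only.
result = subprocess.run(
    ["scp", "report.pdf", "alice@fileserver.example.com:/home/alice/reports/"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # scp prints errors to stderr; surface them for troubleshooting.
    raise RuntimeError(f"transfer failed: {result.stderr.strip()}")
print("file transferred over an SSH-encrypted channel")
```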
What type of risk is related to the sequences of value-adding and managerial activities undertaken in an organization?
Demand risk
Process risk
Control risk
Supply risk
The type of risk that is related to the sequences of value-adding and managerial activities undertaken in an organization is process risk. Process risk is the risk that arises from the inefficiency, inadequacy, or failure of the processes that are performed by the organization to achieve its objectives and deliver its products or services. Process risk can affect the quality, performance, and security of the organization’s outputs and outcomes, and can result in financial, operational, or reputational losses or damages. Process risk can be caused by various factors, such as human errors, system errors, design flaws, compliance issues, or external events. Process risk can be managed by implementing and monitoring the process controls, such as policies, procedures, standards, or metrics, that ensure the effectiveness, efficiency, and reliability of the processes. References: CISSP CBK, Fifth Edition, Chapter 2, page 122; 2024 Pass4itsure CISSP Dumps, Question 8.
Which of the following is the FIRST requirement a data owner should consider before implementing a data retention policy?
Training
Legal
Business
Storage
The first requirement a data owner should consider before implementing a data retention policy is the legal requirement. A data retention policy is a document that defines the rules and procedures for retaining, storing, and disposing of data, based on its type, value, and purpose. A data owner is a person or an entity that has the authority and responsibility for the creation, classification, and management of data. A data owner should consider the legal requirement before implementing a data retention policy, as there may be laws, regulations, or contracts that mandate the minimum or maximum retention periods for certain types of data, as well as the methods and standards for data preservation and destruction. A data owner should also consider the business, storage, and training requirements for implementing a data retention policy, but these are not the first or the most important factors to consider.
A cybersecurity engineer has been tasked to research and implement an ultra-secure communications channel to protect the organization's most valuable intellectual property (IP). The primary directive in this initiative is to ensure there is no possible way the communications can be intercepted without detection. Which of the following is the only way to ensure this outcome?
Diffie-Hellman key exchange
Symmetric key cryptography
Public key infrastructure (PKI)
Quantum Key Distribution
The only way to ensure an ultra-secure communications channel that cannot be intercepted without detection is to use Quantum Key Distribution (QKD). QKD is a technique that uses the principles of quantum mechanics to generate and exchange cryptographic keys between two parties. QKD relies on the properties of quantum particles, such as photons, to encode and transmit the keys. Because observing a quantum system disturbs it, any attempt to intercept the key exchange alters the transmitted quantum states in a way the communicating parties can detect, which is exactly the guarantee the scenario demands. Diffie-Hellman key exchange, symmetric key cryptography, and public key infrastructure (PKI) all rest on computational hardness assumptions; they can make interception difficult, but none of them can reveal that an interception has occurred.
Which of the following determines how traffic should flow based on the status of the infrastructure layer?
Traffic plane
Application plane
Data plane
Control plane
The control plane is the part of a network that determines how traffic should flow based on the status of the infrastructure layer. The control plane is responsible for the configuration and management of the network devices, such as routers, switches, or firewalls, and the routing protocols, such as OSPF, BGP, or RIP, that control the path selection and forwarding of the network traffic. The control plane communicates with the data plane and the management plane to ensure the optimal and secure operation of the network. The data plane is the part of a network that carries the user or application data from the source to the destination. The data plane is responsible for the processing and forwarding of the network packets, such as IP, TCP, or UDP, that encapsulate the data. The data plane communicates with the control plane to receive the routing and forwarding instructions. The management plane is the part of a network that monitors and controls the network devices and their performance. The management plane is responsible for the administration and maintenance of the network devices, such as configuration, backup, update, or troubleshooting, and the network services, such as SNMP, SSH, or Telnet, that enable the remote access and management of the network devices. The management plane communicates with the control plane and the data plane to collect and analyze the network information and statistics. The traffic plane is not a part of a network, but rather a term that refers to the network traffic itself, or the data that flows through the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 252.
Security Software Development Life Cycle (SDLC) expects application code to be written in a consistent manner to allow ease of auditing and which of the following?
Protecting
Executing
Copying
Enhancing
The Security Software Development Life Cycle (SDLC) is a framework that guides the development of secure software applications. It integrates security principles and practices throughout the entire software development process, from planning and analysis to design, implementation, testing, deployment, and maintenance. One of the expectations of the Security SDLC is that the application code should be written in a consistent manner to allow ease of auditing and enhancing. Auditing is the process of reviewing and verifying the code for compliance, quality, and security. Enhancing is the process of improving and modifying the code to meet changing requirements, fix bugs, or add new features. Writing code in a consistent manner helps to facilitate auditing and enhancing by making the code more readable, understandable, and maintainable. References: CISSP - Certified Information Systems Security Professional, Domain 8. Software Development Security, 8.1 Understand and integrate security in the Software Development Life Cycle (SDLC), 8.1.1 Identify and apply security controls in development environments, 8.1.1.2 Security of the software environments; CISSP Exam Outline, Domain 8. Software Development Security, 8.1 Understand and integrate security in the Software Development Life Cycle (SDLC), 8.1.1 Identify and apply security controls in development environments, 8.1.1.2 Security of the software environments
Which of the following is required to verify the authenticity of a digitally signed document?
Digital hash of the signed document
Sender's private key
Recipient's public key
Agreed upon shared secret
A digital signature is a cryptographic technique that provides integrity, authenticity, and non-repudiation for a document. A digital signature is created by applying a hash function to the document and then encrypting the hash value with the sender’s private key. To verify the authenticity of a digitally signed document, the recipient needs to decrypt the signature with the sender’s public key, which can be obtained from a trusted source, such as a digital certificate. The recipient also needs to apply the same hash function to the document and compare the resulting hash value with the decrypted signature. If they match, the document is authentic and has not been altered. The digital hash of the signed document, the sender’s private key, and the agreed upon shared secret are not required for verification, and may not be available or secure. References: CISSP Official Study Guide, 9th Edition, page 91; CISSP All-in-One Exam Guide, 8th Edition, page 103
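The sign-then-verify flow can be sketched with Python's cryptography package (an assumed dependency); the freshly generated key pair below stands in for the sender's certificate-backed keys, and the document text is invented.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Hypothetical sender key pair; in practice the public key would come from
# the sender's certificate, issued by a trusted CA.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Quarterly financial statement"

# Sign: the document is hashed and the hash is encrypted with the sender's PRIVATE key.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verify: the recipient uses the sender's PUBLIC key; this call raises
# InvalidSignature if the document or signature has been altered.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature valid: document is authentic and unaltered")
```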
Which of the following would qualify as an exception to the "right to be forgotten" under the General Data Protection Regulation (GDPR)?
For the establishment, exercise, or defense of legal claims
The personal data has been lawfully processed and collected
The personal data remains necessary to the purpose for which it was collected
For the reasons of private interest
The right to be forgotten is a principle of the GDPR that grants data subjects the right to request the erasure of their personal data from a data controller under certain conditions. However, there are some exceptions to this right, where the data controller can refuse to erase the personal data if it is necessary for a legitimate purpose. One of the exceptions is for the establishment, exercise, or defense of legal claims, where the personal data is required for the data controller to assert, pursue, or protect its legal rights or obligations. The other options are not valid exceptions to the right to be forgotten. The personal data has been lawfully processed and collected is not an exception, as the data subject can still request the erasure of their personal data if they withdraw their consent, object to the processing, or the data is no longer necessary for the original purpose. The personal data remains necessary to the purpose for which it was collected is not an exception, as the data subject can still request the erasure of their personal data if the purpose is incompatible with their interests, rights, or freedoms. For the reasons of private interest is not an exception, as the data controller cannot override the data subject’s right to be forgotten based on its own personal or commercial interests. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 61.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. A lack of network redundancies affects the availability and reliability of the network, but not necessarily the confidentiality of the data. Incomplete security awareness training increases the likelihood of human error or negligence that could compromise the data, but less directly than unencrypted backup tapes. Users holding administrative privileges gain broad access and control over the system and its data, but the exposure is narrower than that of unencrypted tapes leaving the organization's custody.
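As a sketch of the encrypt-before-writing-to-tape idea, the snippet below uses Fernet (authenticated symmetric encryption) from Python's cryptography package, an assumed dependency; the file names are placeholders, and in practice the key would live in a KMS or HSM, separate from the tapes.

```python
from cryptography.fernet import Fernet

# Encrypt a backup image before it is written to tape. The critical part is
# key management: the key must be stored and protected separately from the tapes.
key = Fernet.generate_key()          # in practice, generated and held in a KMS or HSM
cipher = Fernet(key)

with open("backup.img", "rb") as f:  # hypothetical backup file
    ciphertext = cipher.encrypt(f.read())

with open("backup.img.enc", "wb") as f:
    f.write(ciphertext)              # only the ciphertext ever reaches the tape
```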
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Planning (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should therefore include questions that determine the technological dependence of the business processes and that identify the operational and financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
The other options would not cause a DoS against an authentication system; they are practices that affect the security of the audit logs themselves rather than the availability of the system. Encryption of audit logs uses a cryptographic algorithm and key to render the logs unreadable to anyone but authorized parties, protecting their confidentiality; it has no bearing on the availability or performance of the authentication system. No archiving of audit logs means the logs are never transferred to separate or external storage, such as tape, disk, or cloud; this increases the risk of losing or damaging the logs, but again does not degrade the authentication system itself. Hashing of audit logs uses a hash function, such as MD5 or SHA-2, to generate a fixed-length digest that can verify the authenticity and consistency of the logs and detect any modification or tampering; it protects the integrity of the logs, not the availability of the authentication system.
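To illustrate how hashing protects log integrity without affecting availability, here is a toy hash chain in Python: each digest covers the entry plus the previous digest, so altering or deleting any earlier entry invalidates every digest after it. The log entries are invented.

```python
import hashlib

def chain_log_entries(entries: list[str]) -> list[str]:
    """Build a simple hash chain over log entries: each digest covers the entry
    plus the previous digest, so tampering with any entry breaks the chain."""
    digests = []
    previous = "0" * 64  # genesis value for the first link
    for entry in entries:
        digest = hashlib.sha256((previous + entry).encode()).hexdigest()
        digests.append(digest)
        previous = digest
    return digests

log = ["10:01 login alice", "10:05 read payroll.xlsx", "10:07 logout alice"]
print(chain_log_entries(log))
```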
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying and prioritizing vulnerabilities, validating the effectiveness of existing controls, and demonstrating compliance with security requirements and standards.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report typically includes components such as an executive summary, an introduction, the testing methodology, the results of each test phase, a conclusion with recommendations, and supporting appendices.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are secondary benefits tied to particular audiences or components of the report rather than to the report as a whole. Executive audiences understanding the outcomes and the most appropriate next steps reflects the executive summary, a brief, high-level overview rather than the entire report. Technical teams understanding the testing objectives, the strategies applied, and the business risk of each vulnerability reflects the methodology and results components, the more technical and detailed parts of the report. Management teams understanding the objectives and the reputational risk reflects the introduction and conclusion, the contextual and strategic parts of the report. Only the formalized format and structure as a whole serves both technical and management teams across every phase of the testing.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as better utilization of physical hardware, strong isolation between workloads, and easier provisioning, snapshotting, and recovery of systems.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what the administrator must review for this purpose. Host VM monitor audit logs record the events and activities of the hypervisor that manages and controls the VMs on the physical machine; they can reveal the performance, status, and configuration of the VMs, but not a user’s access to data files inside a guest. Guest OS access controls and host VM access controls are preventive mechanisms that regulate and restrict what users and processes may do on the guest OS and on the physical machine respectively; the administrator configures and implements them to protect the data files and the VMs, but they are controls to be enforced, not records to be reviewed for auditing.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as a consistent, hardened starting point for new deployments, a reference against which unauthorized or accidental configuration changes can be detected, and a repeatable, auditable standard for configuration management.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options assist with other aspects of system management rather than with reviewing configurations. Change management processes ensure that changes to system configurations are planned, approved, implemented, and documented in a controlled and consistent manner; they govern how changes are made, but do not define the desired, secure state of the configurations. User administration procedures define the roles, responsibilities, and activities for creating, modifying, deleting, and managing user accounts and access rights, enforcing least privilege and separation of duties, but they likewise do not describe the secure configuration state. System backup documentation records the details of the backup processes, such as frequency, type, location, retention, and recovery; it supports availability and resilience, but it describes how configurations are backed up and restored, not what they should be.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as visibility into how well the security controls are performing, evidence for compliance and audit purposes, and early warning of a degrading security posture.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are programs with narrower objectives and scopes. Quarterly access reviews verify on a quarterly basis that user accounts and access rights remain valid, authorized, and current, and that inactive, expired, or unauthorized accounts are removed; they focus on access rights rather than on the ongoing security status, events, and activities of the system. Business continuity testing validates the business continuity plan (BCP) and disaster recovery plan (DRP), ensuring that the system can continue or resume its critical functions after a disruption; its focus is continuity and recovery. Annual security training updates the security knowledge and skills of users and staff, reducing human error; its focus is education rather than the continuous collection and analysis of security process data.
Which of the following are important criteria when designing procedures and acceptance criteria for acquired software?
Code quality, security, and origin
Architecture, hardware, and firmware
Data quality, provenance, and scaling
Distributed, agile, and bench testing
Code quality, security, and origin are important criteria when designing procedures and acceptance criteria for acquired software. Code quality refers to the degree to which the software meets the functional and nonfunctional requirements, as well as the standards and best practices for coding. Security refers to the degree to which the software protects the confidentiality, integrity, and availability of the data and the system. Origin refers to the source and ownership of the software, as well as the licensing and warranty terms. Architecture, hardware, and firmware are not criteria for acquired software, but for the system that hosts the software. Data quality, provenance, and scaling are not criteria for acquired software, but for the data that the software processes. Distributed, agile, and bench testing are not criteria for acquired software, but for the software development and testing methods. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 947; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 869.
A control to protect from a Denial-of-Service (DoS) attack has been determined to stop 50% of attacks, and additionally reduces the impact of an attack by 50%. What is the residual risk?
25%
50%
75%
100%
The residual risk is 25% in this scenario. Residual risk is the portion of risk that remains after security measures have been applied to mitigate the risk. Residual risk can be calculated by subtracting the risk reduction from the total risk. In this scenario, the total risk is 100%, and the risk reduction is 75%. The risk reduction is 75% because the control stops 50% of attacks, and reduces the impact of an attack by 50%. Therefore, the residual risk is 100% - 75% = 25%. Alternatively, the residual risk can be calculated by multiplying the probability and the impact of the remaining risk. In this scenario, the probability of an attack is 50%, and the impact of an attack is 50%. Therefore, the residual risk is 50% x 50% = 25%. 50%, 75%, and 100% are not the correct answers to the question, as they do not reflect the correct calculation of the residual risk.
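The same calculation, written out as a short sketch:

```python
# Worked version of the calculation above.
attack_probability_remaining = 1 - 0.50   # the control stops 50% of attacks
impact_remaining = 1 - 0.50               # the control halves an attack's impact

residual_risk = attack_probability_remaining * impact_remaining
print(f"residual risk: {residual_risk:.0%}")  # prints: residual risk: 25%
```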
Which of the following combinations would MOST negatively affect availability?
Denial of Service (DoS) attacks and outdated hardware
Unauthorized transactions and outdated hardware
Fire and accidental changes to data
Unauthorized transactions and denial of service attacks
The combination that would most negatively affect availability is denial of service (DoS) attacks and outdated hardware. Availability is the property or the condition of a system or a network to be accessible and usable by the authorized users or customers, whenever and wherever they need it. Availability can be measured by various metrics, such as uptime, downtime, response time, or reliability. Availability can be affected by various factors, such as hardware, software, network, human, or environmental factors. Denial of service (DoS) attacks and outdated hardware both degrade availability directly: a DoS attack floods or overloads a system or network with illegitimate traffic or requests so that legitimate users cannot reach it, while outdated hardware fails more often, offers less capacity and performance, and may no longer be supported or replaceable.
The combination of denial of service (DoS) attacks and outdated hardware would most negatively affect availability because the two have a cumulative, mutually reinforcing effect. DoS attacks can exploit the vulnerabilities and weaknesses of outdated hardware to cause more damage and disruption, while outdated hardware prolongs both the system's susceptibility to DoS attacks and its recovery from them, and undermines its resilience and mitigation capabilities. The other combinations are less damaging to availability. Unauthorized transactions and outdated hardware primarily threaten the confidentiality and the integrity of the data rather than the availability of the system or the network.
Fire and accidental changes to data primarily threaten the availability and the integrity of the data, rather than the sustained availability of the system or the network as a whole.
Unauthorized transactions and denial of service attacks threaten the confidentiality of the data and, through the DoS component, the availability of the system; but without the compounding weakness of outdated hardware, this combination does not degrade availability as severely.
An Information Technology (IT) professional attends a cybersecurity seminar on current incident response methodologies.
What code of ethics canon is being observed?
Provide diligent and competent service to principals
Protect society, the commonwealth, and the infrastructure
Advance and protect the profession
Act honorably, honestly, justly, responsibly, and legally
Attending a cybersecurity seminar to learn about current incident response methodologies aligns with the ethical canon of advancing and protecting the profession. It involves enhancing one’s knowledge and skills, contributing to the growth and integrity of the field, and staying abreast of the latest developments and best practices in information security. References: ISC² Code of Ethics
After following the processes defined within the change management plan, a super user has upgraded a
device within an Information system.
What step would be taken to ensure that the upgrade did NOT affect the network security posture?
Conduct an Assessment and Authorization (A&A)
Conduct a security impact analysis
Review the results of the most recent vulnerability scan
Conduct a gap analysis with the baseline configuration
A security impact analysis is a process of assessing the potential effects of a change on the security posture of a system. It helps to identify and mitigate any security risks that may arise from the change, such as new vulnerabilities, configuration errors, or compliance issues. A security impact analysis should be conducted after following the change management plan and before implementing the change in the production environment. Conducting an A&A, reviewing the results of a vulnerability scan, or conducting a gap analysis with the baseline configuration are also possible steps to ensure the security of a system, but they are not specific to the change management process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 961; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 8: Security Operations, page 1013.
In an organization where Network Access Control (NAC) has been deployed, a device trying to connect to the network is being placed into an isolated domain. What could be done on this device in order to obtain proper
connectivity?
Connect the device to another network jack
Apply remediations according to security requirements
Apply Operating System (OS) patches
Change the Message Authentication Code (MAC) address of the network interface
Network Access Control (NAC) is a technology that enforces security policies and controls on the devices that attempt to access a network. NAC can verify the identity and compliance of the devices, and grant or deny access based on predefined rules and criteria. NAC can also place the devices into different domains or segments, depending on their security posture and role. One of the domains that NAC can create is the isolated domain, which is a restricted network segment that isolates the devices that do not meet the security requirements or pose a potential threat to the network. The devices in the isolated domain have limited or no access to the network resources, and are subject to remediation actions. Remediation is the process of fixing or improving the security status of the devices, by applying the necessary updates, patches, configurations, or software. Remediation can be performed automatically by the NAC system, or manually by the device owner or administrator. Therefore, the best thing that can be done on a device that is placed into an isolated domain by NAC is to apply remediations according to the security requirements, which restores the device's compliance and enables it to access the network normally.
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of which phase?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The security accreditation task of the System Development Life Cycle (SDLC) process is completed at the end of the system implementation phase. The SDLC is a framework that describes the stages and activities involved in the development, deployment, and maintenance of a system. The SDLC typically consists of the following phases: system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal. The security accreditation task is the process of formally authorizing a system to operate in a specific environment, based on the security requirements, controls, and risks. The security accreditation task is part of the security certification and accreditation (C&A) process, which also includes the security certification task, which is the process of technically evaluating and testing the security controls and functionality of a system. The security accreditation task is completed at the end of the system implementation phase, which is the phase where the system is installed, configured, integrated, and tested in the target environment. The security accreditation task involves reviewing the security certification results and documentation, such as the security plan, the security assessment report, and the plan of action and milestones, and making a risk-based decision to grant, deny, or conditionally grant the authorization to operate (ATO) the system. The security accreditation task is usually performed by a senior official, such as the authorizing official (AO) or the designated approving authority (DAA), who has the authority and responsibility to accept the security risks and approve the system operation. The security accreditation task is not completed at the end of the system acquisition and development, system operations and maintenance, or system initiation phases. The system acquisition and development phase is the phase where the system requirements, design, and development are defined and executed, and the security controls are selected and implemented. The system operations and maintenance phase is the phase where the system is used and supported in the operational environment, and the security controls are monitored and updated. The system initiation phase is the phase where the system concept, scope, and objectives are established, and the security categorization and planning are performed.
Which of the following could be considered the MOST significant security challenge when adopting DevOps practices compared to a more traditional control framework?
Achieving Service Level Agreements (SLA) on how quickly patches will be released when a security flaw is found.
Maintaining segregation of duties.
Standardized configurations for logging, alerting, and security metrics.
Availability of security teams at the end of design process to perform last-minute manual audits and reviews.
The most significant security challenge when adopting DevOps practices compared to a more traditional control framework is maintaining segregation of duties. DevOps is a set of practices that integrates and automates the development and operation of software, applications, and services to improve the quality and speed of delivery and deployment, using techniques such as continuous integration, continuous delivery, continuous testing, continuous monitoring, and continuous feedback. A traditional control framework, by contrast, establishes and enforces security and governance through controls and mechanisms such as risk assessment, change management, configuration management, access control, and audit trails, with an emphasis on visibility and accountability. Segregation of duties is the principle that different roles or functions, such as development, testing, deployment, and maintenance, are assigned to different parties, so that no single party can perform every step of a process; it improves accuracy and reliability, helps prevent and detect fraud or errors, and supports audit and compliance activities. Maintaining it is the hardest part of adopting DevOps because DevOps deliberately merges these roles: the same team, and often the same automated pipeline, writes, tests, and deploys code, which blurs or removes the boundaries that a traditional control framework relies on. Achieving patching SLAs, standardizing configurations for logging, alerting, and security metrics, and arranging last-minute manual audits are operational concerns, but they do not conflict with DevOps principles in the same fundamental way.
Which of the following is the MOST common method of memory protection?
Compartmentalization
Segmentation
Error correction
Virtual Local Area Network (VLAN) tagging
The most common method of memory protection is segmentation. Segmentation is a technique that divides the memory space into logical segments, such as code, data, stack, and heap. Each segment has its own attributes, such as size, location, access rights, and protection level. Segmentation can help to isolate and protect the memory segments from unauthorized or unintended access, modification, or execution, as well as to prevent memory corruption, overflow, or leakage. Compartmentalization, error correction, and VLAN tagging are not methods of memory protection, but of information protection, data protection, and network protection, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 589; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 370.
At a MINIMUM, audits of permissions to individual or group accounts should be scheduled
annually
to correspond with staff promotions
to correspond with terminations
continually
At a minimum, audits of permissions to individual or group accounts should be scheduled continually. Audits of permissions review and verify the user accounts and access rights on a system or network, ensuring they are appropriate, necessary, and compliant with policies and standards. They improve the accuracy and reliability of access rights, identify and remove excessive, obsolete, or unauthorized rights, and support audit and compliance activities. Performing them continually, on a regular and consistent basis, allows the changes that affect access rights, such as role changes, transfers, promotions, or terminations, to be detected and addressed promptly, and keeps each audit small and timely rather than a large periodic effort. Annual audits may be too infrequent, because accounts and rights typically change more often than once a year; a yearly review also involves a large backlog of accounts to verify and provides feedback too late to be useful. Audits triggered only by staff promotions help align access rights with current roles and enforce the principle of least privilege, but they miss changes caused by transfers, role changes, or terminations, and they do not occur on a consistent schedule. Audits triggered only by terminations help ensure that a departing staff member's accounts and access rights are revoked, preventing unauthorized access or use, but they likewise miss other kinds of change and lack a regular cadence.
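A continual audit is easiest when it is automated. The sketch below (illustrative only; the account data, right names, and the `approved_rights` baseline are hypothetical) compares currently granted rights against an approved baseline and flags the differences, the kind of check that can run on every change or on a daily schedule:

```python
# Minimal sketch of a continual permissions audit: diff the rights a
# system actually grants against an approved baseline and report drift.

approved_rights = {            # hypothetical approved baseline
    "alice": {"payroll_read"},
    "bob":   {"payroll_read", "payroll_write"},
}

current_rights = {             # hypothetical state pulled from the system
    "alice": {"payroll_read", "payroll_write"},   # excessive right
    "carol": {"payroll_read"},                    # unknown account
}

def audit(approved: dict, current: dict) -> list[str]:
    findings = []
    for user, rights in current.items():
        baseline = approved.get(user)
        if baseline is None:
            findings.append(f"{user}: account not in approved baseline")
        else:
            for extra in rights - baseline:
                findings.append(f"{user}: unapproved right '{extra}'")
    for user in approved.keys() - current.keys():
        findings.append(f"{user}: approved account missing from system")
    return findings

for finding in audit(approved_rights, current_rights):
    print(finding)
```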
An organization has discovered that users are visiting unauthorized websites using anonymous proxies.
Which of the following is the BEST way to prevent future occurrences?
Remove the anonymity from the proxy
Analyze Internet Protocol (IP) traffic for proxy requests
Disable the proxy server on the firewall
Block the Internet Protocol (IP) address of known anonymous proxies
Anonymous proxies are servers that act as intermediaries between the user and the internet, hiding the user's real IP address and allowing them to bypass network restrictions and access unauthorized websites. The best way to prevent users from visiting unauthorized websites through anonymous proxies is to block the IP addresses of known anonymous proxies on the firewall or router; this prevents the user from ever establishing a connection with the proxy server and reaching the blocked content. Removing the anonymity from the proxy, analyzing IP traffic for proxy requests, and disabling the proxy server on the firewall are not effective preventive measures: the organization does not control third-party proxies, traffic analysis only detects the behavior after the fact, and the firewall's own proxy service is not what the users are abusing.
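One way to operationalize this is to maintain a blocklist of known anonymous-proxy addresses and translate it into firewall deny rules. The sketch below is illustrative only: the addresses are hypothetical, and the `iptables` rule strings are printed rather than applied; a real deployment would feed the list from a maintained threat feed.

```python
import ipaddress

# Hypothetical blocklist of known anonymous proxy addresses/ranges.
known_proxies = ["198.51.100.7", "203.0.113.0/24"]

def to_iptables_rules(entries):
    """Render one DROP rule per blocklist entry (printed, not applied)."""
    rules = []
    for entry in entries:
        net = ipaddress.ip_network(entry, strict=False)  # validates the entry
        rules.append(f"iptables -A FORWARD -d {net} -j DROP")
    return rules

for rule in to_iptables_rules(known_proxies):
    print(rule)
```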
What is the correct order of steps in an information security assessment?
Place the information security assessment steps on the left next to the numbered boxes on the right in the
correct order.
An information security assessment evaluates the security posture of a system, network, or organization and proceeds through four main steps performed in sequence. (The original item presented the steps as a drag-and-drop ordering exercise; the diagram is not reproduced in this text.)
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Assessment and Testing, page 853; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 6: Security Assessment and Testing, page 791.
A user has infected a computer with malware by connecting a Universal Serial Bus (USB) storage device.
Which of the following is MOST effective to mitigate future infections?
Develop a written organizational policy prohibiting unauthorized USB devices
Train users on the dangers of transferring data in USB devices
Implement centralized technical control of USB port connections
Encrypt removable USB devices containing data at rest
The most effective method to mitigate future infections caused by connecting a Universal Serial Bus (USB) storage device is to implement centralized technical control of USB port connections. USB ports are the physical interfaces through which USB devices, such as flash drives, keyboards, or mice, connect to a computer or network, and they pose a security risk because they can be used to introduce or spread malware, to steal or leak data, or to bypass other security controls. Centralized technical control uses a central system or policy to monitor, restrict, or disable USB port connections across the computers or the network, blocking or allowing devices based on criteria such as device type, device ID, user ID, time, or location. It also improves the visibility and auditability of USB activity, enforces USB policies consistently, and removes the reliance and burden on end users to do the right thing. Developing a written organizational policy prohibiting unauthorized USB devices raises awareness and responsibility, sets standards and guidelines for USB usage, and provides the basis and justification for enforcement and sanctions, but a policy alone cannot stop an infection if it is not implemented, communicated, and followed. Training users on the dangers of transferring data on USB devices improves their knowledge and skills, changes attitudes and behaviors, and empowers them to make informed and secure decisions, but it likewise depends on user compliance. Encrypting removable USB devices protects the confidentiality of data at rest on the device, but it does nothing to prevent malware from being introduced when the device is connected.
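As one concrete illustration of a technical USB control on Windows endpoints, the USB mass-storage driver can be disabled by setting the `Start` value of the `USBSTOR` service to 4; in an enterprise this setting is normally pushed centrally via Group Policy or endpoint-management tooling rather than run as a local script. A minimal sketch (Windows only, requires administrator rights):

```python
import winreg  # Windows-only standard library module

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
SERVICE_DISABLED = 4  # Start=4 prevents the USB mass-storage driver loading

def disable_usb_storage() -> None:
    """Disable the USB mass-storage driver (needs admin rights)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY,
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, SERVICE_DISABLED)

if __name__ == "__main__":
    disable_usb_storage()
    print("USB mass-storage driver disabled (takes effect on next connect).")
```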
What is the process of removing sensitive data from a system or storage device with the intent that the data cannot be reconstructed by any known technique?
Purging
Encryption
Destruction
Clearing
Purging is the process of removing sensitive data from a system or storage device with the intent that the data cannot be reconstructed by any known technique. Purging is also known as sanitization, erasure, or wiping, and it is a security measure to prevent unauthorized access, disclosure, or misuse of the data. Purging can be performed by using software tools or physical methods that overwrite, degauss, or destroy the data and the storage media. Purging is required when the system or storage device is decommissioned, disposed, transferred, or reused, and the data is no longer needed or has a high level of sensitivity or classification. Encryption, destruction, and clearing are not the same as purging, although they may be related or complementary processes. Encryption is the process of transforming data into an unreadable form by using a secret key or algorithm. Encryption can protect the data from unauthorized access or disclosure, but it does not remove the data from the system or storage device. The encrypted data can still be recovered if the key or algorithm is compromised or broken. Destruction is the process of physically damaging or disintegrating the system or storage device to the point that it is unusable and irreparable. Destruction can prevent the data from being reconstructed, but it may not be feasible, cost-effective, or environmentally friendly. Clearing is the process of removing data from a system or storage device by using logical techniques, such as overwriting or deleting. Clearing can protect the data from unauthorized access by normal means, but it does not prevent the data from being reconstructed by using advanced techniques, such as forensic analysis or data recovery tools.
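The distinction between clearing and purging can be made concrete with a simple overwrite routine. A single-pass overwrite like the sketch below is generally only a clearing technique; purging magnetic media typically relies on validated multi-pass overwrites, degaussing, or cryptographic erase, and overwriting is unreliable on flash/SSD media because of wear leveling. (Minimal sketch; the file path in the usage comment is hypothetical.)

```python
import os

def overwrite_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents in place, then delete it.

    Note: on SSDs and other flash media, wear leveling means the old
    blocks may survive, so this is a clearing technique, not purging.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # random fill
            f.flush()
            os.fsync(f.fileno())         # push the write to the device
    os.remove(path)

# Hypothetical usage:
# overwrite_file("/tmp/sensitive.dat", passes=3)
```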
Mandatory Access Controls (MAC) are based on:
security classification and security clearance
data segmentation and data classification
data labels and user access permissions
user roles and data encryption
Mandatory Access Controls (MAC) are based on security classification and security clearance. MAC is a type of access control model that assigns permissions to subjects and objects based on their security labels, which indicate their level of sensitivity or trustworthiness. MAC is enforced by the system or the network, rather than by the owner or the creator of the object, and it cannot be modified or overridden by the subjects. MAC can provide some benefits for security, such as enhancing the confidentiality and the integrity of the data, preventing unauthorized access or disclosure, and supporting the audit and compliance activities. MAC is commonly used in military or government environments, where the data is classified according to its level of sensitivity, such as top secret, secret, confidential, or unclassified. The subjects are granted security clearance based on their level of trustworthiness, such as their background, their role, or their need to know. The subjects can only access the objects that have the same or lower security classification than their security clearance, and the objects can only be accessed by the subjects that have the same or higher security clearance than their security classification. This is based on the concept of no read up and no write down, which requires that a subject can only read data of lower or equal sensitivity level, and can only write data of higher or equal sensitivity level. Data segmentation and data classification, data labels and user access permissions, and user roles and data encryption are not the bases of MAC, although they may be related or useful concepts or techniques. Data segmentation and data classification are techniques that involve dividing and organizing the data into smaller and more manageable units, and assigning them different categories or levels based on their characteristics or requirements, such as their type, their value, their sensitivity, or their usage. Data segmentation and data classification can provide some benefits for security, such as enhancing the visibility and the control of the data, facilitating the implementation and the enforcement of the security policies and controls, and supporting the audit and compliance activities. However, data segmentation and data classification are not the bases of MAC, as they are not the same as security classification and security clearance, and they can be used with other access control models, such as discretionary access control (DAC) or role-based access control (RBAC). Data labels and user access permissions are concepts that involve attaching metadata or tags to the data and the users, and specifying the rules or the criteria for accessing the data and the users. Data labels and user access permissions can provide some benefits for security, such as enhancing the identification and the authentication of the data and the users, facilitating the implementation and the enforcement of the security policies and controls, and supporting the audit and compliance activities. However, data labels and user access permissions are not the bases of MAC, as they are not the same as security classification and security clearance, and they can be used with other access control models, such as DAC or RBAC. User roles and data encryption are techniques that involve defining and assigning the functions or the responsibilities of the users, and transforming the data into an unreadable form that can only be accessed by authorized parties who possess the correct key. 
User roles and data encryption can provide some benefits for security, such as enhancing the authorization and the confidentiality of the data and the users, facilitating the implementation and the enforcement of the security policies and controls, and supporting the audit and compliance activities. However, user roles and data encryption are not the bases of MAC, as they are not the same as security classification and security clearance, and they can be used with other access control models, such as DAC or RBAC.
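The "no read up, no write down" rule described above (the Bell-LaPadula model) reduces to two comparisons between a subject's clearance and an object's classification. A minimal sketch, using an ordered list of labels:

```python
# Security labels ordered from least to most sensitive.
LEVELS = ["unclassified", "confidential", "secret", "top secret"]

def rank(label: str) -> int:
    return LEVELS.index(label)

def can_read(clearance: str, classification: str) -> bool:
    """No read up: a subject may read objects at or below its clearance."""
    return rank(clearance) >= rank(classification)

def can_write(clearance: str, classification: str) -> bool:
    """No write down: a subject may write objects at or above its clearance."""
    return rank(clearance) <= rank(classification)

assert can_read("secret", "confidential")          # read down: allowed
assert not can_read("confidential", "top secret")  # read up: denied
assert can_write("confidential", "secret")         # write up: allowed
assert not can_write("secret", "confidential")     # write down: denied
```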
An organization recently conducted a review of the security of its network applications. One of the
vulnerabilities found was that the session key used in encrypting sensitive information to a third party server had been hard-coded in the client and server applications. Which of the following would be MOST effective in mitigating this vulnerability?
Diffie-Hellman (DH) algorithm
Elliptic Curve Cryptography (ECC) algorithm
Digital Signature algorithm (DSA)
Rivest-Shamir-Adleman (RSA) algorithm
The most effective method of mitigating the vulnerability of hard-coded session keys is to use the Diffie-Hellman (DH) algorithm. The DH algorithm is a key exchange protocol that allows two parties to establish a shared secret key over an insecure channel, without revealing the key to anyone else. The DH algorithm uses the mathematical properties of modular arithmetic and discrete logarithms to generate the key. The DH algorithm can be used to create a session key for each communication session, instead of using a hard-coded key that is fixed and static. This can prevent an attacker from extracting the key from the client or server applications, or from intercepting the key during the transmission. The DH algorithm can also provide forward secrecy, which means that the compromise of one session key does not affect the security of the previous or future session keys. Elliptic Curve Cryptography (ECC) algorithm, Digital Signature Algorithm (DSA), and Rivest-Shamir-Adleman (RSA) algorithm are not the most effective methods of mitigating the vulnerability of hard-coded session keys, although they may be related or useful cryptographic techniques. ECC algorithm is a type of public key cryptography that uses the mathematical properties of elliptic curves to generate public and private keys. ECC algorithm can provide the same level of security as other public key algorithms, such as RSA, but with smaller key sizes and faster computations. ECC algorithm can be used for key exchange, encryption, or digital signatures, but it does not directly address the issue of hard-coded session keys. DSA algorithm is a type of public key cryptography that is used for digital signatures. DSA algorithm uses the mathematical properties of modular arithmetic and discrete logarithms to generate public and private keys, and to sign and verify messages. DSA algorithm can provide authentication, integrity, and non-repudiation, but it does not provide encryption or key exchange, and it does not directly address the issue of hard-coded session keys. RSA algorithm is a type of public key cryptography that is used for encryption, decryption, or digital signatures. RSA algorithm uses the mathematical properties of prime numbers and modular arithmetic to generate public and private keys, and to encrypt and decrypt messages, or to sign and verify messages. RSA algorithm can provide confidentiality, authentication, integrity, and non-repudiation, but it does not directly address the issue of hard-coded session keys.
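The key-exchange idea can be shown with toy numbers. The parameters below are far too small for real use, where p would be a large safe prime (2048+ bits) or an elliptic-curve group; the point is only that both sides derive the same session key without ever transmitting it:

```python
import secrets

# Toy Diffie-Hellman exchange. DO NOT use these parameters in practice.
p = 0xFFFFFFFB  # small public prime modulus (2**32 - 5), toy value
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral private value
b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral private value

A = pow(g, a, p)  # Alice sends A = g^a mod p
B = pow(g, b, p)  # Bob sends   B = g^b mod p

# Each side combines its own private value with the other's public value.
alice_key = pow(B, a, p)  # (g^b)^a mod p
bob_key   = pow(A, b, p)  # (g^a)^b mod p
assert alice_key == bob_key  # both derive the same session key

print(hex(alice_key))
```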
From a security perspective, which of the following assumptions MUST be made about input to an
application?
It is tested
It is logged
It is verified
It is untrusted
From a security perspective, the assumption that must be made about input to an application is that it is untrusted. Untrusted input is any data or information that is provided by an external or an unknown source, such as a user, a client, a network, or a file, and that is not validated or verified by the application before being processed or used by the application. Untrusted input can pose a serious security risk for the application, as it can contain or introduce malicious or harmful content or commands, such as malware, viruses, worms, trojans, or SQL injection, that can compromise or damage the confidentiality, the integrity, or the availability of the application, or the data or the systems that are connected to the application. Therefore, from a security perspective, the assumption that must be made about input to an application is that it is untrusted, and that it should be treated with caution and suspicion, and that it should be subjected to various security controls or mechanisms, such as input validation, input sanitization, input filtering, or input encoding, before being processed or used by the application. Input validation is the process or the technique of checking or verifying that the input meets the expected or the required format, type, length, range, or value, and that it does not contain or introduce any invalid or illegal characters, symbols, or commands. Input sanitization is the process or the technique of removing or modifying any invalid or illegal characters, symbols, or commands from the input, or replacing them with valid or legal ones, to prevent or mitigate any potential attacks or vulnerabilities. Input filtering is the process or the technique of allowing or blocking the input based on a predefined or a configurable set of rules or criteria, such as a whitelist or a blacklist, to prevent or mitigate any unwanted or unauthorized input. Input encoding is the process or the technique of transforming or converting the input into a different or a standard format or representation, such as HTML, URL, or Base64, to prevent or mitigate any interpretation or execution of the input by the application or the system. It is tested, it is logged, and it is verified are not the assumptions that must be made about input to an application from a security perspective, although they may be related or possible aspects or outcomes of input to an application. It is tested is an aspect or an outcome of input to an application, as it implies that the input has been subjected to various tests or evaluations, such as unit testing, integration testing, or penetration testing, to verify or validate the functionality and the quality of the input, as well as to detect or report any errors, bugs, or vulnerabilities in the input. However, it is tested is not an assumption that must be made about input to an application from a security perspective, as it is not a precautionary or a preventive measure to protect the application from untrusted input, and it may not be true or applicable for all input to an application. It is logged is an aspect or an outcome of input to an application, as it implies that the input has been recorded or stored in a log file or a database, along with other relevant information or metadata, such as the source, the destination, the timestamp, or the status of the input, to provide a trace or a history of the input, as well as to support the audit and the compliance activities. 
However, it is logged is not an assumption that must be made about input to an application from a security perspective, as it is not a precautionary or a preventive measure to protect the application from untrusted input, and it may not be true or applicable for all input to an application. It is verified is an aspect or an outcome of input to an application, as it implies that the input has been confirmed or authenticated by the application or the system, using various security controls or mechanisms, such as digital signatures, certificates, or tokens, to ensure the integrity and the authenticity of the input, as well as to prevent or mitigate any tampering or spoofing of the input. However, it is verified is not an assumption that must be made about input to an application from a security perspective, as it is not a precautionary or a preventive measure to protect the application from untrusted input, and it may not be true or applicable for all input to an application.
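The controls named above can be combined in a few lines. The sketch below is illustrative (the username rule is an assumed policy, not a standard): it validates input against a whitelist pattern and output-encodes anything destined for an HTML page.

```python
import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # assumed whitelist policy

def validate_username(raw: str) -> str:
    """Input validation: accept only the expected characters and length."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_comment(raw: str) -> str:
    """Output encoding: neutralize HTML metacharacters before display."""
    return html.escape(raw)

print(validate_username("alice_01"))
print(render_comment('<script>alert("xss")</script>'))
# -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```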
In a High Availability (HA) environment, what is the PRIMARY goal of working with a virtual router address as the gateway to a network?
The second of two routers can periodically check in to make sure that the first router is operational.
The second of two routers can better absorb a Denial of Service (DoS) attack knowing the first router is present.
The first of two routers fails and is reinstalled, while the second handles the traffic flawlessly.
The first of two routers can better handle specific traffic, while the second handles the rest of the traffic seamlessly.
The primary goal of working with a virtual router address as the gateway to a network is to provide high availability and fault tolerance for the network. A virtual router address is an IP address that is shared by two or more physical routers that are configured to act as a single logical router. This is achieved by using a protocol such as Virtual Router Redundancy Protocol (VRRP) or Hot Standby Router Protocol (HSRP) to coordinate the status and priority of the routers. One of the routers is designated as the master or active router, and the others are backups or standby routers. The master router is responsible for forwarding traffic to and from the virtual router address, while the backups monitor the master’s health and readiness. If the master router fails or becomes unreachable, one of the backups takes over the role of the master and continues to handle the traffic without any interruption or disruption to the network. This way, the network can maintain high availability and fault tolerance, even if one of the routers fails or needs to be reinstalled. The other options are not the primary goal of working with a virtual router address, although they may be some of the benefits or features of the protocol. References: Virtual Router IP Addresses - Cisco; How to set up a virtual router | Tom’s Guide; Configuring VRRP - Cisco.
As part of an application penetration testing process, session hijacking can BEST be achieved by which of the following?
Known-plaintext attack
Denial of Service (DoS)
Cookie manipulation
Structured Query Language (SQL) injection
Cookie manipulation is a technique that allows an attacker to intercept, modify, or forge a cookie, which is a piece of data that is used to maintain the state of a web session. By manipulating the cookie, the attacker can hijack the session and gain unauthorized access to the web application. Known-plaintext attack, DoS, and SQL injection are not directly related to session hijacking, although they can be used for other purposes, such as breaking encryption, disrupting availability, or executing malicious commands. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 522.
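During an authorized penetration test, the mechanics are often as simple as replaying a captured session cookie from the tester's own client. A minimal sketch using the `requests` library (the URL, cookie name, and cookie value are hypothetical; doing this against systems you are not authorized to test is illegal):

```python
import requests

# Hypothetical values: a session cookie captured from network traffic
# or a client-side store during an authorized test.
stolen_cookie = {"SESSIONID": "d41d8cd98f00b204e9800998ecf8427e"}

# Presenting the cookie makes the server treat us as the victim's session.
resp = requests.get("https://app.example.com/account",
                    cookies=stolen_cookie, timeout=10)
print(resp.status_code)
print("hijacked" if "Welcome back" in resp.text else "session rejected")
```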
Drag the following Security Engineering terms on the left to the BEST definition on the right.
Security Engineering is the discipline of designing, building, and maintaining secure systems; it has been described as the art and science of building dependable systems. (The original item matched a set of Security Engineering terms to their definitions in a table, which is not reproduced in this text.)
Security Engineering terms and definitions matter when developing, deploying, and maintaining secure systems: they establish a common language and framework for security professionals, stakeholders, and users, and help communicate the security objectives, requirements, and issues of the system. They also guide the security engineering process, which proceeds through security planning, analysis, design, implementation, testing, deployment, operation, and maintenance, and they support the security certification and accreditation (C&A) process, which involves security categorization, control selection, control implementation, control assessment, certification, accreditation, and monitoring.
Who would be the BEST person to approve an organization's information security policy?
Chief Information Officer (CIO)
Chief Information Security Officer (CISO)
Chief internal auditor
Chief Executive Officer (CEO)
The Chief Executive Officer (CEO) is the best person to approve the organization's information security policy. The information security policy is a high-level statement of management intent that applies to the whole organization, so it must be approved at the most senior level to carry the necessary authority and to demonstrate top management's commitment and accountability. The CIO and CISO typically develop, recommend, and enforce the policy, and the chief internal auditor evaluates compliance with it, but none of them carries ultimate responsibility for the organization in the way the CEO does.
When using third-party software developers, which of the following is the MOST effective method of providing software development Quality Assurance (QA)?
Retain intellectual property rights through contractual wording.
Perform overlapping code reviews by both parties.
Verify that the contractors attend development planning meetings.
Create a separate contractor development environment.
When using third-party software developers, the most effective method of providing software development Quality Assurance (QA) is to perform overlapping code reviews by both parties. Code reviews are the process of examining the source code of an application for quality, functionality, security, and compliance. Overlapping code reviews by both parties means that the code is reviewed by both the third-party developers and the contracting organization, and that the reviews cover the same or similar aspects of the code. This can ensure that the code meets the requirements and specifications, that the code is free of defects or vulnerabilities, and that the code is consistent and compatible with the existing system or environment. Retaining intellectual property rights through contractual wording, verifying that the contractors attend development planning meetings, and creating a separate contractor development environment are all possible methods of providing software development QA, but they are not the most effective method of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1026. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1050.
Which of the following is the MOST effective attack against cryptographic hardware modules?
Plaintext
Brute force
Power analysis
Man-in-the-middle (MITM)
The most effective attack against cryptographic hardware modules is power analysis. Power analysis is a type of side-channel attack that exploits the physical characteristics or behavior of a cryptographic device, such as a smart card, a hardware security module, or a cryptographic processor, to extract secret information, such as keys, passwords, or algorithms. Power analysis measures the power consumption or the electromagnetic radiation of the device, and analyzes the variations or patterns that correspond to the cryptographic operations or the data being processed. Power analysis can reveal the internal state or the logic of the device, and can bypass the security mechanisms or the tamper resistance of the device. Power analysis can be performed with low-cost and widely available equipment, and can be very difficult to detect or prevent. Plaintext, brute force, and man-in-the-middle (MITM) are not the most effective attacks against cryptographic hardware modules, as they are related to the encryption or transmission of the data, not the physical properties or behavior of the device. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 628. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 644.
Which of the following is a detective access control mechanism?
Log review
Least privilege
Password complexity
Non-disclosure agreement
The access control mechanism that is detective is log review. Log review is a process of examining and analyzing the records or events of the system or network activity, such as user login, file access, or network traffic, that are stored in log files. Log review can help to detect and identify any unauthorized, abnormal, or malicious access or behavior, and to provide evidence or clues for further investigation or response. Log review is a detective access control mechanism, as it can discover or reveal the occurrence or the source of the security incidents or violations, after they have happened. Least privilege, password complexity, and non-disclosure agreement are not detective access control mechanisms, as they are related to the restriction, protection, or confidentiality of the access or information, not the detection or identification of the security incidents or violations. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 932. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 948.
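Because log review is detective, it typically runs after the fact over collected records. A minimal sketch that flags accounts with repeated failed logins in a simple log format (the log lines and the alerting threshold are hypothetical):

```python
from collections import Counter

# Hypothetical log excerpt: "<timestamp> <result> user=<name>"
log_lines = [
    "2024-05-01T09:00:01 LOGIN_FAILED user=svc_backup",
    "2024-05-01T09:00:03 LOGIN_FAILED user=svc_backup",
    "2024-05-01T09:00:05 LOGIN_FAILED user=svc_backup",
    "2024-05-01T09:01:12 LOGIN_OK user=alice",
]

THRESHOLD = 3  # assumed alerting threshold

failures = Counter(
    line.split("user=")[1]
    for line in log_lines
    if "LOGIN_FAILED" in line
)

for user, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins for {user}")  # found after the fact
```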
What is the MOST effective method for gaining unauthorized access to a file protected with a long complex password?
Brute force attack
Frequency analysis
Social engineering
Dictionary attack
The most effective method for gaining unauthorized access to a file protected with a long complex password is social engineering. Social engineering is a type of attack that exploits the human factor or the psychological weaknesses of the target, such as trust, curiosity, greed, or fear, to manipulate them into revealing sensitive information, such as passwords, or performing malicious actions, such as opening malicious attachments or clicking malicious links. Social engineering can bypass the technical security controls, such as encryption or authentication, and can be more efficient and successful than other methods that rely on brute force or guesswork. Brute force attack, frequency analysis, and dictionary attack are not the most effective methods for gaining unauthorized access to a file protected with a long complex password, as they require a lot of time, resources, and computing power, and they can be thwarted by the use of strong passwords, password policies, or password managers. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, Security Assessment and Testing, page 813. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, Security Assessment and Testing, page 829.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
When determining appropriate resource allocation, which of the following is MOST important to monitor?
Number of system compromises
Number of audit findings
Number of staff reductions
Number of additional assets
The most important factor to monitor when determining appropriate resource allocation is the number of system compromises. The number of system compromises is the count or the frequency of the security incidents or breaches that affect the confidentiality, the integrity, or the availability of the system data or functionality, and that are caused by the unauthorized or the malicious access or activity. The number of system compromises can help to determine appropriate resource allocation, as it can indicate the level of security risk or threat that the system faces, and the level of security protection or improvement that the system needs. The number of system compromises can also help to evaluate the effectiveness or the efficiency of the current resource allocation, and to identify the areas or the domains that require more or less resources. Number of audit findings, number of staff reductions, and number of additional assets are not the most important factors to monitor when determining appropriate resource allocation, as they are related to the results or the outcomes of the audit process, the changes or the impacts of the staff size, or the additions or the expansions of the system resources, not the security incidents or breaches that affect the system data or functionality. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 863. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 879.
Which of the following describes the concept of a Single Sign -On (SSO) system?
Users are authenticated to one system at a time.
Users are identified to multiple systems with several credentials.
Users are authenticated to multiple systems with one login.
Only one user is using the system at a time.
Single Sign-On (SSO) is a technology that allows users to securely access multiple applications and services using just one set of credentials, such as a username and a password.
With SSO, users do not have to remember and enter multiple passwords for different applications and services, which can improve their convenience and productivity. SSO also enhances security, as users can use stronger passwords, avoid reusing passwords, and comply with password policies more easily. Moreover, SSO reduces the risk of phishing, credential theft, and password fatigue.
SSO is based on the concept of federated identity, which means that the identity of a user is shared and trusted across different systems that have established a trust relationship. SSO uses various protocols and standards, such as SAML, OAuth, OIDC, and Kerberos, to enable the exchange of identity information and authentication tokens between the systems.
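In an OIDC-style SSO flow, for example, the application never sees the user's password; it receives a signed identity token from the identity provider and verifies the signature and claims. A minimal sketch using the PyJWT library (the issuer, audience, and key are hypothetical):

```python
import jwt  # PyJWT: pip install pyjwt

def verify_sso_token(token: str, idp_public_key: str) -> dict:
    """Verify an identity token issued by the SSO identity provider."""
    claims = jwt.decode(
        token,
        idp_public_key,
        algorithms=["RS256"],                # accept only the expected algorithm
        audience="https://app.example.com",  # hypothetical audience
        issuer="https://idp.example.com",    # hypothetical issuer
    )
    return claims  # e.g. {"sub": "alice", "email": ...}

# Hypothetical usage:
# claims = verify_sso_token(token_from_redirect, idp_public_key_pem)
# print("authenticated as", claims["sub"])
```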
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following solutions would have MOST likely detected the use of peer-to-peer programs when the computer was connected to the office network?
Anti-virus software
Intrusion Prevention System (IPS)
Anti-spyware software
Integrity checking software
The best solution to detect the use of P2P programs when the computer was connected to the office network is an Intrusion Prevention System (IPS). An IPS is a device or a software that monitors, analyzes, and blocks the network traffic based on the predefined rules or policies, and that can prevent or stop any unauthorized or malicious access or activity on the network, such as P2P programs. An IPS can detect the use of P2P programs by inspecting the network packets, identifying the P2P protocols or signatures, and blocking or dropping the P2P traffic. Anti-virus software, anti-spyware software, and integrity checking software are not the best solutions to detect the use of P2P programs when the computer was connected to the office network, as they are related to the protection, removal, or verification of the software or files on the computer, not the monitoring, analysis, or blocking of the network traffic. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 512. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 528.
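An IPS recognizes P2P traffic largely by protocol signatures. For instance, every BitTorrent peer handshake begins with the byte 0x13 followed by the literal string "BitTorrent protocol", so a simple payload check like the sketch below would catch it (illustrative only; a real IPS inspects live traffic inline and enforces the blocking itself):

```python
# Signature-based detection of a BitTorrent handshake in a TCP payload.
BT_HANDSHAKE = b"\x13BitTorrent protocol"  # well-known protocol signature

def looks_like_bittorrent(payload: bytes) -> bool:
    return payload.startswith(BT_HANDSHAKE)

# Hypothetical captured payloads:
samples = [
    b"\x13BitTorrent protocol" + b"\x00" * 8 + b"infohash...peerid...",
    b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n",
]
for payload in samples:
    verdict = "DROP (P2P handshake)" if looks_like_bittorrent(payload) else "allow"
    print(verdict)
```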
Which of the following is a MAJOR consideration in implementing a Voice over IP (VoIP) network?
Use of a unified messaging.
Use of separation for the voice network.
Use of Network Access Control (NAC) on switches.
Use of Request for Comments (RFC) 1918 addressing.
The use of Network Access Control (NAC) on switches is a major consideration in implementing a Voice over IP (VoIP) network. NAC is a mechanism that enforces security policies on the network devices, such as switches, routers, firewalls, and servers. NAC can prevent unauthorized or compromised devices from accessing the network, or limit their access to specific segments or resources. NAC can also monitor and remediate the devices for compliance with the security policies, such as patch level, antivirus status, or configuration settings. NAC can enhance the security and performance of a VoIP network, as well as reduce the operational costs and risks. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 473; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 353.
For a service provider, which of the following MOST effectively addresses confidentiality concerns for customers using cloud computing?
Hash functions
Data segregation
File system permissions
Non-repudiation controls
For a service provider, data segregation is the most effective way to address confidentiality concerns for customers using cloud computing. Data segregation is the process of separating the data of different customers or tenants in a shared cloud environment, so that they cannot access or interfere with each other’s data. Data segregation can be achieved by using encryption, access control, virtualization, or other techniques. Data segregation can help to protect the confidentiality, integrity, and availability of the customer’s data, as well as to comply with the privacy and regulatory requirements. Hash functions, file system permissions, and non-repudiation controls are not the most effective ways to address confidentiality concerns for customers using cloud computing, as they do not provide the same level of isolation and protection as data segregation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 337. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, Security Architecture and Engineering, page 353.
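Cryptographic segregation is often implemented with a distinct data-encryption key per tenant, so that even if the shared storage layer is breached, one tenant's key never decrypts another tenant's records. A minimal sketch using the `cryptography` library's Fernet recipe (tenant names are hypothetical; real systems keep these keys in a KMS or HSM, not in process memory like this):

```python
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

# One key per tenant; in production these live in a KMS or HSM.
tenant_keys = {
    "tenant_a": Fernet(Fernet.generate_key()),
    "tenant_b": Fernet(Fernet.generate_key()),
}

ct_a = tenant_keys["tenant_a"].encrypt(b"tenant A payroll data")
ct_b = tenant_keys["tenant_b"].encrypt(b"tenant B payroll data")

# Each tenant's key decrypts only that tenant's data...
print(tenant_keys["tenant_a"].decrypt(ct_a))

# ...and fails on anyone else's, even though the ciphertexts share storage.
try:
    tenant_keys["tenant_a"].decrypt(ct_b)
except InvalidToken:
    print("tenant A's key cannot read tenant B's data")
```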
With data labeling, which of the following MUST be the key decision maker?
Information security
Departmental management
Data custodian
Data owner
With data labeling, the data owner must be the key decision maker. The data owner is the person or entity that has the authority and responsibility for the data, including its classification, protection, and usage. The data owner must decide how to label the data according to its sensitivity, criticality, and value, communicate the labeling scheme to the data custodians and users, and review and update the labels as needed. Information security and the data custodian support and enforce the labeling scheme but do not hold ownership authority over the data, and departmental management does not carry the accountability for the data that the data owner does. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 63; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 69.
Refer to the information below to answer the question.
A large organization uses unique identifiers and requires them at the start of every system session. Application access is based on job classification. The organization is subject to periodic independent reviews of access controls and violations. The organization uses wired and wireless networks and remote access. The organization also uses secure connections to branch offices and secure backup and recovery strategies for selected information and processes.
What MUST the access control logs contain in addition to the identifier?
Time of the access
Security classification
Denied access attempts
Associated clearance
The access control logs must contain the time of the access, in addition to the identifier. Access control logs are the records or the files that capture and store the information or the data related to the access control events or activities, such as the authentication, the authorization, the audit, or the accountability. Access control logs can help to monitor and analyze the access control performance and effectiveness, to detect and investigate any security incidents or breaches, and to provide evidence or proof for any legal or regulatory actions. The access control logs must contain the time of the access, as it can help to identify and verify when the access control event or activity occurred, and to correlate and compare it with other events or activities, such as the network traffic, the system activity, or the user behavior. The time of the access can also help to determine the duration and the frequency of the access control event or activity, and to measure and evaluate the access control efficiency and quality. The security classification, the denied access attempts, and the associated clearance are not the information that must be contained in the access control logs, as they are related to the level of sensitivity or protection of the data or the resource, the unsuccessful or rejected access control requests, or the level of authorization or permission of the user or the device, not the time of the access control event or activity. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 671. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 687.
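A log record that pairs the identifier with a timestamp is straightforward to produce with the standard library; the sketch below (the event fields are hypothetical) uses an ISO-8601 UTC timestamp so entries from different systems can be correlated:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(
    filename="access.log",
    level=logging.INFO,
    format="%(message)s",
)

def log_access(identifier: str, resource: str, granted: bool) -> None:
    """Write one access-control log entry: who, what, when, outcome."""
    timestamp = datetime.now(timezone.utc).isoformat()  # time of the access
    outcome = "GRANTED" if granted else "DENIED"
    logging.info(f"{timestamp} id={identifier} resource={resource} {outcome}")

log_access("user1234", "/payroll/reports", granted=True)
```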
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following methods is the MOST effective way of removing the Peer-to-Peer (P2P) program from the computer?
Run software uninstall
Re-image the computer
Find and remove all installation files
Delete all cookies stored in the web browser cache
The most effective way of removing the P2P program from the computer is to re-image the computer. Re-imaging the computer means to restore the computer to its original or desired state, by erasing or overwriting the existing data or software on the computer, and by installing a new or a backup image of the operating system and the applications on the computer. Re-imaging the computer can ensure that the P2P program and any other unwanted or harmful programs or files are completely removed from the computer, and that the computer is clean and secure. Run software uninstall, find and remove all installation files, and delete all cookies stored in the web browser cache are not the most effective ways of removing the P2P program from the computer, as they may not remove all the traces or components of the P2P program from the computer, or they may not address the other potential issues or risks that the P2P program may have caused on the computer. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 906. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 922.
Which of the following is the BEST solution to provide redundancy for telecommunications links?
Provide multiple links from the same telecommunications vendor.
Ensure that the telecommunications links connect to the network in one location.
Ensure that the telecommunications links connect to the network in multiple locations.
Provide multiple links from multiple telecommunications vendors.
The best solution to provide redundancy for telecommunications links is to provide multiple links from multiple telecommunications vendors. Redundancy is the ability to maintain the availability and functionality of a system or network in the event of a failure or disruption. By providing multiple links from multiple telecommunications vendors, the organization can ensure that there is always an alternative path for data transmission, and that the failure or outage of one vendor does not affect the entire network. Providing multiple links from the same telecommunications vendor, ensuring that the telecommunications links connect to the network in one location, and ensuring that the telecommunications links connect to the network in multiple locations are not the best solutions to provide redundancy for telecommunications links, as they do not offer the same level of diversity, resilience, and fault tolerance as providing multiple links from multiple telecommunications vendors. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 504. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 520.
A thorough review of an organization's audit logs finds that a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient. What type of attack has MOST likely occurred?
Spoofing
Eavesdropping
Man-in-the-middle
Denial of service
The type of attack that has most likely occurred when a disgruntled network administrator has intercepted emails meant for the Chief Executive Officer (CEO) and changed them before forwarding them to their intended recipient is a man-in-the-middle (MITM) attack. A MITM attack is a type of attack that involves an attacker intercepting, modifying, or redirecting the communication between two parties, without their knowledge or consent. The attacker can alter, delete, or inject data, or impersonate one of the parties, to achieve malicious goals, such as stealing information, compromising security, or disrupting service. A MITM attack can be performed on various types of networks or protocols, such as email, web, or wireless. Spoofing, eavesdropping, and denial of service are not the types of attack that have most likely occurred in this scenario, as they do not involve the modification or manipulation of the communication between the parties, but rather the falsification, observation, or prevention of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.
Refer to the information below to answer the question.
An organization experiencing a negative financial impact is forced to reduce budgets and the number of Information Technology (IT) operations staff performing basic logical access security administration functions. Security processes have been tightly integrated into normal IT operations and are not separate and distinct roles.
Which of the following will indicate where the IT budget is BEST allocated during this time?
Policies
Frameworks
Metrics
Guidelines
Metrics are the best indicator of where the IT budget should be allocated during this time. Metrics are measurements of the performance, effectiveness, efficiency, or quality of IT processes, activities, and outcomes. They support rational, objective, evidence-based budget allocation because they show the value, impact, or return of IT investments and identify gaps, risks, and opportunities for improvement. Metrics also help justify and communicate the allocation to senior management and stakeholders, and align it with business needs and requirements. Policies, frameworks, and guidelines are documents or models that define, guide, or standardize IT processes; they do not measure performance, so they are weaker indicators of where the budget is best spent. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 38. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 53.
A security manager has noticed an inconsistent application of server security controls resulting in vulnerabilities on critical systems. What is the MOST likely cause of this issue?
A lack of baseline standards
Improper documentation of security guidelines
A poorly designed security policy communication program
Host-based Intrusion Prevention System (HIPS) policies are ineffective
The most likely cause of the inconsistent application of server security controls resulting in vulnerabilities on critical systems is a lack of baseline standards. Baseline standards are the minimum level of security controls and measures that must be applied to the servers or other assets to ensure their protection and compliance. Baseline standards help to establish a consistent and uniform security posture across the organization, and to prevent or reduce the exposure to threats and risks. If there is a lack of baseline standards, the server security controls may vary in quality, effectiveness, or completeness, resulting in vulnerabilities on critical systems. Improper documentation of security guidelines, a poorly designed security policy communication program, and ineffective Host-based Intrusion Prevention System (HIPS) policies are not the most likely causes of this issue, as they do not directly affect the application of server security controls or the existence of baseline standards. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 35. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
Which of the following methods provides the MOST protection for user credentials?
Forms-based authentication
Digest authentication
Basic authentication
Self-registration
The method that provides the most protection for user credentials is digest authentication. Digest authentication is a type of authentication that verifies the identity of a user or a device by using a cryptographic hash function to transform the user credentials, such as username and password, into a digest or a hash value, before sending them over a network, such as the internet. Digest authentication can provide more protection for user credentials than basic authentication, which sends the user credentials in plain text, or forms-based authentication, which relies on the security of the web server or the web application. Digest authentication can prevent the interception, disclosure, or modification of the user credentials by third parties, and can also prevent replay attacks by using a nonce or a random value. Self-registration is not a method of authentication, but a process of creating a user account or a profile by providing some personal information, such as name, email, or phone number. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
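The sketch below, following the historical RFC 2617 scheme (MD5-based, without the optional qop extension; all values hypothetical), shows why only a hash of the credentials crosses the wire:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    """Compute an RFC 2617-style digest; the password itself is never sent."""
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # credentials hash
    ha2 = md5_hex(f"{method}:{uri}")                 # request hash
    return md5_hex(f"{ha1}:{nonce}:{ha2}")           # nonce defeats replay

print(digest_response("alice", "example.com", "s3cret",
                      "GET", "/index.html", "dcd98b7102dd2f0e"))
```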
Host-Based Intrusion Protection (HIPS) systems are often deployed in monitoring or learning mode during their initial implementation. What is the objective of starting in this mode?
Automatically create exceptions for specific actions or files
Determine which files are unsafe to access and blacklist them
Automatically whitelist actions or files known to the system
Build a baseline of normal or safe system events for review
A Host-Based Intrusion Protection (HIPS) system is a software that monitors and blocks malicious activities on a single host, such as a computer or a server. A HIPS system can also prevent unauthorized changes to the system configuration, files, or registry12
During the initial implementation, a HIPS system is often deployed in monitoring or learning mode, which means that it observes the normal behavior of the system and the applications running on it, without blocking or alerting on any events. The objective of starting in this mode is to build a baseline of normal or safe system events that administrators can review before enforcement begins.
From that reviewed baseline, exceptions can then be created for legitimate actions or files that would otherwise trigger false alarms or unwanted blocks, which reduces false positives and improves the accuracy and efficiency of the HIPS. However, the monitoring or learning mode should not last too long, as it leaves the system exposed to attacks that are neither detected nor prevented. Once a sufficient baseline of normal behavior has been established and reviewed, the HIPS should be switched to a more proactive mode, such as alerting or blocking mode, which can actively respond to suspicious or malicious events.
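A toy sketch of this learning-then-enforce behavior (entirely hypothetical event names, not any vendor's API):

```python
from collections import Counter

class LearningModeHips:
    """Toy sketch: record event frequencies in learning mode, then flag
    events absent from the baseline for administrator review."""
    def __init__(self):
        self.baseline = Counter()
        self.learning = True

    def observe(self, event: str):
        if self.learning:
            self.baseline[event] += 1  # build the baseline; no blocking yet
        elif event not in self.baseline:
            print(f"REVIEW: unseen event {event!r}")  # candidate for alert/block

hips = LearningModeHips()
for e in ["svc.exe:read:config", "svc.exe:write:log"]:
    hips.observe(e)
hips.learning = False                   # switch to enforcement mode
hips.observe("svc.exe:write:registry")  # not in the baseline -> flagged
```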
Which of the following is critical for establishing an initial baseline for software components in the operation and maintenance of applications?
Application monitoring procedures
Configuration control procedures
Security audit procedures
Software patching procedures
Configuration control procedures are critical for establishing an initial baseline for software components in the operation and maintenance of applications. Configuration control procedures are the processes and activities that ensure the integrity, consistency, and traceability of the software components throughout the SDLC. Configuration control procedures include identifying, documenting, storing, reviewing, approving, and updating the software components, as well as managing the changes and versions of the components. By establishing an initial baseline, the organization can have a reference point for measuring and evaluating the performance, quality, and security of the software components, and for applying and tracking the changes and updates to the components. The other options are not as critical as configuration control procedures, as they either do not establish an initial baseline (A and C), or do not apply to all software components (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 468; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 568.
When implementing a secure wireless network, which of the following supports authentication and authorization for individual client endpoints?
Temporal Key Integrity Protocol (TKIP)
Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK)
Wi-Fi Protected Access 2 (WPA2) Enterprise
Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)
When implementing a secure wireless network, the option that supports authentication and authorization for individual client endpoints is Wi-Fi Protected Access 2 (WPA2) Enterprise. WPA2 is a security protocol that provides encryption and authentication for wireless networks, based on the IEEE 802.11i standard. WPA2 has two modes: Personal and Enterprise. WPA2 Personal uses a Pre-Shared Key (PSK) that is shared among all the devices on the network, and does not require a separate authentication server. WPA2 Enterprise uses an Extensible Authentication Protocol (EAP) that authenticates each device individually, using a username and password or a certificate, and requires a Remote Authentication Dial-In User Service (RADIUS) server or another authentication server. WPA2 Enterprise provides more security and granularity than WPA2 Personal, as it can support different levels of access and permissions for different users or groups, and can prevent unauthorized or compromised devices from joining the network. Temporal Key Integrity Protocol (TKIP), Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK), and Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) are not the options that support authentication and authorization for individual client endpoints, as they are related to the encryption or integrity of the wireless data, not the identity or access of the wireless devices. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 506. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 522.
An organization decides to implement a partial Public Key Infrastructure (PKI) with only the servers having digital certificates. What is the security benefit of this implementation?
Clients can authenticate themselves to the servers.
Mutual authentication is available between the clients and servers.
Servers are able to issue digital certificates to the client.
Servers can authenticate themselves to the client.
A Public Key Infrastructure (PKI) is a system that provides the services and mechanisms for creating, managing, distributing, using, storing, and revoking digital certificates, which are electronic documents that bind a public key to an identity. A digital certificate can be used to authenticate the identity of an entity, such as a person, a device, or a server, that possesses the corresponding private key. An organization can implement a partial PKI with only the servers having digital certificates, which means that only the servers can prove their identity to the clients, but not vice versa. The security benefit of this implementation is that servers can authenticate themselves to the client, which can prevent impersonation, spoofing, or man-in-the-middle attacks by malicious servers. Clients can authenticate themselves to the servers, mutual authentication is available between the clients and servers, and servers are able to issue digital certificates to the client are not the security benefits of this implementation, as they require the clients to have digital certificates as well. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 615. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Cryptography and Symmetric Key Algorithms, page 631.
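A minimal sketch with Python's standard ssl module shows this one-way authentication: the client verifies the server's certificate while presenting none of its own:

```python
import socket
import ssl

# The default context verifies the server's certificate against trusted CAs
# and checks the hostname -- the client needs no certificate of its own.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # negotiated TLS version
        print(tls.getpeercert()["subject"])  # the authenticated server identity
```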
What is the PRIMARY advantage of using automated application security testing tools?
The application can be protected in the production environment.
Large amounts of code can be tested using fewer resources.
The application will fail less when tested using these tools.
Detailed testing of code functions can be performed.
Automated application security testing tools are software tools that can scan, analyze, and test the code of an application for vulnerabilities, errors, or flaws. The primary advantage of using these tools is that they can test large amounts of code using fewer resources, such as time, money, and human effort, than manual testing. This can improve the efficiency, effectiveness, and coverage of the testing process. The application can be protected in the production environment, the application will fail less when tested using these tools, and detailed testing of code functions can be performed are all possible outcomes of using automated application security testing tools, but they are not the primary advantage of using them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1017. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1039.
What is the MOST critical factor to achieve the goals of a security program?
Capabilities of security resources
Executive management support
Effectiveness of security management
Budget approved for security resources
The most critical factor in achieving the goals of a security program is executive management support. Executive management is the highest level of decision-making authority in the organization, such as the board of directors, the chief executive officer, or the chief information officer. Its support means endorsement, sponsorship, and involvement in security planning, implementation, monitoring, and auditing. This support is the most critical factor because it provides the vision, direction, and strategy for the program and aligns it with business needs; it secures the resources, budget, and authority the program requires; it fosters a security culture, awareness, and governance; and it demonstrates commitment and accountability to stakeholders, customers, and regulators. Capabilities of security resources, effectiveness of security management, and the approved budget all matter, but they depend on, and follow from, executive management support. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 33. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
Which of the following is the BEST way to determine if a particular system is able to identify malicious software without executing it?
Testing with a Botnet
Testing with an EICAR file
Executing a binary shellcode
Run multiple antivirus programs
The best way to determine if a particular system is able to identify malicious software without executing it is to test it with an EICAR file. An EICAR file is a standard file that is used to test the functionality and performance of antivirus software, without using any real malware. An EICAR file is a harmless text file that contains a specific string of characters that is recognized by most antivirus software as a virus signature. An EICAR file can be used to check if the antivirus software is installed, configured, updated, and working properly, without risking any damage or infection to the system. Testing with a botnet, executing a binary shellcode, and running multiple antivirus programs are not the best ways to determine if a particular system is able to identify malicious software without executing it, as they may involve using or creating actual malware, which can be dangerous, illegal, or unethical, and may compromise the security or performance of the system. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, Security Assessment and Testing, page 813. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, Security Assessment and Testing, page 829.
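For illustration, the published 68-byte EICAR test string can be written to disk like this; a working on-access scanner should react without any real malware being involved:

```python
# The published 68-byte EICAR test string; harmless, but antivirus engines
# are expected to detect it as if it were malware.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

with open("eicar_test.txt", "w") as f:
    f.write(EICAR)
# If on-access scanning works, the file should be quarantined or deleted
# shortly after it is written, without anything being executed.
```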
Which of the following provides the MOST protection against data theft of sensitive information when a laptop is stolen?
Set up a BIOS and operating system password
Encrypt the virtual drive where confidential files can be stored
Implement a mandatory policy in which sensitive data cannot be stored on laptops, but only on the corporate network
Encrypt the entire disk and delete contents after a set number of failed access attempts
Encrypting the entire disk and deleting the contents after a set number of failed access attempts provides the most protection against data theft of sensitive information when a laptop is stolen. This method ensures that the data is unreadable without the correct decryption key, and that the data is erased if someone tries to guess the key or bypass the encryption. Setting up a BIOS and operating system password, encrypting the virtual drive, or implementing a policy are less effective methods, as they can be circumvented by physical access, booting from another device, or copying the data to another location. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, p. 269; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management (IAM), p. 521.
Which component of a web application that stores the session state in a cookie can an attacker bypass?
An initialization check
An identification check
An authentication check
An authorization check
An authorization check is a component of a web application that stores the session state in a cookie that can be bypassed by an attacker. An authorization check verifies that the user has the appropriate permissions to access the requested resources or perform the desired actions. However, if the session state is stored in a cookie, an attacker can manipulate the cookie to change the user’s role or privileges, and bypass the authorization check. Therefore, it is recommended to store the session state on the server side, or use encryption and integrity protection for the cookie. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, p. 1015; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, p. 503.
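One common mitigation is to add integrity protection so that tampering is detectable. A minimal sketch using an HMAC over the cookie value (hypothetical secret and field names):

```python
import hashlib
import hmac

SECRET = b"server-side-secret-key"  # hypothetical; never shipped to the client

def sign(value: str) -> str:
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(cookie: str):
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign("role=user")
tampered = cookie.replace("role=user", "role=admin", 1)
print(verify(cookie))    # 'role=user'
print(verify(tampered))  # None -- the forged role fails the integrity check
```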
Which of the following BEST mitigates a replay attack against a system using identity federation and Security Assertion Markup Language (SAML) implementation?
Two-factor authentication
Digital certificates and hardware tokens
Timed sessions and Secure Socket Layer (SSL)
Passwords with alpha-numeric and special characters
The best way to mitigate a replay attack against a system using identity federation and Security Assertion Markup Language (SAML) implementation is to use timed sessions and Secure Socket Layer (SSL). A replay attack is a type of network attack that involves capturing and retransmitting a valid message or data to gain unauthorized access or perform malicious actions. Identity federation is a process that enables the sharing of identity information across different security domains, such as different organizations or applications. SAML is a standard protocol that enables identity federation by using XML-based assertions to exchange authentication and authorization information. To prevent a replay attack, the system can use timed sessions and SSL. Timed sessions are sessions that have a limited duration and expire after a certain period of time or inactivity. SSL is a protocol that provides encryption and authentication for data transmission over the internet. By using timed sessions and SSL, the system can ensure that the SAML assertions are valid, fresh, and secure, and that they cannot be reused or tampered with by an attacker. Two-factor authentication, digital certificates and hardware tokens, and passwords with alpha-numeric and special characters are not the best ways to mitigate a replay attack against a system using identity federation and SAML implementation, as they do not address the specific vulnerabilities of the SAML protocol or the network transmission. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 462. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 478.
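A minimal sketch of the timed-session idea (hypothetical inputs; a real SAML service provider would parse these values from the assertion's Conditions element):

```python
from datetime import datetime, timedelta, timezone

seen_assertion_ids = set()  # in practice, a shared cache with expiry

def accept_assertion(assertion_id: str, not_before: datetime,
                     not_on_or_after: datetime) -> bool:
    """Reject stale or replayed SAML assertions."""
    now = datetime.now(timezone.utc)
    if not (not_before <= now < not_on_or_after):
        return False                  # outside its validity window: stale
    if assertion_id in seen_assertion_ids:
        return False                  # same assertion presented twice: replay
    seen_assertion_ids.add(assertion_id)
    return True

now = datetime.now(timezone.utc)
print(accept_assertion("_abc123", now - timedelta(minutes=1), now + timedelta(minutes=4)))  # True
print(accept_assertion("_abc123", now - timedelta(minutes=1), now + timedelta(minutes=4)))  # False: replay
```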
An organization publishes and periodically updates its employee policies in a file on their intranet. Which of the following is a PRIMARY security concern?
Availability
Confidentiality
Integrity
Ownership
The primary security concern for an organization that publishes and periodically updates its employee policies in a file on their intranet is integrity. Integrity is the property that ensures that the data or the information is accurate, complete, consistent, and authentic, and that it has not been modified, altered, or corrupted by unauthorized or malicious parties. Integrity is a primary security concern for the employee policies file on the intranet, as it can affect the compliance, trust, and reputation of the organization, and the rights and responsibilities of the employees. The employee policies file must reflect the current and valid policies of the organization, and must not be changed or tampered with by anyone who is not authorized or qualified to do so. Availability, confidentiality, and ownership are not the primary security concerns for the employee policies file on the intranet, as they are related to the accessibility, protection, or attribution of the data or the information, not the accuracy or the authenticity of the data or the information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 20. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 33.
From a security perspective, which of the following is a best practice to configure a Domain Name Service (DNS) system?
Configure secondary servers to use the primary server as a zone forwarder.
Block all Transmission Control Protocol (TCP) connections.
Disable all recursive queries on the name servers.
Limit zone transfers to authorized devices.
From a security perspective, the best practice to configure a DNS system is to limit zone transfers to authorized devices. Zone transfers are the processes of replicating the DNS data from one server to another, usually from a primary server to a secondary server. Zone transfers can expose sensitive information about the network topology, hosts, and services to attackers, who can use this information to launch further attacks. Therefore, zone transfers should be restricted to only the devices that need them, and authenticated and encrypted to prevent unauthorized access or modification. The other options are not as good as limiting zone transfers, as they either do not provide sufficient security for the DNS system (A and B), or do not address the zone transfer issue (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 156; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 166.
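As an illustration of why this matters, the sketch below, assuming the third-party dnspython package, attempts a zone transfer (AXFR); against a well-configured server the request should be refused:

```python
# Sketch using the third-party dnspython package (assumed installed) to test
# whether a name server answers zone transfer (AXFR) requests at all.
import dns.query
import dns.zone

try:
    zone = dns.zone.from_xfr(dns.query.xfr("203.0.113.53", "example.com"))
    print("AXFR allowed -- zone data exposed:", sorted(zone.nodes.keys())[:5])
except Exception as exc:
    print("AXFR refused (the desired result):", exc)
```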
Which of the following is the BEST countermeasure to brute force login attacks?
Changing all canonical passwords
Decreasing the number of concurrent user sessions
Restricting initial password delivery only in person
Introducing a delay after failed system access attempts
The best countermeasure to brute force login attacks is to introduce a delay after failed system access attempts. A brute force login attack is a type of attack that tries to guess the username and password of a system or account by using a large number of possible combinations, usually with the help of automated tools or scripts. A delay after failed system access attempts is a security mechanism that imposes a waiting time or a penalty before allowing another login attempt, after a certain number of unsuccessful attempts. This can slow down or discourage the brute force login attack, as it increases the time and effort required to find the correct credentials. Changing all canonical passwords, decreasing the number of concurrent user sessions, and restricting initial password delivery only in person are not the best countermeasures to brute force login attacks, as they do not directly address the frequency or speed of the login attempts or the use of automated tools or scripts. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
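A minimal sketch of such a delay, using in-memory counters and exponential backoff (illustrative only; a production system would persist counters and combine this with lockout and monitoring):

```python
import time

failed_attempts = {}  # username -> consecutive failures (in-memory sketch)

def check_login(username: str, password_ok: bool) -> bool:
    """Impose an exponentially growing delay after each failed attempt."""
    failures = failed_attempts.get(username, 0)
    if failures:
        time.sleep(min(2 ** failures, 60))   # 2s, 4s, 8s ... capped at 60s
    if password_ok:
        failed_attempts.pop(username, None)  # reset the counter on success
        return True
    failed_attempts[username] = failures + 1
    return False
```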
Which of the following is the PRIMARY benefit of a formalized information classification program?
It drives audit processes.
It supports risk assessment.
It reduces asset vulnerabilities.
It minimizes system logging requirements.
A formalized information classification program is a set of policies and procedures that define the categories, criteria, and responsibilities for classifying information assets according to their value, sensitivity, and criticality. The primary benefit of such a program is that it supports risk assessment, which is the process of identifying, analyzing, and evaluating the risks to the information assets and the organization. By classifying information assets, the organization can prioritize the protection of the most important and vulnerable assets, determine the appropriate security controls and measures, and allocate the necessary resources and budget. It drives audit processes, it reduces asset vulnerabilities, and it minimizes system logging requirements are all possible benefits of a formalized information classification program, but they are not the primary benefit of doing so. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 39. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 52.
Which of the following violates identity and access management best practices?
User accounts
System accounts
Generic accounts
Privileged accounts
The type of accounts that violates identity and access management best practices is generic accounts. Generic accounts are accounts that are shared by multiple users or devices, and do not have a specific or unique identity associated with them. Generic accounts are often used for convenience, compatibility, or legacy reasons, but they pose a serious security risk, as they can compromise the accountability, traceability, and auditability of the actions and activities performed by the users or devices. Generic accounts can also enable unauthorized or malicious access, as they may have weak or default passwords, or may not have proper access control or monitoring mechanisms. User accounts, system accounts, and privileged accounts are not the types of accounts that violate identity and access management best practices, as they are accounts that have a specific or unique identity associated with them, and can be subject to proper authentication, authorization, and auditing measures. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 660. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 676.
Multi-Factor Authentication (MFA) is necessary in many systems given common types of password attacks. Which of the following is a correct list of password attacks?
Masquerading, salami, malware, polymorphism
Brute force, dictionary, phishing, keylogger
Zeus, netbus, rabbit, turtle
Token, biometrics, IDS, DLP
The correct list of password attacks is brute force, dictionary, phishing, and keylogger. Password attacks aim to guess, crack, or steal the passwords or credentials of users or systems in order to gain unauthorized access to information or resources. Common methods include:
- Brute force: trying all possible combinations of characters or symbols until the correct password is found.
- Dictionary: using a list of common or likely words and phrases as guesses for the password.
- Phishing: using fraudulent emails or websites that impersonate legitimate parties to trick users into revealing their credentials.
- Keylogger: using software or a hardware device that records keystrokes and captures or transmits credentials.
Masquerading, salami, malware, and polymorphism are not password attacks; they concern the impersonation, manipulation, infection, or mutation of data or systems. Zeus, netbus, rabbit, and turtle are names of specific malware, such as trojans, worms, or viruses, not password attack methods. Token, biometrics, IDS, and DLP are security controls or technologies, not attacks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 684. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 700.
According to best practice, which of the following groups is the MOST effective in performing an information security compliance audit?
In-house security administrators
In-house Network Team
Disaster Recovery (DR) Team
External consultants
According to best practice, the most effective group in performing an information security compliance audit is external consultants. External consultants are independent and objective third parties that can provide unbiased and impartial assessment of the organization’s compliance with the security policies, standards, and regulations. External consultants can also bring expertise, experience, and best practices from other organizations and industries, and offer recommendations for improvement. The other options are not as effective as external consultants, as they either have a conflict of interest or lack of independence (A and B), or do not have the primary role or responsibility of conducting compliance audits (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 240; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 302.
Refer to the information below to answer the question.
A new employee is given a laptop computer with full administrator access. This employee does not have a personal computer at home and has a child that uses the computer to send and receive e-mail, search the web, and use instant messaging. The organization’s Information Technology (IT) department discovers that a peer-to-peer program has been installed on the computer using the employee's access.
Which of the following documents explains the proper use of the organization's assets?
Human resources policy
Acceptable use policy
Code of ethics
Access control policy
The document that explains the proper use of the organization’s assets is the acceptable use policy. An acceptable use policy is a document that defines the rules and guidelines for the appropriate and responsible use of the organization’s information systems and resources, such as computers, networks, or devices. An acceptable use policy can help to prevent or reduce the misuse, abuse, or damage of the organization’s assets, and to protect the security, privacy, and reputation of the organization and its users. An acceptable use policy can also specify the consequences or penalties for violating the policy, such as disciplinary actions, termination, or legal actions. A human resources policy, a code of ethics, and an access control policy are not the documents that explain the proper use of the organization’s assets, as they are related to the management, values, or authorization of the organization’s employees or users, not the usage or responsibility of the organization’s information systems or resources. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 47. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 62.
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on this process. It works as follows: the sender computes a hash of the message and encrypts it with the sender’s private key, producing a digital signature that is attached to the message; the receiver decrypts the signature with the sender’s public key and compares the recovered hash with a hash computed over the received message, and a match confirms the identity of the sender.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
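A minimal sign-and-verify sketch, assuming the third-party cryptography package (in practice the private key signs a hash of the message, i.e., produces a digital signature, rather than encrypting the whole plaintext):

```python
# Sketch with the third-party `cryptography` package (assumed installed):
# signing with the private key, verifying with the public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"wire transfer approved"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(message, pss, hashes.SHA256())  # only the sender can do this
try:
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid: sender identified")              # anyone can check this
except InvalidSignature:
    print("signature invalid")
```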
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, and macros, and it can also pose various security risks, such as malicious code, unauthorized access, or data leakage. Mobile code security models are the techniques that are used to protect systems and users from the threats of mobile code. Code signing relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider computes a hash of the code and encrypts it with the provider’s private key, producing a digital signature that is distributed with the code along with the provider’s certificate; the code consumer verifies the signature with the provider’s public key, confirming the origin and integrity of the code, and then decides, based solely on trust in the provider, whether to run it.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
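For illustration, the standard CVSS v3 qualitative rating bands can be expressed as a small mapping function:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # 'Critical'
```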
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves the following steps: generating the key pair, registering the end entity’s identity information with the RA, and creating and submitting the certificate request to the certification authority.
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
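A minimal initialization-phase sketch, assuming the third-party cryptography package: generating a key pair and a certificate signing request (CSR) for submission to the RA or CA (hypothetical subject name):

```python
# Initialization-phase sketch with the third-party `cryptography` package
# (assumed installed): generate a key pair and a certificate signing request.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(key, hashes.SHA256())   # the request is signed with the new private key
)
print(csr.public_bytes(serialization.Encoding.PEM).decode())  # submitted to the CA/RA
```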
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the client connects to the server, which presents a digital certificate containing its public key; the client validates the certificate against trusted certificate authorities; the parties then use public key cryptography to establish a shared session key; and the remainder of the communication is encrypted symmetrically with that session key.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
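To make the distinction concrete, here is a toy Diffie-Hellman exchange with deliberately tiny public parameters; both parties derive the same secret without ever transmitting it:

```python
# Toy Diffie-Hellman with tiny numbers (real deployments use 2048-bit groups):
# both parties derive the same secret without ever sending it.
p, g = 23, 5                    # public prime modulus and generator
a, b = 6, 15                    # private values chosen by Alice and Bob

A = pow(g, a, p)                # Alice sends A = g^a mod p
B = pow(g, b, p)                # Bob sends   B = g^b mod p

assert pow(B, a, p) == pow(A, b, p)   # both compute g^(ab) mod p
print("shared secret:", pow(B, a, p))
```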
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
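A minimal compress-then-encrypt sketch, assuming the third-party cryptography package for the encryption step:

```python
# Sketch: compress before encrypting (third-party `cryptography` package assumed).
import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"attack at dawn " * 100             # highly redundant plaintext
ciphertext = f.encrypt(zlib.compress(plaintext))  # compress first, then encrypt

recovered = zlib.decompress(f.decrypt(ciphertext))  # reverse order to recover
assert recovered == plaintext
print(len(plaintext), "->", len(ciphertext))  # redundancy removed before encryption
```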
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
Which of the following is an effective method for avoiding magnetic media data remanence?
Degaussing
Encryption
Data Loss Prevention (DLP)
Authentication
Degaussing is an effective method for avoiding magnetic media data remanence, which is the residual representation of data that remains on a storage device after it has been erased or overwritten. Degaussing is a process of applying a strong magnetic field to the storage device, such as a hard disk or a tape, to erase the data and destroy the magnetic alignment of the media. Degaussing can ensure that the data is unrecoverable, even by forensic tools or techniques. Encryption, DLP, and authentication are not methods for avoiding magnetic media data remanence, as they do not erase the data from the storage device, but rather protect it from unauthorized access or disclosure. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 631; CISSP For Dummies, 7th Edition, Chapter 9, page 251.
Which of the following is a security feature of Global Systems for Mobile Communications (GSM)?
It uses a Subscriber Identity Module (SIM) for authentication.
It uses encrypting techniques for all communications.
The radio spectrum is divided with multiple frequency carriers.
The signal is difficult to read as it provides end-to-end encryption.
A security feature of Global Systems for Mobile Communications (GSM) is that it uses a Subscriber Identity Module (SIM) for authentication. A SIM is a smart card that contains the subscriber’s identity, phone number, network information, and encryption keys. The SIM is inserted into the mobile device and communicates with the network to authenticate the subscriber and establish a secure connection. The SIM also stores the subscriber’s contacts, messages, and preferences. The SIM provides security by preventing unauthorized access to the subscriber’s account and data, and by allowing the subscriber to easily switch devices without losing their information12. References: 1: GSM - Security and Encryption; 2: Introduction to GSM security.
Multi-threaded applications are more at risk than single-threaded applications to
race conditions.
virus infection.
packet sniffing.
database injection.
Multi-threaded applications are more at risk than single-threaded applications to race conditions. A race condition is a type of concurrency error that occurs when two or more threads access or modify the same shared resource without proper synchronization or coordination. This may result in inconsistent, unpredictable, or erroneous outcomes, as the final result depends on the timing and order of the thread execution. Race conditions can compromise the security, reliability, and functionality of the application, and can lead to data corruption, memory leaks, deadlock, or privilege escalation12. References: 1: What is a Race Condition?; 2: Race Conditions - OWASP Cheat Sheet Series.
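A short demonstration of a race condition and its fix with a lock (results of the unlocked run vary between interpreter versions and runs):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times: int, use_lock: bool):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:
                counter += 1  # read-modify-write performed atomically
        else:
            counter += 1      # unsynchronized read-modify-write: updates can be lost

for use_lock in (False, True):
    counter = 0
    threads = [threading.Thread(target=increment, args=(100_000, use_lock))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Only the locked run reliably reaches 400000.
    print("with lock" if use_lock else "no lock", "->", counter)
```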
While impersonating an Information Security Officer (ISO), an attacker obtains information from company employees about their User IDs and passwords. Which method of information gathering has the attacker used?
Trusted path
Malicious logic
Social engineering
Passive misuse
Social engineering is the method of information gathering that the attacker has used while impersonating an ISO and obtaining information from company employees about their User IDs and passwords. Social engineering is a technique of manipulating or deceiving people into revealing confidential or sensitive information, or performing actions that compromise the security of an organization or a system1. Social engineering can exploit the human factors, such as trust, curiosity, fear, or greed, to influence the behavior or judgment of the target. Social engineering can take various forms, such as phishing, baiting, pretexting, or impersonation. Trusted path, malicious logic, and passive misuse are not methods of information gathering that the attacker has used, as they are related to different aspects of security or attack. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19.
Which of the following is a network intrusion detection technique?
Statistical anomaly
Perimeter intrusion
Port scanning
Network spoofing
Statistical anomaly is a network intrusion detection technique that compares the current network activity with a baseline of normal behavior, and detects any deviations that exceed a predefined threshold. Statistical anomaly detection can identify unknown or novel attacks, but it may also generate false positives if the baseline is not updated or the threshold is not set properly. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 770; CISSP For Dummies, 7th Edition, Chapter 7, page 237.
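A minimal Python sketch of the idea (the baseline values and threshold are illustrative assumptions): flag an observation when it deviates from the learned baseline by more than a chosen number of standard deviations.

```python
# Statistical anomaly detection: compare the current observation against a
# baseline of normal behavior; flag deviations beyond a z-score threshold.
import statistics

baseline = [120, 132, 128, 119, 125, 130, 122, 127]  # e.g., requests/minute
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
THRESHOLD = 3.0  # tuning this trades false positives against missed attacks

def is_anomalous(observation: float) -> bool:
    z_score = abs(observation - mean) / stdev
    return z_score > THRESHOLD

print(is_anomalous(126))   # False: within normal variation
print(is_anomalous(450))   # True: far outside the baseline
```

Note how the threshold choice embodies the trade-off described above: a lower threshold catches more novel attacks but raises the false-positive rate.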
Which of the following BEST represents the principle of open design?
Disassembly, analysis, or reverse engineering will reveal the security functionality of the computer system.
Algorithms must be protected to ensure the security and interoperability of the designed system.
A knowledgeable user should have limited privileges on the system to prevent their ability to compromise security capabilities.
The security of a mechanism should not depend on the secrecy of its design or implementation.
This is the principle of open design, which states that the security of a system or mechanism should rely on the strength of its key or algorithm, rather than on the obscurity of its design or implementation. This principle is based on the assumption that the adversary has full knowledge of the system or mechanism, and that the security should still hold even if that is the case. The other options are not consistent with the principle of open design, as they either imply that the security depends on hiding or protecting the design or implementation (A and B), or that the user’s knowledge or privileges affect the security (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 109.
Which one of the following effectively obscures network addresses from external exposure when implemented on a firewall or router?
Network Address Translation (NAT)
Application Proxy
Routing Information Protocol (RIP) Version 2
Address Masking
Network Address Translation (NAT) is the most effective method for obscuring network addresses from external exposure when implemented on a firewall or router. NAT is a technique that allows a device, such as a firewall or a router, to modify the source or destination IP address of a packet as it passes through the device. NAT can be used to hide the internal IP addresses of a network from the external network, such as the internet, by replacing them with a public IP address. This can enhance the security and privacy of the network, as well as conserve the limited IPv4 address space. Application proxy, RIP version 2, and address masking are not methods for obscuring network addresses from external exposure, as they are either related to different functions or not implemented on a firewall or router. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 4, page 196; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 413.
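A toy Python sketch of source NAT (addresses and ports are illustrative; real NAT happens in the network stack of the firewall or router): internal address/port pairs are rewritten to a single public address, and a translation table maps replies back.

```python
# Toy source NAT: hide private addresses behind one public IP, keeping a
# translation table so inbound replies can be mapped back.
PUBLIC_IP = "203.0.113.5"

nat_table = {}     # public_port -> (private_ip, private_port)
next_port = 40000

def translate_outbound(private_ip: str, private_port: int):
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port   # what the external network sees

def translate_inbound(public_port: int):
    return nat_table[public_port]   # recover the hidden internal address

print(translate_outbound("192.168.1.10", 51515))  # ('203.0.113.5', 40000)
print(translate_inbound(40000))                   # ('192.168.1.10', 51515)
```

The external network only ever sees 203.0.113.5, which is exactly the obscuring effect the answer describes.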
An external attacker has compromised an organization's network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker's ability to gain further information?
Implement packet filtering on the network firewalls
Require strong authentication for administrators
Install Host Based Intrusion Detection Systems (HIDS)
Implement logical network segmentation at the switches
The most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information is to implement logical network segmentation at the switches. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, role, or access level. This way, the organization can isolate the traffic and data of different segments, and limit the exposure and impact of an attack. If the attacker has installed a sniffer onto an inside computer, logical network segmentation can prevent the sniffer from capturing the traffic and data of other segments, thus reducing the information leakage. The other options are not as effective as logical network segmentation, as they either do not prevent the sniffer from capturing the traffic and data (A and B), or do not detect or stop the attack (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 163; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 173.
Why must all users be positively identified prior to using multi-user computers?
To provide access to system privileges
To provide access to the operating system
To ensure that unauthorized persons cannot access the computers
To ensure that management knows what users are currently logged on
The main reason why all users must be positively identified prior to using multi-user computers is to ensure that unauthorized persons cannot access the computers. Positive identification is the process of verifying the identity of a user or a device before granting access to a system or a resource. Positive identification can be achieved by using one or more factors of authentication, such as something the user knows, has, or is. Positive identification can enhance the security and accountability of the system, and prevent unauthorized or malicious access. Providing access to system privileges, providing access to the operating system, and ensuring that management knows what users are currently logged on are not the primary reasons why all users must be positively identified prior to using multi-user computers, as they are more related to the functionality or administration of the system, rather than the security. References: CISSP For Dummies, 7th Edition, Chapter 4, page 89.
Which of the following is a physical security control that protects Automated Teller Machines (ATM) from skimming?
Anti-tampering
Secure card reader
Radio Frequency (RF) scanner
Intrusion Prevention System (IPS)
A secure card reader is a physical security control that protects ATMs from skimming, which is a type of fraud where a device is attached to the card slot of an ATM to capture the data from the magnetic stripe of the card. A secure card reader can prevent skimming by encrypting the data at the point of entry, making it unreadable by the skimming device. Anti-tampering, RF scanner, and IPS are not physical security controls that protect ATMs from skimming, as they do not prevent the capture of the card data by the skimming device. References: CISSP For Dummies, 7th Edition, Chapter 9, page 249.
A software scanner identifies a region within a binary image having high entropy. What does this MOST likely indicate?
Encryption routines
Random number generator
Obfuscated code
Botnet command and control
Obfuscated code is a type of code that is deliberately written or modified to make it difficult to understand or reverse engineer. Obfuscation techniques can include changing variable names, removing comments, adding irrelevant code, or encrypting parts of the code. Obfuscated code can have high entropy, which means that it has a high degree of randomness or unpredictability. A software scanner can identify a region within a binary image having high entropy as a possible indication of obfuscated code. Encryption routines, random number generators, and botnet command and control are not necessarily related to obfuscated code, and may not have high entropy. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 467; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 508.
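A minimal Python sketch of how such a scanner measures entropy (an illustrative implementation, not the method of any particular tool): compute Shannon entropy over a byte region; values near 8 bits per byte suggest encrypted, compressed, or obfuscated content.

```python
# Shannon entropy of a byte region: 0.0 for constant bytes, up to 8.0 for
# uniformly random bytes. High values flag regions worth investigating.
import math
import os

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    entropy = 0.0
    for c in counts:
        if c:
            p = c / len(data)
            entropy -= p * math.log2(p)
    return entropy

print(shannon_entropy(b"A" * 4096))       # ~0.0: highly regular data
print(shannon_entropy(os.urandom(4096)))  # ~8.0: looks encrypted/obfuscated
```

A real scanner would slide this calculation over fixed-size windows of the binary image and flag windows whose entropy exceeds a threshold.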
A practice that permits the owner of a data object to grant other users access to that object would usually provide
Mandatory Access Control (MAC).
owner-administered control.
owner-dependent access control.
Discretionary Access Control (DAC).
A practice that permits the owner of a data object to grant other users access to that object would usually provide Discretionary Access Control (DAC). DAC is a type of access control that allows the data owner or creator to decide who can access or modify the data object, based on their identity or membership in a group. DAC is implemented using access control lists (ACLs), which specify the permissions or rights of each user or group for each data object. DAC is flexible and easy to implement, but it can also pose a security risk if the data owner grants excessive or inappropriate access to unauthorized or malicious users. Mandatory Access Control (MAC), owner-administered control, and owner-dependent access control are not types of access control that permit the owner of a data object to grant other users access to that object, as they are either based on predefined rules or policies, or not standard access control models at all. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 354.
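A minimal Python sketch of DAC (class and method names are illustrative assumptions): each object carries an ACL that only its owner may modify, and access checks consult that ACL.

```python
# Discretionary Access Control: the object's owner decides, at their own
# discretion, which other users appear in the object's ACL.
class DataObject:
    def __init__(self, name: str, owner: str):
        self.name = name
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # owner starts with full rights

    def grant(self, requester: str, user: str, permissions: set):
        if requester != self.owner:            # only the owner may grant
            raise PermissionError("only the owner can modify the ACL")
        self.acl.setdefault(user, set()).update(permissions)

    def check(self, user: str, permission: str) -> bool:
        return permission in self.acl.get(user, set())

doc = DataObject("payroll.xlsx", owner="alice")
doc.grant("alice", "bob", {"read"})   # discretionary: Alice decides
print(doc.check("bob", "read"))       # True
print(doc.check("bob", "write"))      # False
```

Under MAC, by contrast, the grant would be decided by a system-wide policy comparing labels, not by Alice.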
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to what?
Interface with the Public Key Infrastructure (PKI)
Improve the quality of security software
Prevent Denial of Service (DoS) attacks
Establish a secure initial state
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to establish a secure initial state. A TPM is a hardware device that provides cryptographic functions and secure storage for keys, certificates, passwords, and other sensitive data. A TPM can also measure and verify the integrity of the system components, such as the BIOS, boot loader, operating system, and applications, before they are executed. This process is known as trusted boot or measured boot, and it ensures that the system is in a known and trusted state before allowing access to the user or network. A TPM can also enable features such as disk encryption, remote attestation, and platform authentication. References: What is a Trusted Platform Module (TPM)?; Trusted Platform Module (TPM) Fundamentals.
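A minimal Python sketch of the "extend" operation behind measured boot (the component names are illustrative; a real TPM performs this in hardware): each component's hash is folded into a Platform Configuration Register (PCR), so the final PCR value attests to the exact boot sequence.

```python
# TPM-style PCR extend: PCR_new = SHA-256(PCR_old || SHA-256(component)).
# Any change to any measured component yields a different final PCR value.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start at all zeros on reset
for component in [b"BIOS image", b"boot loader", b"OS kernel"]:
    pcr = pcr_extend(pcr, component)

print(pcr.hex())  # compare against the expected value to verify boot state
```

Because the extend operation is one-way and order-sensitive, the system cannot forge a "clean" PCR value after loading a tampered component, which is what establishes the secure initial state.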
An auditor carrying out a compliance audit requests passwords that are encrypted in the system to verify that the passwords are compliant with policy. Which of the following is the BEST response to the auditor?
Provide the encrypted passwords and analysis tools to the auditor for analysis.
Analyze the encrypted passwords for the auditor and show them the results.
Demonstrate that non-compliant passwords cannot be created in the system.
Demonstrate that non-compliant passwords cannot be encrypted in the system.
The best response to the auditor is to demonstrate that the system enforces the password policy and does not allow non-compliant passwords to be created. This way, the auditor can verify the compliance without compromising the confidentiality or integrity of the encrypted passwords. Providing the encrypted passwords and analysis tools to the auditor (A) may expose the passwords to unauthorized access or modification. Analyzing the encrypted passwords for the auditor and showing them the results (B) may not be sufficient to convince the auditor of the compliance, as the results could be manipulated or falsified. Demonstrating that non-compliant passwords cannot be encrypted in the system (D) is not a valid response, as encryption does not depend on the compliance of the passwords. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 241; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 303.
What is the FIRST step in developing a security test and its evaluation?
Determine testing methods
Develop testing procedures
Identify all applicable security requirements
Identify people, processes, and products not in compliance
The first step in developing a security test and its evaluation is to identify all applicable security requirements. Security requirements are the specifications or criteria that define the security objectives, expectations, and needs of the system or network. Security requirements may be derived from various sources, such as business goals, user needs, regulatory standards, contractual obligations, or best practices. Identifying all applicable security requirements is essential to establish the scope, purpose, and criteria of the security test and its evaluation. Determining testing methods, developing testing procedures, and identifying people, processes, and products not in compliance are subsequent steps that should be done after identifying the security requirements, as they depend on the security requirements being defined and agreed upon. References: Security Testing - Overview; Security Testing - Planning.
Which of the following is the BEST mitigation from phishing attacks?
Network activity monitoring
Security awareness training
Corporate policy and procedures
Strong file and directory permissions
Security awareness training is the process of educating users on the potential threats and risks they may face online, and the best practices and behaviors they should adopt to protect themselves and the organization. Security awareness training is the best mitigation for phishing attacks, as it can help users recognize and avoid malicious emails, links, or attachments that may compromise their credentials, data, or devices. Network activity monitoring, corporate policy and procedures, and strong file and directory permissions are also important security measures, but they are not as effective as security awareness training in preventing phishing attacks, as they rely on technical controls rather than human factors. References: CISSP For Dummies, 7th Edition, Chapter 2, page 33.
The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks provide
data integrity.
defense in depth.
data availability.
non-repudiation.
Defense in depth is a security strategy that involves applying multiple layers of protection to a system or network to prevent or mitigate attacks. The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks are examples of defense in depth measures that can enhance the security of the system or network.
A, C, and D are incorrect because they are not the best terms to describe the security strategy. Data integrity is a property of data that ensures its accuracy, consistency, and validity. Data availability is a property of data that ensures its accessibility and usability. Non-repudiation is a property of data that ensures its authenticity and accountability. While these properties are important for security, they are not the same as defense in depth.
Contingency plan exercises are intended to do which of the following?
Train personnel in roles and responsibilities
Validate service level agreements
Train maintenance personnel
Validate operation metrics
Contingency plan exercises are intended to train personnel in roles and responsibilities. Contingency plan exercises are simulated scenarios that test the preparedness and effectiveness of the contingency plan, which is a document that outlines the actions and procedures to be followed in the event of a disruption or disaster. Contingency plan exercises help to train the personnel involved in the contingency plan, such as the incident response team, the recovery team, and the business continuity team, in their roles and responsibilities, such as communication, coordination, decision making, and execution. Contingency plan exercises also help to identify and resolve any issues or gaps in the contingency plan, and to improve the skills and confidence of the personnel. References: Contingency Plan Testing; Contingency Planning Guide for Federal Information Systems.
Which one of the following is the MOST important in designing a biometric access system if it is essential that no one other than authorized individuals are admitted?
False Acceptance Rate (FAR)
False Rejection Rate (FRR)
Crossover Error Rate (CER)
Rejection Error Rate
The most important factor in designing a biometric access system, if it is essential that no one other than authorized individuals is admitted, is the False Acceptance Rate (FAR). FAR is the probability that a biometric system will incorrectly accept an unauthorized user. FAR is a measure of the security or accuracy of the biometric system, and it should be as low as possible to prevent unauthorized access. False Rejection Rate (FRR), Crossover Error Rate (CER), and Rejection Error Rate are not as important as FAR, as they relate to the usability or convenience of the biometric system rather than its security. FRR is the probability that a biometric system will incorrectly reject an authorized user. CER is the point where FAR and FRR are equal, and it is used to compare the performance of different biometric systems. Rejection Error Rate is not a standard biometric performance metric. References: CISSP For Dummies, 7th Edition, Chapter 4, page 95.
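A short worked example in Python (the counts are hypothetical, for illustration only) makes the two rates concrete:

```python
# FAR and FRR computed from hypothetical biometric test counts.
impostor_attempts = 10_000
false_accepts = 3          # impostors incorrectly admitted

genuine_attempts = 10_000
false_rejects = 120        # authorized users incorrectly turned away

far = false_accepts / impostor_attempts   # security-critical metric
frr = false_rejects / genuine_attempts    # convenience metric

print(f"FAR = {far:.4%}")  # 0.0300% -- must be minimized when only
                           # authorized individuals may be admitted
print(f"FRR = {frr:.4%}")  # 1.2000% -- an inconvenience, not a breach
```

Tightening the matching threshold drives FAR down at the cost of a higher FRR; the CER is simply the threshold setting at which the two curves cross.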
The overall goal of a penetration test is to determine a system's
ability to withstand an attack.
capacity management.
error recovery capabilities.
reliability under stress.
A penetration test is a simulated attack on a system or network, performed by authorized testers, to evaluate the security posture and identify vulnerabilities that could be exploited by malicious actors. The overall goal of a penetration test is to determine the system’s ability to withstand an attack, and to provide recommendations for improving the security controls and mitigating the risks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 757; CISSP For Dummies, 7th Edition, Chapter 7, page 233.
Who must approve modifications to an organization's production infrastructure configuration?
Technical management
Change control board
System operations
System users
A change control board (CCB) is a group of stakeholders who are responsible for reviewing, approving, and monitoring changes to an organization’s production infrastructure configuration. A production infrastructure configuration is the set of hardware, software, network, and environmental components that support the operation of an information system. Changes to the production infrastructure configuration can affect the security, performance, availability, and functionality of the system. Therefore, changes must be carefully planned, tested, documented, and authorized before implementation. A CCB ensures that changes are aligned with the organization’s objectives, policies, and standards, and that changes do not introduce any adverse effects or risks to the system or the organization. A CCB is not the same as technical management, system operations, or system users, who may be involved in the change management process, but do not have the authority to approve changes.
Which one of the following describes granularity?
Maximum number of entries available in an Access Control List (ACL)
Fineness to which a trusted system can authenticate users
Number of violations divided by the number of total accesses
Fineness to which an access control system can be adjusted
Granularity is the degree of detail or precision that an access control system can provide. A granular access control system can specify different levels of access for different users, groups, resources, or conditions. For example, a granular firewall can allow or deny traffic based on the source, destination, port, protocol, time, or other criteria.
An organization is designing a large enterprise-wide document repository system. They plan to have several different classification level areas with increasing levels of controls. The BEST way to ensure document confidentiality in the repository is to
encrypt the contents of the repository and document any exceptions to that requirement.
utilize Intrusion Detection System (IDS) set drop connections if too many requests for documents are detected.
keep individuals with access to high security areas from saving those documents into lower security areas.
require individuals with access to the system to sign Non-Disclosure Agreements (NDA).
The best way to ensure document confidentiality in the repository is to encrypt the contents of the repository and document any exceptions to that requirement. Encryption is the process of transforming the information into an unreadable form using a secret key or algorithm. Encryption protects the confidentiality of the information by preventing unauthorized access or disclosure, even if the repository is compromised or breached. Encryption also provides integrity and authenticity of the information by ensuring that it has not been modified or tampered with. Documenting any exceptions to the encryption requirement is also important to justify the reasons and risks for not encrypting certain information, and to apply alternative controls if needed. References: What Is a Document Repository and What Are the Benefits of Using One; What is a document repository and why you should have one.
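A minimal Python sketch of encrypting document contents at rest, using the third-party cryptography package (pip install cryptography); the document text is illustrative, and key management (storage, rotation, access control) is the hard part in practice:

```python
# Encrypt a document before storing it; an attacker who reaches the
# repository sees only ciphertext. Fernet bundles AES encryption with an
# integrity check, so tampering is also detected on decryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, kept in a key vault or HSM
cipher = Fernet(key)

document = b"Q3 financial results - CONFIDENTIAL"
stored = cipher.encrypt(document)    # what the repository actually holds
print(stored[:32], b"...")

recovered = cipher.decrypt(stored)   # authorized retrieval with the key
assert recovered == document
```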
What maintenance activity is responsible for defining, implementing, and testing updates to application systems?
Program change control
Regression testing
Export exception control
User acceptance testing
Program change control is the maintenance activity that is responsible for defining, implementing, and testing updates to application systems. Program change control ensures that changes are authorized, documented, reviewed, tested, and approved before being deployed to the production environment. Program change control also maintains a record of the changes and their impact on the system. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 823; CISSP For Dummies, 7th Edition, Chapter 8, page 263.
A security consultant has been asked to research an organization's legal obligations to protect privacy-related information. What kind of reading material is MOST relevant to this project?
The organization's current security policies concerning privacy issues
Privacy-related regulations enforced by governing bodies applicable to the organization
Privacy best practices published by recognized security standards organizations
Organizational procedures designed to protect privacy information
The most relevant reading material for researching an organization’s legal obligations to protect privacy-related information is the privacy-related regulations enforced by governing bodies applicable to the organization. These regulations define the legal requirements, standards, and penalties for collecting, processing, storing, and disclosing personal or sensitive information of individuals or entities. The organization must comply with these regulations to avoid legal liabilities, fines, or sanctions. The other options are not as relevant as privacy-related regulations, as they either do not reflect the legal obligations of the organization (A and C), or do not apply to all types of privacy-related information (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 22; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 31.
An advantage of link encryption in a communications network is that it
makes key management and distribution easier.
protects data from start to finish through the entire network.
improves the efficiency of the transmission.
encrypts all information, including headers and routing information.
An advantage of link encryption in a communications network is that it encrypts all information, including headers and routing information. Link encryption is a type of encryption that is applied at the data link layer of the OSI model, and encrypts the entire packet or frame as it travels from one node to another. Link encryption can protect the confidentiality and integrity of the data, as well as the identity and location of the nodes. Link encryption does not make key management and distribution easier, as it requires each node to have a separate key for each link. Link encryption does not protect data from start to finish through the entire network, as it only encrypts the data while it is in transit, and decrypts it at each node. Link encryption does not improve the efficiency of the transmission, as it adds overhead and latency to the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 419.
What principle requires that changes to the plaintext affect many parts of the ciphertext?
Diffusion
Encapsulation
Obfuscation
Permutation
Diffusion is the principle that requires that changes to the plaintext affect many parts of the ciphertext. Diffusion is a property of a good encryption algorithm that aims to spread the influence of each plaintext bit over many ciphertext bits, so that a small change in the plaintext results in a large change in the ciphertext. Diffusion increases the security of the encryption by making it harder for an attacker to analyze statistical patterns or correlations between the plaintext and the ciphertext. Encapsulation, obfuscation, and permutation are not principles that require that changes to the plaintext affect many parts of the ciphertext, as they relate to different aspects of encryption or security. References: CISSP For Dummies, 7th Edition, Chapter 3, page 65.
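A minimal Python sketch of diffusion in action, the avalanche effect (uses the third-party cryptography package; AES in ECB mode is chosen purely to isolate a single block for the demonstration, not as a recommended mode):

```python
# Flip a single plaintext bit and count how many ciphertext bits change:
# good diffusion changes roughly half of them.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)

def encrypt_block(block16: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block16) + enc.finalize()

p1 = b"sixteen byte msg"                 # exactly one AES block
p2 = bytes([p1[0] ^ 0x01]) + p1[1:]      # flip one bit of the first byte

c1, c2 = encrypt_block(p1), encrypt_block(p2)
diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
print(f"{diff_bits} of 128 ciphertext bits changed")  # typically ~64
```

Running this repeatedly with fresh keys shows the count clustering around 64 of 128 bits, which is exactly the "small change in, large change out" behavior diffusion requires.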
Which of the following defines the key exchange for Internet Protocol Security (IPSec)?
Secure Sockets Layer (SSL) key exchange
Internet Key Exchange (IKE)
Security Key Exchange (SKE)
Internet Control Message Protocol (ICMP)
Internet Key Exchange (IKE) is a protocol that defines the key exchange for Internet Protocol Security (IPSec). IPSec is a suite of protocols that provides security for IP-based communications, such as encryption, authentication, and integrity. IKE establishes a secure channel between two parties, negotiates the security parameters, and generates the cryptographic keys for IPSec. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 541; CISSP For Dummies, 7th Edition, Chapter 5, page 157.
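IKE derives its keying material from a Diffie-Hellman exchange. A toy Python sketch of the underlying math, with a deliberately small prime for readability; real IKE uses standardized groups with primes of 2048 bits or more, never parameters this small:

```python
# Toy Diffie-Hellman: each side combines the peer's public value with its
# own private value, and both arrive at the same shared secret without
# ever transmitting that secret.
import secrets

p = 0xFFFFFFFB   # small prime (2**32 - 5), for illustration only
g = 5

a = secrets.randbelow(p - 2) + 1   # initiator's private value
b = secrets.randbelow(p - 2) + 1   # responder's private value

A = pow(g, a, p)   # public values, exchanged during IKE negotiation
B = pow(g, b, p)

shared_initiator = pow(B, a, p)
shared_responder = pow(A, b, p)
assert shared_initiator == shared_responder
print("both endpoints derived the same secret:", hex(shared_initiator))
```

In real IKE this shared secret is then fed through key-derivation functions, together with the negotiated parameters, to produce the actual IPSec encryption and integrity keys.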
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should include information such as who collected the evidence, when and where it was collected, how it was handled and stored, and who has had access to it.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
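A minimal Python sketch of the hash-verification step described above (the image file paths are illustrative): hash the original drive image and the working copy; matching digests demonstrate the copy is bit-for-bit identical.

```python
# Integrity verification of a forensic copy: matching SHA-256 digests show
# the working copy is a faithful duplicate of the original image.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # stream in 1 MiB chunks so large disk images fit in memory
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("/evidence/disk_original.img")
working_copy = sha256_of("/evidence/disk_copy.img")
print("match" if original == working_copy
      else "MISMATCH - copy is not faithful")
```

Recording these digests in the chain-of-custody documentation lets any later party re-verify that the analyzed copy still matches the original evidence.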
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet, and can offer benefits such as scalability, reliability, performance, and technical support.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA can include components such as service level indicators, service level objectives, reporting requirements, and penalties or remedies for non-compliance.
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
The other options are not the most probable causes for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, but rather factors that could affect or improve the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or conversion. A BI solution is a system that involves the collection, integration, processing, and presentation of the data and information from various sources, such as the Web hosting solution, to support the decision making and planning of the organization. However, absence of a BI solution is not the most probable cause, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the analysis or usage of those indicators. Inadequate cost modeling is a factor that could affect the ability of the organization to estimate and optimize the cost and value of the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment. A cost model is a tool or a method that helps the organization to calculate and compare the cost and value of the Web hosting solution, and to identify and implement the best or most efficient Web hosting solution. However, inadequate cost modeling is not the most probable cause, because it does not affect the definition or specification of the performance indicators, but rather the estimation or optimization of the cost and value of the Web hosting solution. Improper deployment of the Service-Oriented Architecture (SOA) is a factor that could affect the ability of the organization to design and develop the Web hosting solution, such as the web services, components, or interfaces. An SOA is a software architecture that involves the modularization, standardization, and integration of the software components or services that provide the functionality or logic of the Web hosting solution, and can offer benefits such as reusability, interoperability, flexibility, and scalability of those components or services.
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster. A BCP/DRP can provide benefits such as minimizing downtime and financial losses, prioritizing the recovery of critical functions, and giving personnel clear, pre-approved guidance to follow under pressure.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost-effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are hot sites (fully equipped and operational within hours), warm sites (partially equipped and operational within about a day), cold sites (basic facilities that require installation and configuration of equipment and data before use), and mirror sites (fully redundant, real-time duplicates of the primary site).
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP). A third-party identity service can offer benefits such as single sign-on (SSO), federation across applications, centralized policy enforcement, and reduced administrative overhead.
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. ISCM can provide benefits such as timely detection of security incidents, ongoing assessment of control effectiveness, and better-informed risk decisions.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. An unknown application may have various purposes, some benign, such as an unapproved utility, and some malicious, such as malware, a backdoor, or a botnet command and control agent.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. Forensic analysis can provide benefits such as identifying the purpose and source of the application, preserving evidence for legal or disciplinary action, and informing the response and remediation.
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the forensic analysis is conducted in a safe and controlled environment. Isolating the system from the network can also help to prevent the unknown application from spreading to other systems, exfiltrating data, or receiving commands from a remote attacker, and to preserve the state of the system for analysis.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is a part of a BCP/DRP that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DRP should include components such as recovery objectives (for example, the Recovery Time Objective and Recovery Point Objective), recovery strategies, roles and responsibilities, and recovery procedures.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this can ensure that the DRP is feasible and suitable, and that it can achieve the desired outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies, and determines the optimal one that provides the best value for money. A business objective is a goal or a target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy that is aligned with the cost/benefit analysis and business objectives can help to optimize the use of recovery resources and funds, meet the recovery objectives and requirements, and support the organization’s mission and priorities.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is a part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster. A BCP should include components such as a business impact analysis, recovery strategies, roles and responsibilities, and testing, training, and exercise procedures.
A BCP is considered to be valid when it has been validated by realistic exercises, because this can ensure that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a type of testing, training, and exercising that involves performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack. Realistic exercises can provide benefits such as verifying that the plan works as intended, training personnel in their roles, and identifying gaps or weaknesses before a real disruption occurs.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios. Business continuity testing can provide benefits such as confirming the assumptions in the plans, familiarizing staff with their responsibilities, and uncovering deficiencies that can be corrected before an actual event.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Some of the common types are walkthrough, simulation, parallel, and full interruption tests.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not the types of business continuity tests that include assessment of resilience to internal and external risks without endangering live operations, but rather types that have other objectives or effects. Walkthrough is a type of business continuity test that does not include assessment of resilience to internal and external risks, but rather a review and discussion of the BCP and DRP, without any actual testing or practice. Parallel is a type of business continuity test that does not endanger live operations, but rather maintains them, while activating and operating the alternate site or system. Full interruption is a type of business continuity test that does endanger live operations, by shutting them down and transferring them to the alternate site or system. White box is a software testing approach that relies on knowledge of the internal structure of the code; it is not a type of business continuity test.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining the ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting the security data and information, using various methods and tools. A continuous information security monitoring program can provide several benefits, such as maintaining up-to-date awareness of threats and vulnerabilities, enabling timely detection of and response to security incidents, and supporting risk-based decisions with current and accurate security information.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because it can ensure that the continuous information security monitoring program is holistic and comprehensive, and that it covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of a continuous information security monitoring program: people are the staff and stakeholders who define the monitoring objectives, operate the monitoring tools, and act on the findings; process is the set of policies, procedures, and workflows that govern how the security data is collected, analyzed, reported, and acted upon; and technology is the set of tools and systems, such as sensors, log collectors, and analysis platforms, that automate and support the monitoring activities.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
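To make the event-collection-and-correlation aspect concrete, the following minimal Python sketch flags an account with an unusual burst of failed logins inside a short time window. The events, threshold, and window are hypothetical illustrations, not a prescribed monitoring design.

from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative event stream: (timestamp, user, outcome). All data is invented.
events = [
    (datetime(2024, 1, 1, 9, 0, 0), "alice", "fail"),
    (datetime(2024, 1, 1, 9, 0, 5), "alice", "fail"),
    (datetime(2024, 1, 1, 9, 0, 9), "alice", "fail"),
    (datetime(2024, 1, 1, 9, 5, 0), "bob", "success"),
]

WINDOW = timedelta(minutes=1)   # correlation window (assumed)
THRESHOLD = 3                   # failures per window that count as an anomaly

# Collect failed logins per user, then correlate them in time.
failures = defaultdict(list)
for ts, user, outcome in events:
    if outcome == "fail":
        failures[user].append(ts)

for user, times in failures.items():
    times.sort()
    for start in times:
        burst = [t for t in times if start <= t <= start + WINDOW]
        if len(burst) >= THRESHOLD:
            print(f"anomaly: {user} had {len(burst)} failed logins within one minute")
            break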
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of the IT systems or data. An incident response is a process that involves detecting, analyzing, containing, eradicating, recovering, and learning from an incident, using various methods and tools. An incident response can provide several benefits, such as minimizing the damage and disruption caused by the incident, reducing the recovery time and cost, preserving the evidence needed for investigation, and preventing the recurrence of similar incidents.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it can ensure that the incident is verified and validated, and that the incident response is initiated and escalated. A symptom is a sign or an indication that an incident may have occurred or is occurring, such as an alert, a log, or a report. Investigating all symptoms to confirm the incident involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or the external parties, and determining whether an incident has actually happened or is happening, and how serious or urgent it is. Investigating all symptoms to confirm the incident can also help to avoid wasting resources on false positives, to determine the scope and severity of the incident, and to prioritize and escalate the response accordingly.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the root cause and source of the incident are identified and analyzed, and that the incident response is directed and focused. Determining the cause of the incident involves examining and testing the affected IT systems and data, and tracing and tracking the origin and path of the incident, using various techniques and tools, such as forensics, malware analysis, or reverse engineering. Determining the cause of the incident can also help to select the appropriate containment, eradication, and recovery measures, and to prevent the incident from recurring.
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it can ensure that the system is isolated and protected from any external or internal influences or interferences, and that the incident response is conducted in a safe and controlled environment. Disconnecting the system involved from the network can also help to stop the incident from spreading to other systems, and to preserve the state and evidence of the affected system.
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it can ensure that the incident is confined and restricted, and that the incident response is continued and maintained. Isolating and containing the system involved involves applying and enforcing the appropriate security measures and controls to limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys. Isolating and containing the system involved can also help to limit further damage while eradication and recovery proceed, and to keep the incident from escalating.
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. Change management can provide several benefits, such as reducing the risk of unplanned outages or security incidents caused by changes, improving the stability and reliability of the environment, and providing an audit trail of what was changed, by whom, and why.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
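As a minimal illustration of how logging supports individual accountability, the hypothetical Python sketch below records who did what and when on an asset. The user names and actions are invented for the example.

from datetime import datetime, timezone

AUDIT_LOG = []  # append-only record tying every action to an individual

def access_asset(user: str, action: str) -> None:
    # In a real system, `user` would come from an authenticated session,
    # so each entry is attributable to exactly one person.
    AUDIT_LOG.append({
        "who": user,
        "what": action,
        "when": datetime.now(timezone.utc).isoformat(),
    })

access_asset("alice", "read")
access_asset("bob", "update")
for entry in AUDIT_LOG:
    print(entry["when"], entry["who"], entry["what"])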
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
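As a small illustration of reviewing classifications over time, the hypothetical Python sketch below flags records whose last classification date is older than a review interval. The records, dates, and interval are invented for the example.

from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual review cycle

# Invented records: each carries its label and the date it was last classified.
records = [
    {"name": "pricing-2019.xlsx", "classified_on": date(2019, 5, 1), "label": "confidential"},
    {"name": "press-release.docx", "classified_on": date(2024, 1, 10), "label": "public"},
]

today = date(2024, 6, 1)
for record in records:
    if today - record["classified_on"] >= REVIEW_INTERVAL:
        print(f"review classification of {record['name']} (currently {record['label']})")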
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works as follows: the reader generates a fresh random challenge and sends it to the card; the card signs the challenge with its private CAK and returns the signature together with its certificate; the reader validates the certificate, verifies the signature with the card’s public key, and grants access only if the verification succeeds. The exchange can also run in the reverse direction so that the card verifies the reader’s identity.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
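As a rough illustration of this idea (not the exact PIV wire protocol), the following Python sketch uses the cryptography library to perform an asymmetric challenge-response with ECDSA. The key type and message flow are assumptions chosen for clarity.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Card side: the private CAK is generated on, and never leaves, the card.
card_private_key = ec.generate_private_key(ec.SECP256R1())
card_public_key = card_private_key.public_key()  # distributed via the card certificate

# Reader side: issue a fresh random challenge for every transaction.
challenge = os.urandom(32)

# Card side: sign the challenge with the private CAK.
signature = card_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Reader side: verify the signature against the certified public key.
try:
    card_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("card is authentic")
except InvalidSignature:
    print("possible clone: signature did not verify")

Because the challenge is random and fresh each time, a recorded signature cannot be replayed, and a cloned card without the private key cannot produce a valid one.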
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as enforcing consistent and restricted access to the data, preventing users from constructing ad hoc queries that expose sensitive details, and simplifying auditing because the set of allowed queries is known in advance.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
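The following minimal Python sketch, using the standard library’s sqlite3 module with an invented table and salary data, shows the idea: statistical users can run only the stored aggregate query, so no ad hoc query can return an individual salary.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Alice", "Engineering", 95000), ("Bob", "Engineering", 105000),
     ("Carol", "Sales", 70000)],
)

# The only query exposed to statistical users; they cannot submit their own SQL.
PREDEFINED_QUERY = """
    SELECT department, AVG(salary) AS avg_salary, COUNT(*) AS headcount
    FROM employees
    GROUP BY department
"""

for department, avg_salary, headcount in conn.execute(PREDEFINED_QUERY):
    print(department, round(avg_salary, 2), headcount)

Note that an aggregate can still leak an individual value when a group is very small (a department of one), which is why real deployments often pair predefined queries with a minimum query-set size.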
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface and the potential impact of a compromise, limiting the opportunity for misuse or abuse of access rights, and simplifying the auditing and accountability of user activity.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
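A minimal Python sketch of this idea, with invented users and project names: clearance alone does not grant access, and need-to-know (here modeled as project membership) must also hold.

# Invented users and projects: equal clearance, different need-to-know.
PROJECT_MEMBERS = {"apollo": {"alice", "bob"}, "zeus": {"carol"}}

def can_read(user: str, clearance: str, project: str, required_clearance: str = "secret") -> bool:
    has_clearance = clearance == required_clearance
    has_need_to_know = user in PROJECT_MEMBERS.get(project, set())
    return has_clearance and has_need_to_know  # least privilege: both must hold

print(can_read("alice", "secret", "apollo"))  # True
print(can_read("carol", "secret", "apollo"))  # False: same clearance, no need-to-know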
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card credential, and that are stored on the mobile device. A derived credential works as follows: the user first authenticates with the smart card and PIN to prove possession of the original credential; a new key pair and certificate, bound to the same identity, are then generated and provisioned into the secure storage of the mobile device; thereafter, the user unlocks the derived credential on the device, typically with a PIN or a biometric, and uses it in place of the smart card for authentication and encryption.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
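In practice (for example, under NIST SP 800-157), "derived" refers to leveraging the smart-card identity proofing to issue a new credential, rather than mathematically deriving key bits from the old private key. The hypothetical Python sketch below, using the cryptography library, shows that flow in miniature; the nonce and key choices are illustrative assumptions.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Existing smart-card identity; its private key never leaves the card.
card_key = ec.generate_private_key(ec.SECP256R1())

# Step 1: the user proves possession of the smart-card key by signing a nonce.
nonce = b"issuer-provided-nonce"  # illustrative value
proof = card_key.sign(nonce, ec.ECDSA(hashes.SHA256()))
card_key.public_key().verify(proof, nonce, ec.ECDSA(hashes.SHA256()))  # raises on failure

# Step 2: on success, a NEW key pair is generated for the mobile device; the
# issuing CA would bind its public key to the same identity in a new certificate.
device_key = ec.generate_private_key(ec.SECP256R1())
print("derived credential key pair provisioned to the device")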
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as single sign-on across organizational boundaries, reduced administrative overhead for account creation and maintenance, an improved user experience, and consistent enforcement of access policies across the participating organizations.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal, who is the user requesting access; the identity provider (IdP), which authenticates the user and issues the assertions; and the service provider (SP), which consumes the assertions and grants or denies access to the resource or service.
SAML works as follows: the user requests a resource from the service provider; the service provider redirects the user to the identity provider with an authentication request; the identity provider authenticates the user and returns a signed assertion containing the user’s identity, attributes, and entitlements; and the service provider validates the assertion and grants or denies access accordingly.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
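To give a feel for what an assertion conveys, the toy Python sketch below builds a heavily simplified, unsigned SAML-style assertion with the standard library’s ElementTree. The issuer, subject, and attribute values are invented, and real SAML 2.0 assertions are XML-signed and schema-validated.

import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"
assertion = ET.Element(f"{{{NS}}}Assertion")
ET.SubElement(assertion, f"{{{NS}}}Issuer").text = "https://idp.manufacturer.example"

subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
ET.SubElement(subject, f"{{{NS}}}NameID").text = "alice@manufacturer.example"

statement = ET.SubElement(assertion, f"{{{NS}}}AttributeStatement")
attribute = ET.SubElement(statement, f"{{{NS}}}Attribute", Name="role")
ET.SubElement(attribute, f"{{{NS}}}AttributeValue").text = "supplier-portal-user"

print(ET.tostring(assertion, encoding="unicode"))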
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.