CAS-005 CompTIA SecurityX Exam Practice Questions
Important Note:
For full access to the complete question bank and topic-wise explanations, visit:
CertQuestionsBank.com
FB page: https://www.facebook.com/certquestionsbank
Some sample CAS-005 exam questions are shared below.
1.A security administrator needs to automate alerting. The server generates structured log files that
need to be parsed to determine whether an alarm has been triggered.
Given the following code function:
Which of the following is most likely the log input that the code will parse?
A)
B)
C)
D)
A. Option A
B. Option B
C. Option C
D. Option D
Answer: A
Explanation:
The code function provided in the question appears to parse JSON-formatted logs to check for an
alarm state. Option A is a JSON document that matches the structure the code expects. The presence
of the "error_log" and "InAlarmState" keys suggests that this is the correct input format.
Reference: CompTIA SecurityX Study Guide, Chapter on Log Management and Automation, Section
on Parsing Structured Logs.
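The original code function is not reproduced here, so the following is a minimal sketch of how such a parser might look, assuming a JSON log entry that nests an "InAlarmState" flag under an "error_log" key as described above; the field names and alert logic are illustrative rather than the exam's actual code.

```python
import json

def check_alarm(log_line: str) -> bool:
    """Parse a structured JSON log entry and report whether an alarm is active.

    Assumes the entry nests an "InAlarmState" flag under an "error_log" key,
    as suggested by the explanation above; real field names may differ.
    """
    try:
        entry = json.loads(log_line)
    except json.JSONDecodeError:
        return False  # not valid JSON, so nothing to alert on
    error_log = entry.get("error_log")
    if not isinstance(error_log, dict):
        return False
    return bool(error_log.get("InAlarmState", False))

# Example usage with a hypothetical log entry
sample = '{"error_log": {"InAlarmState": true, "message": "disk full"}}'
if check_alarm(sample):
    print("Alarm triggered - send alert")
```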
2.An organization that performs real-time financial processing is implementing a new backup solution.
Given the following business requirements:
* The backup solution must reduce the risk for potential backup compromise
* The backup solution must be resilient to a ransomware attack.
* The time to restore from backups is less important than the backup data integrity
* Multiple copies of production data must be maintained
Which of the following backup strategies best meets these requirements?
A. Creating a secondary, immutable storage array and updating it with live data on a continuous basis
B. Utilizing two connected storage arrays and ensuring the arrays constantly sync
C. Enabling remote journaling on the databases to ensure real-time transactions are mirrored
D. Setting up anti-tampering on the databases to ensure data cannot be changed unintentionally
Answer: A
Explanation:
A. Creating a secondary, immutable storage array and updating it with live data on a continuous
basis: An immutable storage array ensures that data, once written, cannot be altered or deleted. This
greatly reduces the risk of backup compromise and provides resilience against ransomware attacks,
as the ransomware cannot modify or delete the backup data. Maintaining multiple copies of
production data with an immutable storage solution ensures data integrity and compliance with the
requirement for multiple copies.
Other options:
B. Utilizing two connected storage arrays and ensuring the arrays constantly sync: While this ensures
data redundancy, it does not provide protection against ransomware attacks, as both arrays could be
compromised simultaneously.
C. Enabling remote journaling on the databases: This ensures real-time transaction mirroring but
does not address the requirement for reducing the risk of backup compromise or resilience to
ransomware.
D. Setting up anti-tampering on the databases: While this helps ensure data integrity, it does not
provide a comprehensive backup solution that meets all the specified requirements.
Reference: CompTIA Security+ Study Guide
NIST SP 800-209, "Security Guidelines for Storage Infrastructure"
"Immutable Backup Architecture" by Veeam
4.A global manufacturing company has an internal application that is critical to making products. This
application cannot be updated and must be available in the production area. A security architect is
implementing security for the application.
Which of the following best describes the action the architect should take?
A. Disallow wireless access to the application.
B. Deploy intrusion detection capabilities using a network tap
C. Create an acceptable use policy for the use of the application
D. Create a separate network for users who need access to the application
Answer: D
Explanation:
Creating a separate network for users who need access to the application is the best action to secure
an internal application that is critical to the production area and cannot be updated.
Why Separate Network?
Network Segmentation: Isolates the critical application from the rest of the network, reducing the risk
of compromise and limiting the potential impact of any security incidents.
Controlled Access: Ensures that only authorized users have access to the application, enhancing
security and reducing the attack surface.
Minimized Risk: Segmentation helps in protecting the application from vulnerabilities that could be
exploited from other parts of the network.
Other options, while beneficial, do not provide the same level of security for a critical application:
A. Disallow wireless access: Useful but does not provide comprehensive protection.
B. Deploy intrusion detection capabilities using a network tap: Enhances monitoring but does not
provide the same level of isolation and control.
C. Create an acceptable use policy: Important for governance but does not provide technical security
controls.
Reference: CompTIA SecurityX Study Guide
NIST Special Publication 800-125, "Guide to Security for Full Virtualization Technologies"
"Network Segmentation Best Practices," Cisco Documentation
5.A company recently experienced a ransomware attack. Although the company performs systems
and data backup on a schedule that aligns with its RPO (Recovery Point Objective) requirements, the
backup administrator could not recover critical systems and data from its offline backups to meet the
RPO. Eventually, the systems and data were restored with information that was six months outside of
RPO requirements.
Which of the following actions should the company take to reduce the risk of a similar attack?
A. Encrypt and label the backup tapes with the appropriate retention schedule before they are sent to
the off-site location.
B. Implement a business continuity process that includes reverting manual business processes.
C. Perform regular disaster recovery testing of IT and non-IT systems and processes.
D. Carry out a tabletop exercise to update and verify the RACI matrix with IT and critical business
functions.
Answer: C
Explanation:
Comprehensive and Detailed
Understanding the Ransomware Issue:
The key issue here is that backups were not recoverable within the required RPO timeframe.
This means the organization did not properly test its backup and disaster recovery (DR) processes.
To prevent this from happening again, regular disaster recovery testing is essential.
Why Option C is Correct:
Disaster recovery testing ensures that backups are functional and can meet business continuity
needs.
Frequent DR testing allows organizations to identify and fix gaps in recovery strategies.
Regular testing ensures that recovery meets the RPO & RTO (Recovery Time Objective)
requirements.
Why Other Options Are Incorrect:
A (Encrypt & label backup tapes): While encryption is important, it does not address the failure to
meet RPO requirements.
B (Reverting to manual business processes): While a manual continuity plan is good for resilience, it
does not resolve the backup and recovery failure.
D (Tabletop exercise & RACI matrix): A tabletop exercise is a planning activity, but it does not involve
actual recovery testing.
Reference: CompTIA SecurityX CAS-005 Official Study Guide: Disaster Recovery & Business
Continuity Planning
NIST SP 800-34: Contingency Planning Guide for Information Systems
ISO 22301: Business Continuity Management Standards
7.A company migrating to a remote work model requires that company-owned devices connect to a
VPN before logging in to the device itself. The VPN gateway requires that a specific key extension is
deployed to the machine certificates in the internal PKI.
Which of the following best explains this requirement?
A. The certificate is an additional factor to meet regulatory MFA requirements for VPN access.
B. The VPN client selected the certificate with the correct key usage without user interaction.
C. The internal PKI certificate deployment allows for Wi-Fi connectivity before logging in to other
systems.
D. The server connection uses SSL VPN, which uses certificates for secure communication.
Answer: B
Explanation:
Comprehensive and Detailed
This scenario describes an enterprise VPN setup that requires machine authentication before a user
logs in. The best explanation for this requirement is that the VPN client selects the appropriate
certificate automatically based on the key extension in the machine certificate.
Understanding the Key Extension Requirement:
PKI (Public Key Infrastructure) issues machine certificates that include specific key usages such as
Client Authentication or IPSec IKE Intermediate.
Key usage extensions define how a certificate can be used, ensuring that only valid certificates are
selected by the VPN client.
Why Option B is Correct:
The VPN automatically selects the correct machine certificate with the appropriate key extension. The
process occurs without user intervention, ensuring seamless VPN authentication before login.
Why Other Options Are Incorrect:
A (MFA requirement): Certificates used in this scenario are for machine authentication, not user MFA.
MFA typically involves user credentials plus a second factor (like OTPs or biometrics), which is not
applicable here.
C (Wi-Fi connectivity before login): This refers to pre-logon networking, which is a separate concept
where devices authenticate to a Wi-Fi network before login, usually via 802.1X EAP-TLS. However,
this question specifically mentions VPN authentication, not Wi-Fi authentication.
D (SSL VPN with certificates): While SSL VPNs do use certificates, this scenario involves machine
certificates issued by an internal PKI, which are commonly used in IPSec VPNs, not SSL VPNs.
Reference: CompTIA SecurityX CAS-005 Official Study Guide: Section on Machine Certificate
Authentication in VPNs
NIST SP 800-53: Guidelines on authentication mechanisms
RFC 5280: Internet X.509 Public Key Infrastructure Certificate and CRL Profile
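As an illustration of the key-usage selection described above, the sketch below uses the third-party Python cryptography library to read the Extended Key Usage extension from a machine certificate. The file name is hypothetical, and the specific extensions a VPN gateway requires depend on its policy.

```python
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

# Load the machine certificate (file name is hypothetical).
with open("machine_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# Read the Extended Key Usage extension, if present.
try:
    eku = cert.extensions.get_extension_for_oid(ExtensionOID.EXTENDED_KEY_USAGE).value
except x509.ExtensionNotFound:
    eku = []

if ExtendedKeyUsageOID.CLIENT_AUTH in eku:
    print("Certificate carries the Client Authentication EKU - usable for machine VPN auth")
else:
    print("Certificate lacks the required key usage extension")
```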
9.A company wants to invest in research capabilities with the goal of operationalizing the research
output.
Which of the following is the best option for a security architect to recommend?
A. Dark web monitoring
B. Threat intelligence platform
C. Honeypots
D. Continuous adversary emulation
Answer: B
Explanation:
Investing in a threat intelligence platform is the best option for a company looking to operationalize
research output. A threat intelligence platform helps in collecting, processing, and analyzing threat
data to provide actionable insights. These platforms integrate data from various sources, including
dark web monitoring, honeypots, and other security tools, to offer a comprehensive view of the threat
landscape.
Why a Threat Intelligence Platform?
Data Integration: It consolidates data from multiple sources, including dark web monitoring and
honeypots, making it easier to analyze and derive actionable insights.
Actionable Insights: Provides real-time alerts and reports on potential threats, helping the organization
take proactive measures.
Operational Efficiency: Streamlines the process of threat detection and response, allowing the
security team to focus on critical issues.
Research and Development: Facilitates the operationalization of research output by providing a
platform for continuous monitoring and analysis of emerging threats.
Other options, while valuable, do not offer the same level of integration and operationalization
capabilities:
A. Dark web monitoring: Useful for specific threat intelligence but lacks comprehensive
operationalization.
C. Honeypots: Effective for detecting and analyzing specific attack vectors but not for broader threat
intelligence.
D. Continuous adversary emulation: Important for testing defenses but not for integrating and
operationalizing threat intelligence.
Reference: CompTIA SecurityX Study Guide
"Threat Intelligence Platforms," Gartner Research
NIST Special Publication 800-150, "Guide to Cyber Threat Information Sharing"
10.A company detects suspicious activity associated with external connections. Security detection
tools are unable to categorize this activity.
Which of the following is the best solution to help the company overcome this challenge?
A. Implement an interactive honeypot
B. Map network traffic to known IoCs.
C. Monitor the dark web
D. Implement UEBA
Answer: D
Explanation:
User and Entity Behavior Analytics (UEBA) is the best solution to help the company overcome
challenges associated with suspicious activity that cannot be categorized by traditional detection
tools. UEBA uses advanced analytics to establish baselines of normal behavior for users and entities
within the network. It then identifies deviations from these baselines, which may indicate malicious
activity. This approach is particularly effective for detecting unknown threats and sophisticated attacks
that do not match known indicators of compromise (IoCs).
Reference: CompTIA SecurityX Study Guide, Chapter on Advanced Threat Detection and Mitigation,
Section on User and Entity Behavior Analytics (UEBA).
11.During a forensic review of a cybersecurity incident, a security engineer collected a portion of the
payload used by an attacker on a compromised web server.
Given the following portion of the code:
12.Recent reports indicate that a software tool is being exploited. Attackers were able to bypass user
access controls and load a database. A security analyst needs to find the vulnerability and
recommend a mitigation.
The analyst generates the following output:
Which of the following would the analyst most likely recommend?
A. Installing appropriate EDR tools to block pass-the-hash attempts
B. Adding additional time to software development to perform fuzz testing
C. Removing hard coded credentials from the source code
D. Not allowing users to change their local passwords
Answer: C
Explanation:
The output indicates that the software tool contains hard-coded credentials, which attackers can
exploit to bypass user access controls and load the database. The most likely recommendation is to
remove hard-coded credentials from the source code.
Here’s why:
Security Best Practices: Hard-coded credentials are a significant security risk because they can be
easily discovered through reverse engineering or simple inspection of the code. Removing them
reduces the risk of unauthorized access.
Credential Management: Credentials should be managed securely using environment variables,
secure vaults, or configuration management tools that provide encryption and access controls.
Mitigation of Exploits: By eliminating hard-coded credentials, the organization can prevent attackers
from easily bypassing authentication mechanisms and gaining unauthorized access to sensitive
systems.
Reference: CompTIA Security+ SY0-601 Study Guide by Mike Chapple and David Seidl
OWASP Top Ten: Insecure Design
NIST Special Publication 800-53: Security and Privacy Controls for Information Systems and
Organizations
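A minimal before-and-after sketch of the recommended mitigation is shown below, assuming the credentials are moved into environment variables; the variable names and connection helper are hypothetical, and a secrets-vault SDK could be substituted for the environment lookup.

```python
import os

# Vulnerable pattern: credentials embedded in source code (what the analyst found)
# DB_USER = "admin"
# DB_PASSWORD = "SuperSecret123"

# Remediated pattern: credentials supplied at runtime from the environment
# (or a secrets vault), so they never appear in the repository.
# Direct indexing fails fast if the variable is not provided.
DB_USER = os.environ["DB_USER"]
DB_PASSWORD = os.environ["DB_PASSWORD"]

def build_connection_string(host: str, database: str) -> str:
    """Assemble a connection string without hard-coding secrets (illustrative only)."""
    return f"postgresql://{DB_USER}:{DB_PASSWORD}@{host}/{database}"
```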
13.An organization has been using self-managed encryption keys rather than the free keys managed
by the cloud provider. The Chief Information Security Officer (CISO) reviews the monthly bill and
realizes the self-managed keys are more costly than anticipated.
Which of the following should the CISO recommend to reduce costs while maintaining a strong
security posture?
A. Utilize an on-premises HSM to locally manage keys.
B. Adjust the configuration for cloud provider keys on data that is classified as public.
C. Begin using cloud-managed keys on all new resources deployed in the cloud.
D. Extend the key rotation period to one year so that the cloud provider can use cached keys.
Answer: B
Explanation:
Comprehensive and Detailed Step by Step
Understanding the Scenario: The organization is using customer-managed encryption keys in the
cloud, which is more expensive than using the cloud provider's free managed keys. The CISO needs
to find a way to reduce costs without significantly weakening the security posture.
Analyzing the Answer Choices:
A. Utilize an on-premises HSM to locally manage keys: While on-premises HSMs offer strong
security, they introduce additional costs and complexity (procurement, maintenance, etc.). This option
is unlikely to reduce costs compared to cloud-based key management.
B. Adjust the configuration for cloud provider keys on data that is classified as public: This is the most
practical and cost-effective approach. Data classified as public doesn't require the same level of
protection as sensitive data. Using the cloud provider's free managed keys for public data can
significantly reduce costs without compromising security, as the data is intended to be publicly
accessible anyway.
Reference: This aligns with the principle of applying security controls based on data classification and
risk assessment, a key concept in CASP+.
C. Begin using cloud-managed keys on all new resources deployed in the cloud: While this would
reduce costs, it's a broad approach that doesn't consider the sensitivity of the data. Applying
cloud-managed keys to sensitive data might not be acceptable from a security standpoint.
D. Extend the key rotation period to one year so that the cloud provider can use cached keys:
Extending the key rotation period weakens security. Frequent key rotation is a security best practice
to limit the impact of a potential key compromise.
Reference: Key rotation is a fundamental security control, and reducing its frequency goes against
CASP+ principles related to cryptography and risk management.
Why B is the Correct Answer:
Risk-Based Approach: Using cloud-provider-managed keys for public data is a reasonable risk-based
decision. Public data, by definition, is not confidential.
Cost Optimization: This directly addresses the CISO's concern about cost, as cloud-provider-managed
keys are often free or significantly cheaper.
Security Balance: It maintains a strong security posture for sensitive data by continuing to use
customer-managed keys where appropriate, while optimizing costs for less sensitive data. CASP+
Relevance: This approach demonstrates an understanding of risk management, data classification,
and cost-benefit analysis in security decision-making, all of which are important topics in CASP+.
Elaboration on Data Classification:
Data Classification Policy: Organizations should have a clear data classification policy that defines
different levels of data sensitivity (e.g., public, internal, confidential, restricted).
Security Controls Based on Classification: Security controls, including encryption key management,
should be applied based on the data's classification level.
Cost-Benefit Analysis: Data classification helps organizations make informed decisions about where
to invest in stronger security controls and where cost optimization is acceptable.
In conclusion, adjusting the configuration to use cloud-provider-managed keys for data classified as
public is the most effective way to reduce costs while maintaining a strong security posture. It's a
practical, risk-based approach that aligns with data classification principles and cost-benefit
considerations, all of which are important concepts covered in the CASP+ exam objectives.
14. A natural disaster may disrupt operations at Site A, which would then cause an evacuation. Users
are unable to log into the domain from their workstations after relocating to Site B.
15.A security engineer needs to secure the OT environment based on the following requirements:
• Isolate the OT network segment.
• Restrict Internet access.
• Apply security updates to workstations.
• Provide remote access to third-party vendors.
Which of the following design strategies should the engineer implement to best meet these
requirements?
A. Deploy a jump box on the third-party network to access the OT environment and provide updates
using a physical delivery method on the workstations
B. Implement a bastion host in the OT network with security tools in place to monitor access and use
a dedicated update server for the workstations.
C. Enable outbound internet access on the OT firewall to any destination IP address and use the
centralized update server for the workstations
D. Create a staging environment on the OT network for the third-party vendor to access and enable
automatic updates on the workstations.
Answer: B
Explanation:
To secure the Operational Technology (OT) environment based on the given requirements, the best
approach is to implement a bastion host in the OT network. The bastion host serves as a secure entry
point for remote access, allowing third-party vendors to connect while being monitored by security
tools. Using a dedicated update server for workstations ensures that security updates are applied in a
controlled manner without direct internet access.
Reference: CompTIA SecurityX Study Guide: Recommends the use of bastion hosts and dedicated
update servers for securing OT environments.
NIST Special Publication 800-82, "Guide to Industrial Control Systems (ICS) Security": Advises on
isolating OT networks and using secure remote access methods.
"Industrial Network Security" by Eric
D. Knapp and Joel Thomas Langill: Discusses strategies for securing OT networks, including the use
of bastion hosts and update servers.
16.Which of the following best describes the challenges associated with widespread adoption of
homomorphic encryption techniques?
A. Incomplete mathematical primitives
B. No use cases to drive adoption
C. Quantum computers not yet capable
D. Insufficient coprocessor support
Answer: D
Explanation:
Homomorphic encryption allows computations to be performed on encrypted data without decrypting
it, providing strong privacy guarantees.
However, the adoption of homomorphic encryption is challenging due to several factors:
A. Incomplete mathematical primitives: This is not the primary barrier as the theoretical foundations of
homomorphic encryption are well-developed.
B. No use cases to drive adoption: There are several compelling use cases for homomorphic
encryption, especially in privacy-sensitive fields like healthcare and finance.
C. Quantum computers not yet capable: Quantum computing is not directly related to the challenges
of adopting homomorphic encryption.
D. Insufficient coprocessor support: The computational overhead of homomorphic encryption is
significant, requiring substantial processing power. Current general-purpose processors are not
optimized for the intensive computations required by homomorphic encryption, limiting its practical
deployment. Specialized hardware or coprocessors designed to handle these computations more
efficiently are not yet widely available.
Reference: CompTIA Security+ Study Guide
"Homomorphic Encryption: Applications and Challenges" by Rivest et al.
NIST, "Report on Post-Quantum Cryptography"
17.A software company deployed a new application based on its internal code repository. Several
customers are reporting anti-malware alerts on workstations used to test the application.
Which of the following is the most likely cause of the alerts?
A. Misconfigured code commit
B. Unsecure bundled libraries
C. Invalid code signing certificate
D. Data leakage
Answer: B
Explanation:
The most likely cause of the anti-malware alerts on customer workstations is unsecure bundled
libraries. When developing and deploying new applications, it is common for developers to use
third-party libraries. If these libraries are not properly vetted for security, they can introduce vulnerabilities
or malicious code.
Why Unsecure Bundled Libraries?
Third-Party Risks: Using libraries that are not secure can lead to malware infections if the libraries
contain malicious code or vulnerabilities.
Code Dependencies: Libraries may have dependencies that are not secure, leading to potential
security risks.
Common Issue: This is a frequent issue in software development where libraries are used for
convenience but not properly vetted for security.
Other options, while relevant, are less likely to cause widespread anti-malware alerts:
A. Misconfigured code commit: Could lead to issues but less likely to trigger anti-malware alerts.
C. Invalid code signing certificate: Would lead to trust issues but not typically anti-malware alerts.
D. Data leakage: Relevant for privacy concerns but not directly related to anti-malware alerts.
Reference: CompTIA SecurityX Study Guide
"Securing Open Source Libraries," OWASP
"Managing Third-Party Software Security Risks," Gartner Research
18.A security analyst discovered requests associated with IP addresses known for both legitimate
and bot-related traffic.
Which of the following should the analyst use to determine whether the requests are malicious?
A. User-agent string
B. Byte length of the request
C. Web application headers
D. HTML encoding field
Answer: A
Explanation:
The user-agent string can provide valuable information to distinguish between legitimate and
bot-related traffic. It contains details about the browser, device, and sometimes the operating system of
the client making the request.
Why Use User-Agent String?
Identify Patterns: User-agent strings can help identify patterns that are typical of bots or legitimate
users.
Block Malicious Bots: Many bots use known user-agent strings, and identifying these can help block
malicious requests.
Anomalies Detection: Anomalous user-agent strings can indicate spoofing attempts or malicious
activity.
Other options provide useful information but may not be as effective for initial determination of the
nature of the request:
B. Byte length of the request: This can indicate anomalies but does not provide detailed information
about the client.
C. Web application headers: While useful, they may not provide enough distinction between
legitimate and bot traffic.
D. HTML encoding field: This is not typically used for identifying the nature of the request.
Reference: CompTIA SecurityX Study Guide
"User-Agent Analysis for Security," OWASP
NIST Special Publication 800-94, "Guide to Intrusion Detection and Prevention Systems (IDPS)"
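To make the user-agent approach concrete, the following is a small sketch that flags requests whose User-Agent header matches common automation patterns or is missing entirely; the pattern list is illustrative only, not a complete bot signature set.

```python
import re

# Illustrative patterns only; production signature lists are far larger and curated.
BOT_PATTERNS = re.compile(r"(bot|crawler|spider|curl|python-requests)", re.IGNORECASE)

def classify_user_agent(user_agent: str | None) -> str:
    """Roughly classify a request by its User-Agent header."""
    if not user_agent:
        return "suspicious: missing user-agent"
    if BOT_PATTERNS.search(user_agent):
        return "bot-like"
    return "likely browser"

# Example usage with sample header values
print(classify_user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))
print(classify_user_agent("python-requests/2.31.0"))
print(classify_user_agent(None))
```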
19.A Chief Information Security Officer is concerned about the operational impact of ransomware. In
the event of a ransomware attack, the business requires the integrity of the data to remain intact and
an RPO of less than one hour.
Which of the following storage strategies best satisfies the business requirements?
A. Full disk encryption
B. Remote journaling
C. Immutable
D. RAID 10
Answer: B
Explanation:
Remote journaling continuously sends log updates to a remote system, ensuring near-real-time
backup and an RPO (Recovery Point Objective) under one hour.
Key concepts:
RPO under one hour means minimal data loss.
Remote journaling provides rapid recovery by keeping near-live backups.
Other options:
A (Full disk encryption) protects against unauthorized access but does not aid recovery.
C (Immutable storage) prevents modification but does not ensure real-time backups.
D (RAID 10) improves redundancy but does not help against ransomware.
Reference: CASP+ CAS-005 – Business Continuity and Disaster Recovery Planning
20.A systems administrator works with engineers to process and address vulnerabilities as a result of
continuous scanning activities. The primary challenge faced by the administrator is differentiating
between valid and invalid findings.
Which of the following would the systems administrator most likely verify is properly configured?
A. Report retention time
B. Scanning credentials
C. Exploit definitions
D. Testing cadence
Answer: B
Explanation:
When differentiating between valid and invalid findings from vulnerability scans, the systems
administrator should verify that the scanning credentials are properly configured. Valid credentials
ensure that the scanner can authenticate and access the systems being evaluated, providing
accurate and comprehensive results. Without proper credentials, scans may miss vulnerabilities or
generate false positives, making it difficult to prioritize and address the findings effectively.
Reference: CompTIA SecurityX Study Guide: Highlights the importance of using valid credentials for
accurate vulnerability scanning.
"Vulnerability Management" by Park Foreman: Discusses the role of scanning credentials in obtaining
accurate scan results and minimizing false positives.
"The Art of Network Security Monitoring" by Richard Bejtlich: Covers best practices for configuring
and using vulnerability scanning tools, including the need for valid credentials.
21.A company wants to modify its process to comply with privacy requirements after an incident
involving PII data in a development environment. In order to perform functionality tests, the QA team
still needs to use valid data in the specified format.
Which of the following best addresses the risk without impacting the development life cycle?
A. Encrypting the data before moving into the QA environment
B. Truncating the data to make it not personally identifiable
C. Using a large language model to generate synthetic data
D. Utilizing tokenization for sensitive fields
Answer: D
Explanation:
Tokenization replaces sensitive data (e.g., PII) with non-sensitive placeholders while maintaining
format consistency, ensuring compliance without disrupting testing. This method is commonly used
for PCI-DSS and GDPR compliance while preserving data structure for functional tests.
Encryption (A) secures data but does not remove sensitivity or solve testing concerns. Truncation (B)
removes portions of data but may impact testing if format requirements are strict. Synthetic data (C)
can be useful but may not always match real-world scenarios perfectly for testing purposes.
Reference: CompTIA SecurityX (CAS-005) Exam Objectives - Domain 1.0 (Governance, Risk, and
Compliance), Section on Privacy Risk Considerations & Data Protection
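The sketch below illustrates format-preserving tokenization of a PII field, assuming a simple in-memory token vault; a real deployment would use a dedicated tokenization service, but the principle (same format, no real PII in QA) is the same.

```python
import secrets

# Token-to-original mapping; in practice this vault lives in a secured service,
# never in the QA environment.
token_vault: dict[str, str] = {}

def tokenize_ssn(ssn: str) -> str:
    """Replace an SSN with a random token that preserves the NNN-NN-NNNN format."""
    digits = "".join(str(secrets.randbelow(10)) for _ in range(9))
    token = f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"
    token_vault[token] = ssn
    return token

# Example usage: the QA data keeps a valid-looking value, the real PII stays in the vault.
original = "123-45-6789"
tokenized = tokenize_ssn(original)
print(tokenized)  # e.g. 804-17-3925 - same format, no real PII in the QA data set
```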
22.As part of a security audit in the software development life cycle, a product manager must
demonstrate and provide evidence of a complete representation of the code and modules used within
the production-deployed application prior to the build.
Which of the following best provides the required evidence?
A. Software composition analysis
B. Runtime application inspection
C. Static application security testing
D. Interactive application security testing
Answer: A
Explanation:
Software Composition Analysis (SCA) is the best method for identifying all components,
dependencies, and open-source libraries used in an application. It ensures that organizations track
and manage vulnerabilities in third-party code before deployment.
SCA tools generate a Software Bill of Materials (SBOM), which provides a full representation of the
code and modules used in the application.
Other options:
Static Application Security Testing (SAST) (C) checks for vulnerabilities but does not map
dependencies.
Interactive Application Security Testing (IAST) (D) works at runtime, not before deployment.
Runtime Application Self-Protection (RASP) (B) works while the application is running.
Reference: CASP+ CAS-005 Official Study Guide – Chapter on Secure Software Development
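As a small illustration of how SCA output can serve as evidence, the sketch below reads a CycloneDX-style JSON SBOM and lists the components it declares; the file name is hypothetical and the fields shown are a minimal subset of the format.

```python
import json

def list_sbom_components(path: str) -> list[str]:
    """Return 'name version' strings for each component in a CycloneDX JSON SBOM."""
    with open(path, encoding="utf-8") as f:
        sbom = json.load(f)
    return [
        f"{component.get('name', 'unknown')} {component.get('version', '')}".strip()
        for component in sbom.get("components", [])
    ]

# Example usage with a hypothetical SBOM file produced by an SCA tool
for entry in list_sbom_components("app-sbom.cdx.json"):
    print(entry)
```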
24.A security analyst wants to use lessons learned from a poor incident response to reduce dwell time
in the future. The analyst is using the following data points:
Which of the following would the analyst most likely recommend?
A. Adjusting the SIEM to alert on attempts to visit phishing sites
B. Allowing TRACE method traffic to enable better log correlation
C. Enabling alerting on all suspicious administrator behavior
D. Utilizing allow lists on the WAF for all users using GET methods
Answer: C
Explanation:
In the context of improving incident response and reducing dwell time, the security analyst needs to
focus on proactive measures that can quickly detect and alert on potential security breaches.
Here’s a detailed analysis of the options provided:
A. Adjusting the SIEM to alert on attempts to visit phishing sites: While this is a useful measure to
prevent phishing attacks, it primarily addresses external threats and doesn’t directly impact dwell
time reduction, which focuses on the time a threat remains undetected within a network.
B. Allowing TRACE method traffic to enable better log correlation: The TRACE method in HTTP is
used for debugging purposes, but enabling it can introduce security vulnerabilities. It’s not typically
recommended for enhancing security monitoring or incident response.
C. Enabling alerting on all suspicious administrator behavior: This option directly targets the potential
misuse of administrator accounts, which are often high-value targets for attackers. By monitoring and
alerting on suspicious activities from admin accounts, the organization can quickly identify and
respond to potential breaches, thereby reducing dwell time significantly. Suspicious behavior could
include unusual login times, access to sensitive data not usually accessed by the admin, or any
deviation from normal behavior patterns. This proactive monitoring is crucial for quick detection and
response, aligning well with best practices in incident response.
D. Utilizing allow lists on the WAF for all users using GET methods: This measure is aimed at
restricting access based on allowed lists, which can be effective in preventing unauthorized access
but doesn’t specifically address the need for quick detection and response to internal threats.
Reference: CompTIA SecurityX Study Guide: Emphasizes the importance of monitoring and alerting
on admin activities as part of a robust incident response plan.
NIST Special Publication 800-61 Revision 2, "Computer Security Incident Handling Guide": Highlights
best practices for incident response, including the importance of detecting and responding to
suspicious activities quickly.
"Incident Response & Computer Forensics" by Jason T. Luttgens, Matthew Pepe, and Kevin Mandia:
Discusses techniques for reducing dwell time through effective monitoring and alerting mechanisms,
particularly focusing on privileged account activities.
By focusing on enabling alerting for suspicious administrator behavior, the security analyst addresses
a critical area that can help reduce the time a threat goes undetected, thereby improving the overall
security posture of the organization.
25.Operational technology often relies upon aging command, control, and telemetry subsystems that
were created with the design assumption of:
A. operating in an isolated/disconnected system.
B. communicating over distributed environments
C. untrustworthy users and systems being present.
D. an available Ethernet/IP network stack for flexibility.
E. anticipated eavesdropping from malicious actors.
Answer: A
Explanation:
Aging operational technology (OT) command, control, and telemetry subsystems were typically
designed under the assumption that they would run in an isolated or disconnected (air-gapped)
environment. Because designers assumed no connectivity to untrusted networks, these subsystems
generally lack built-in security controls such as authentication, encryption, and input validation.
Why Option A is Correct:
Isolation assumption: Legacy OT protocols and devices were built for closed, trusted networks, not for
exposure to enterprise IT networks or the internet.
Missing security controls: Because the environment was presumed disconnected, confidentiality and
strong authentication were not core design goals, which is why compensating controls such as
network segmentation are required today.
Why Other Options Are Incorrect:
B (Distributed environments): Legacy OT was designed for local, closed-loop operation rather than
distributed connectivity.
C (Untrustworthy users and systems): Designers assumed all users and devices on the isolated
network were trusted.
D (Ethernet/IP network stack): Many aging subsystems predate widespread Ethernet/IP adoption and
relied on serial or proprietary fieldbus protocols.
E (Anticipated eavesdropping): Eavesdropping was not treated as a realistic threat in an isolated
system, so traffic was typically sent unencrypted.
Reference: CompTIA SecurityX Study Guide
NIST Special Publication 800-82, "Guide to Industrial Control Systems (ICS) Security"
Get the full version of the CAS-005 exam dumps.