An administrator has connected 100 users to multiple Files shares to perform read and write activity. The administrator needs to view audit trails in File Analytics of these 100 users. From which two Audit Trail options can the administrator choose to satisfy this task? (Choose two.)
Share Name
Client IP
Directory
Folders
Nutanix File Analytics, part of Nutanix Unified Storage (NUS), provides audit trails to track user activities within Nutanix Files shares. Audit trails include details such as who accessed a file, from where, and what actions were performed. The administrator needs to view the audit trails for 100 users, which requires filtering or grouping the audit data by relevant criteria.
Analysis of Options:
Option A (Share Name): Correct. Audit trails in File Analytics can be filtered by Share Name, allowing the administrator to view activities specific to a particular share. Since the 100 users are connected to multiple shares, filtering by Share Name helps narrow down the audit trails to the shares being accessed by these users, making it easier to analyze their activities.
Option B (Client IP): Correct. File Analytics audit trails include the Client IP address from which a user accesses a share (as noted in Question 14). Filtering by Client IP allows the administrator to track the activities of users based on their IP addresses, which can be useful if the 100 users are accessing shares from known IPs, helping to identify their read/write activities.
Option C (Directory): Incorrect. While audit trails track file and directory-level operations, “Directory” is not a standard filter option in File Analytics audit trails. The audit trails can show activities within directories, but the primary filtering options are more granular (e.g., by file) or higher-level (e.g., by share).
Option D (Folders): Incorrect. Similar to “Directory,” “Folders” is not a standard filter option in File Analytics audit trails. While folder-level activities are logged, the audit trails are typically filtered by Share Name, Client IP, or specific files, not by a generic “Folders” category.
Selected Options:
A: Filtering by Share Name allows the administrator to focus on the specific shares accessed by the 100 users.
B: Filtering by Client IP enables tracking user activities based on their IP addresses, which is useful for identifying the 100 users’ actions across multiple shares.
Exact Extract from Nutanix Documentation:
From the Nutanix File Analytics Administration Guide (available on the Nutanix Portal):
“File Analytics Audit Trails allow administrators to filter user activities by various criteria, including Share Name and Client IP. Filtering by Share Name enables viewing activities on a specific share, while filtering by Client IP helps track user actions based on their source IP address.”
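The two filters discussed above (Share Name and Client IP) can be sketched as simple predicates over audit records. This is a hypothetical illustration only: the record layout, field names, and functions below are invented for this sketch and do not reflect the actual File Analytics data model or API.

```python
# Hypothetical audit records; the field names are invented for illustration.
audit_events = [
    {"user": "alice", "share": "home", "client_ip": "10.0.0.5", "op": "read"},
    {"user": "bob", "share": "projects", "client_ip": "10.0.0.6", "op": "write"},
    {"user": "alice", "share": "projects", "client_ip": "10.0.0.5", "op": "write"},
]

def filter_by_share(events, share_name):
    """Keep only the events that occurred on the given share."""
    return [e for e in events if e["share"] == share_name]

def filter_by_client_ip(events, ip):
    """Keep only the events originating from the given client IP."""
    return [e for e in events if e["client_ip"] == ip]

# Narrowing by share, then by source IP, mirrors the two Audit Trail options.
print(len(filter_by_share(audit_events, "projects")))      # 2
print(len(filter_by_client_ip(audit_events, "10.0.0.5")))  # 2
```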
Which error logs should the administrator be reviewing to determine why the relay service is down?
Solver.log
Arithmos.ERROR
Cerebro.ERROR
Tcpkill.log
The error log that the administrator should review to determine why the relay service is down is Cerebro.ERROR. Cerebro is a service that runs on each FSVM and provides relay functionality for Data Lens. The relay service is responsible for collecting metadata and statistics from the FSVMs and sending them to Data Lens via HTTPS. If the Cerebro.ERROR log shows errors or exceptions related to the relay service, it can indicate that the relay service is down or not functioning properly. References: Nutanix Files Administration Guide, page 23; Nutanix Data Lens User Guide
An administrator is required to place all iSCSI traffic on an isolated network. How can the administrator meet this requirement?
Create a new network interface on the CVMs via ncli.
Configure the Data Services IP on an isolated network.
Configure network segmentation for Volumes.
Create a Volumes network in Prism Central.
Nutanix Volumes, part of Nutanix Unified Storage (NUS), provides block storage services via iSCSI to external hosts, such as physical servers. The iSCSI traffic is managed by the Controller VMs (CVMs) in the Nutanix cluster, and a virtual IP address called the Data Services IP is used for iSCSI communication. To isolate iSCSI traffic on a dedicated network, the administrator must ensure that this traffic is routed over the isolated network.
Analysis of Options:
Option A (Create a new network interface on the CVMs via ncli): Incorrect. While it’s possible to create additional network interfaces on CVMs using the ncli command-line tool, this is not the recommended or standard method for isolating iSCSI traffic. The Data Services IP is the primary mechanism for managing iSCSI traffic, and it can be assigned to an isolated network without creating new interfaces on each CVM.
Option B (Configure the Data Services IP on an isolated network): Correct. The Data Services IP (also known as the iSCSI Data Services IP) is a cluster-wide virtual IP used for iSCSI traffic. By configuring the Data Services IP to use an IP address on the isolated network (e.g., a specific VLAN or subnet dedicated to iSCSI), the administrator ensures that all iSCSI traffic is routed over that network, meeting the requirement for isolation. This configuration is done in Prism Element under the cluster’s iSCSI settings.
Option C (Configure network segmentation for Volumes): Incorrect. Network segmentation in Nutanix typically refers to isolating traffic using VLANs or separate subnets, which is indirectly achieved by configuring the Data Services IP (option B). However, “network segmentation for Volumes” is not a specific feature or configuration step in Nutanix; the correct approach is to assign the Data Services IP to the isolated network, which inherently segments the traffic.
Option D (Create a Volumes network in Prism Central): Incorrect. Prism Central is used for centralized management of multiple clusters, but the configuration of iSCSI traffic (e.g., the Data Services IP) is performed at the cluster level in Prism Element, not Prism Central. There is no concept of a “Volumes network” in Prism Central for this purpose.
Why Option B?
The Data Services IP is the key configuration for iSCSI traffic in a Nutanix cluster. By assigning this IP to an isolated network (e.g., a dedicated VLAN or subnet), the administrator ensures that all iSCSI traffic is routed over that network, achieving the required isolation. This is a standard and recommended approach in Nutanix for isolating iSCSI traffic.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
“To isolate iSCSI traffic on a dedicated network, configure the Data Services IP with an IP address on the isolated network. This ensures that all iSCSI traffic between external hosts and the Nutanix cluster is routed over the specified network, providing network isolation as required.”
Before upgrading Files or creating a file server, which component must first be upgraded to a compatible version?
FSM
File Analytics
Prism Central
FSVM
The component that must first be upgraded to a compatible version before upgrading Files or creating a file server is Prism Central. Prism Central is a web-based user interface that allows administrators to manage multiple Nutanix clusters and services, including Files. Prism Central must be upgraded to a version compatible with Files before upgrading an existing file server or creating a new one. Otherwise, the upgrade or creation process may fail or cause unexpected errors. References: Nutanix Files Administration Guide, page 21; Nutanix Files Upgrade Guide
An administrator is tasked with performing an upgrade to the latest Objects version.
What should the administrator do prior to upgrading Objects Manager?
Upgrade Lifecycle Manager
Upgrade MSP
Upgrade Objects service
Upgrade AOS
Before upgrading Objects Manager, the administrator must upgrade AOS to the latest version. AOS is the core operating system that runs on each node in a Nutanix cluster and provides the foundation for Objects Manager and Objects service. Upgrading AOS will ensure compatibility and stability for Objects components. References: Nutanix Objects Administration Guide, Acropolis Operating System Upgrade Guide
An administrator is tasked with deploying a Microsoft Server Failover Cluster for a critical application that uses shared storage.
The failover cluster instance will consist of VMs running on an AHV-hosted cluster and bare metal servers for maximum resiliency.
What should the administrator do to satisfy this requirement?
Create a Bucket with Objects.
Provision a Volume Group with Volume.
Create an SMB Share with Files.
Provision a new Storage Container.
Nutanix Volumes allows administrators to provision a volume group with one or more volumes that can be attached to multiple VMs or physical servers via iSCSI. This enables the creation of a Microsoft Server Failover Cluster that uses shared storage for a critical application.
Microsoft Server Failover Cluster typically uses shared block storage for its quorum disk and application data. Nutanix Volumes provides this via iSCSI by provisioning a Volume Group, which can be accessed by both the AHV-hosted VMs and bare metal servers. This setup ensures maximum resiliency, as the shared storage is accessible to all nodes in the cluster, allowing failover between VMs and bare metal servers as needed.
Exact Extract from Nutanix Documentation:
From the Nutanix Volumes Administration Guide (available on the Nutanix Portal):
“Nutanix Volumes provides block storage via iSCSI, which is ideal for Microsoft Server Failover Clusters requiring shared storage. To deploy an MSFC with VMs and bare metal servers, provision a Volume Group in Nutanix Volumes and expose it via iSCSI to all cluster nodes, ensuring shared access to the storage for high availability and failover.”
An administrator wants to provide security against ransomware attacks in Files. The administrator wants to configure the environment to scan files for ransomware in real time and provide notification in the event of a ransomware attack. Which component should the administrator use to meet this requirement?
Protection Domain
File Analytics
Syslog Server
Files Console
Nutanix Files, part of Nutanix Unified Storage (NUS), can be protected against ransomware attacks using integrated tools. The administrator’s requirement to scan files for ransomware in real time and provide notifications involves a component that can monitor file activity, detect anomalies, and alert administrators.
Analysis of Options:
Option A (Protection Domain): Incorrect. Protection Domains in Nutanix are used for disaster recovery (DR) of VMs and Volume Groups, not for ransomware protection in Nutanix Files. Files uses replication policies (e.g., NearSync) for DR, and Protection Domains are not relevant for real-time ransomware scanning or notifications.
Option B (File Analytics): Correct. Nutanix File Analytics, integrated with Nutanix Files, provides real-time monitoring and anomaly detection for ransomware protection. It scans file activity, uses machine learning to detect unusual patterns (e.g., mass file deletions, encryption), and sends notifications to administrators in the event of a potential ransomware attack. This meets the requirement for real-time scanning and notification (as seen in Question 7, where anomaly alerts were configured in File Analytics).
Option C (Syslog Server): Incorrect. A Syslog Server can receive logs from Nutanix Files (as noted in Question 9), including alerts, but it is not a component that scans files for ransomware in real time. It is a passive logging tool and does not provide active ransomware detection or notification capabilities.
Option D (Files Console): Incorrect. The Files Console is the management interface for Nutanix Files, used for configuring shares, FSVMs, and policies. While it can display alerts, it does not perform real-time ransomware scanning or detection—that functionality is provided by File Analytics.
Why Option B?
File Analytics is specifically designed for data analytics and security in Nutanix Files, including real-time ransomware detection. It monitors file operations, detects anomalies indicative of ransomware (e.g., rapid file modifications, deletions), and sends notifications to administrators, meeting the requirement for real-time scanning and alerts.
Exact Extract from Nutanix Documentation:
From the Nutanix File Analytics Administration Guide (available on the Nutanix Portal):
“Nutanix File Analytics provides real-time protection against ransomware by monitoring file operations and detecting anomalies, such as rapid file modifications or deletions. In the event of a potential ransomware attack, File Analytics sends notifications to administrators, allowing for quick response and mitigation.”
After migrating to Files for a company's user home directories, the administrator started receiving complaints that accessing certain files results in long wait times before the file is even opened or an access denied error message after four minutes. Upon further investigation, the administrator has determined that the files in question are very large audio and video files. Which two actions should the administrator take to mitigate this issue? (Choose two.)
Add the extensions of the affected file types to the ICAP's Exclude File Types field in the ICAP settings for the Files cluster.
Uncheck the "Block access to files if scan cannot be completed (recommended)" option in the ICAP settings for the Files cluster.
Enable the "Scan on Write" option and increase resources for the ICAP server.
Enable the "Scan on Read" option and decrease resources for the ICAP server.
Nutanix Files, part of Nutanix Unified Storage (NUS), is being used for user home directories, and users are experiencing delays or access denied errors when accessing large audio and video files. The issue is related to the integration with an ICAP (Internet Content Adaptation Protocol) server, which Nutanix Files uses to scan files for security (e.g., antivirus, malware detection). The delays and errors suggest that the ICAP server is struggling to scan these large files, causing timeouts or access issues.
Understanding the Issue:
ICAP Integration: Nutanix Files can integrate with an ICAP server to scan files for threats. By default, files are scanned on read and write operations, and if a scan cannot be completed (e.g., due to timeouts), access may be blocked.
Large Audio/Video Files: These files are typically very large (e.g., GBs in size), and scanning them can take significant time, especially if the ICAP server is under-resourced or the network latency is high.
Four-Minute Timeout: The “access denied” error after four minutes suggests a timeout in the ICAP scan process, likely because the ICAP server cannot complete the scan within the default timeout period (often 240 seconds or 4 minutes).
Long Wait Times: The wait times before opening files indicate that the ICAP server is scanning the files on read, causing delays for users.
Analysis of Options:
Option A (Add the extensions of the affected file types to the ICAP's Exclude File Types field in the ICAP settings for the Files cluster): Correct. Nutanix Files allows administrators to exclude certain file types from ICAP scanning by adding their extensions (e.g., .mp4, .wav) to the “Exclude File Types” field in the ICAP settings. Large audio and video files are often safe and do not need to be scanned (e.g., they are less likely to contain malware), and excluding them prevents the ICAP server from attempting to scan them, eliminating delays and timeout errors.
Option B (Uncheck the "Block access to files if scan cannot be completed (recommended)" option in the ICAP settings for the Files cluster): Correct. By default, Nutanix Files blocks access to files if the ICAP scan cannot be completed within the timeout period (e.g., 4 minutes), resulting in the “access denied” error. Unchecking this option allows access to files even if the scan fails or times out, mitigating the access denied issue for large files while still attempting to scan them. This is a recommended mitigation when scans are causing access issues, though it slightly reduces security by allowing access to un-scanned files.
Option C (Enable the "Scan on Write" option and increase resources for the ICAP server): Incorrect. The “Scan on Write” option is already enabled by default in Nutanix Files ICAP settings, as it ensures files are scanned when written to the share. Increasing resources for the ICAP server might help with scanning performance, but it does not directly address the issue of large files causing timeouts on read operations, and it requires additional infrastructure changes that may not be feasible. The issue is primarily with read access delays, not write operations.
Option D (Enable the "Scan on Read" option and decrease resources for the ICAP server): Incorrect. The “Scan on Read” option is also enabled by default in Nutanix Files ICAP settings, and it is the root cause of the delays—scanning large files on read causes long wait times. Decreasing resources for the ICAP server would exacerbate the issue by further slowing down scans, leading to more timeouts and errors.
Selected Actions:
A: Excluding audio and video file extensions from ICAP scanning prevents the server from attempting to scan large files, eliminating delays and timeouts for these file types.
B: Disabling the “Block access” option ensures that users can access files even if the ICAP scan times out, mitigating the “access denied” error after four minutes.
Why These Actions?
Excluding File Types (A): Large audio and video files are often safe and do not need scanning, and excluding them avoids the performance bottleneck caused by the ICAP server, directly addressing the long wait times.
Disabling Block Access (B): The four-minute timeout leading to “access denied” errors is due to the ICAP scan failing to complete. Allowing access despite scan failures ensures users can still open files, though it requires careful consideration of security risks (e.g., ensuring excluded file types are safe).
Combining these actions provides a comprehensive solution: excluding file types prevents unnecessary scans, and disabling the block ensures access during edge cases where scans might still occur.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“To mitigate performance issues with ICAP scanning for large files (e.g., audio, video), add the extensions of affected file types to the ‘Exclude File Types’ field in the ICAP settings for the Files cluster. Additionally, to prevent ‘access denied’ errors due to scan timeouts, uncheck the ‘Block access to files if scan cannot be completed (recommended)’ option, allowing access to files even if the scan fails.”
An administrator needs to protect a Files cluster with unique policies for different shares.
How should the administrator meet this requirement?
Create a protection domain in the Data Protection view in Prism Element.
Configure data protection policies in the File Server view in Prism Element.
Create a protection domain in the Data Protection view in Prism Central.
Configure data protection policies in the Files view in Prism Central.
The administrator can meet this requirement by configuring data protection policies in the Files view in Prism Central. Data protection policies are policies that define how file data is protected by taking snapshots, replicating them to another site, or tiering them to cloud storage. Data protection policies can be configured for each share or export in a file server in the Files view in Prism Central. The administrator can create different data protection policies for different shares or exports based on their protection needs and requirements. References: Nutanix Files Administration Guide, page 79; Nutanix Files Solution Guide, page 9
An administrator has been tasked with creating a distributed share on a single-node cluster, but has been unable to successfully complete the task.
Why is this task failing?
File server version should be greater than 3.8.0
AOS version should be greater than 6.0.
Number of distributed shares limit reached.
Distributed shares require multiple nodes.
A distributed share is a type of SMB share or NFS export that distributes the hosting of top-level directories across multiple FSVMs, which improves load balancing and performance. A distributed share cannot be created on a single-node cluster, because only one FSVM is available. A distributed share requires a cluster with at least three nodes so that the top-level directories can be distributed across three or more FSVMs. Therefore, the task of creating a distributed share on a single-node cluster will fail. References: Nutanix Files Administration Guide, page 33; Nutanix Files Solution Guide, page 8
A distributed share in Nutanix Files, part of Nutanix Unified Storage (NUS), is a share that spans multiple File Server Virtual Machines (FSVMs) to provide scalability and high availability. Distributed shares are designed to handle large-scale workloads by distributing file operations across FSVMs.
Analysis of Options:
Option A (File server version should be greater than 3.8.0): Incorrect. While Nutanix Files has version-specific features, distributed shares have been supported since earlier versions (e.g., Files 3.5). The failure to create a distributed share on a single-node cluster is not due to the Files version.
Option B (AOS version should be greater than 6.0): Incorrect. Nutanix AOS (Acropolis Operating System) version 6.0 or later is not a specific requirement for distributed shares. Distributed shares have been supported in earlier AOS versions (e.g., AOS 5.15 and later with compatible Files versions). The issue is related to the cluster’s node count, not the AOS version.
Option C (Number of distributed shares limit reached): Incorrect. The question does not indicate that the administrator has reached a limit on the number of distributed shares. The failure is due to the single-node cluster limitation, not a share count limit.
Option D (Distributed shares require multiple nodes): Correct. Distributed shares in Nutanix Files require a minimum of three FSVMs for high availability and load balancing, which in turn requires a cluster with at least three nodes. A single-node cluster cannot support a distributed share because it lacks the necessary nodes to host multiple FSVMs, which are required for the distributed architecture.
Why Option D?
A single-node cluster cannot support a distributed share because Nutanix Files requires at least three FSVMs for a distributed share, and each FSVM typically runs on a separate node for high availability. A single-node cluster can support a non-distributed (standard) share, but not a distributed share, which is designed for scalability across multiple nodes.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Distributed shares in Nutanix Files require a minimum of three FSVMs to ensure scalability and high availability. This requires a cluster with at least three nodes, as each FSVM is typically hosted on a separate node. Single-node clusters do not support distributed shares due to this requirement.”
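The precondition described above reduces to a simple node-count check: a distributed share needs at least three FSVMs, and therefore at least a three-node cluster. The function below is illustrative only and is not a Nutanix API.

```python
# Minimum cluster size for a distributed share, per the requirement above.
MIN_NODES_FOR_DISTRIBUTED_SHARE = 3

def can_create_distributed_share(node_count):
    """A distributed share needs >= 3 nodes to host >= 3 FSVMs."""
    return node_count >= MIN_NODES_FOR_DISTRIBUTED_SHARE

print(can_create_distributed_share(1))  # False: the single-node task in this question fails
print(can_create_distributed_share(3))  # True
```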
A Files administrator needs to generate a report listing the files matching those in the exhibit.
What is the most efficient way to complete this task?
Use Report Builder in File Analytics.
Create a custom report in Prism Central.
Use Report Builder in Files Console.
Create a custom report in Files Console.
The most efficient way to generate a report listing the files matching those in the exhibit is to use Report Builder in File Analytics. Report Builder is a feature that allows administrators to create custom reports based on various filters and criteria, such as file name, file type, file size, file owner, file age, file access time, file modification time, file permission change time, and so on. Report Builder can also export the reports in CSV format for further analysis or sharing. References: Nutanix Files Administration Guide, page 97; Nutanix File Analytics User Guide
What are the limitations for enabling Self-Service Restore (SSR) in a File Server? (Choose two.)
SSR is not supported at the root of distributed shares or exports.
SSR for SMB does not restore streams or attributes in directories.
SSR does not support NFS shares.
SSR does not support SMB shares.
Self-Service Restore (SSR) in Nutanix Files, part of Nutanix Unified Storage (NUS), allows users to recover previous versions of files without administrator intervention. SSR is primarily designed for SMB shares, and it has specific limitations that restrict its functionality in certain scenarios.
Analysis of Options:
Option A (SSR is not supported at the root of distributed shares or exports): Correct. According to Nutanix documentation, SSR cannot be enabled at the root level of distributed shares or exports. Distributed shares in Nutanix Files are those that span multiple FSVMs for scalability, and the root of such shares does not support SSR due to the complexity of managing snapshots at that level.
Option B (SSR for SMB does not restore streams or attributes in directories): Incorrect. While SSR has limitations, this specific restriction is not documented in Nutanix Files documentation. SSR for SMB does restore file data and metadata, including attributes, though it may not support all advanced features like alternate data streams in some cases. However, this is not a primary limitation highlighted in the official documentation.
Option C (SSR does not support NFS shares): Correct. SSR is designed for SMB shares and relies on Windows Shadow Copy (VSS) integration to provide Previous Versions functionality. NFS shares do not support SSR, as NFS lacks a native equivalent to VSS for user-driven restores.
Option D (SSR does not support SMB shares): Incorrect. This is the opposite of the truth—SSR is specifically designed for SMB shares and is not supported for NFS shares, as noted in option C.
Selected Limitations:
A: SSR’s inability to function at the root of distributed shares or exports is a documented limitation, as it affects how snapshots are managed in distributed environments.
C: SSR’s lack of support for NFS shares is a fundamental limitation, as SSR relies on SMB-specific features.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Self-Service Restore (SSR) is supported only for SMB shares and is not available for NFS shares or exports. Additionally, SSR cannot be enabled at the root of distributed shares or exports due to limitations in snapshot management at the root level.”
What are two ways to manage Objects? (Choose two.)
PC
CLI
API
SSH
There are two ways to manage Objects: PC (Prism Central) and API (Application Programming Interface). PC is a web-based user interface that allows administrators to create, configure, monitor, and manage Objects clusters, buckets, users, and policies. API is a set of S3-compatible REST APIs that allows applications and users to interact with Objects programmatically. API can be used to perform operations such as creating buckets, uploading objects, listing objects, downloading objects, deleting objects, and so on. References: Nutanix Objects User Guide; Nutanix Objects API Reference Guide
What is the network requirement for a File Analytics deployment?
Must use the CVM network
Must use the Backplane network
Must use the Storage-side network
Must use the Client-side network
Nutanix File Analytics is a feature that provides insights into the usage and activity of file data stored on Nutanix Files. File Analytics consists of a File Analytics VM (FAVM) that runs on a Nutanix cluster and communicates with the File Server VMs (FSVMs) that host the file shares. The FAVM collects metadata and statistics from the FSVMs and displays them in a graphical user interface (GUI). The FAVM must be deployed on the same network as the FSVMs, which is the Client-side network. This network is used for communication between File Analytics and FSVMs, as well as for accessing the File Analytics UI from a web browser. The Client-side network must have DHCP enabled and must be routable from the external hosts that access the file shares and File Analytics UI. References: Nutanix Files Administration Guide, page 93; Nutanix File Analytics Deployment Guide
A company's Marketing department requires the ability to recover files hosted in a Files share. They also require the ability to restore files within a timeframe of 14 days. Which two configurations are required to meet these requirements? (Choose two.)
Change default settings in the Protection Configuration window.
Change the Protection Domain settings to keep at least 14 days of snapshots.
Install Nutanix Guest Tools on clients who need to perform Self-Service Restore.
Enable Self-Service Restore at the share level.
The Marketing department needs to recover files in a Nutanix Files share with a recovery window of 14 days. Nutanix Files, part of Nutanix Unified Storage (NUS), supports file recovery through Self-Service Restore (SSR) for SMB shares, which relies on snapshots to provide previous versions of files.
Analysis of Options:
Option A (Change default settings in the Protection Configuration window): Incorrect. The “Protection Configuration window” is not a specific feature in Nutanix Files. This may be a vague reference to snapshot policies, but the correct terminology is Protection Domain or snapshot schedules, as in option B.
Option B (Change the Protection Domain settings to keep at least 14 days of snapshots): Correct. Nutanix Files uses snapshots to enable file recovery via SSR. These snapshots are managed through Protection Domains (or snapshot schedules in newer terminology) in Prism Element or Prism Central. To ensure files can be restored within a 14-day timeframe, the snapshot retention policy must be configured to retain snapshots for at least 14 days.
Option C (Install Nutanix Guest Tools on clients who need to perform Self-Service Restore): Incorrect. Nutanix Guest Tools (NGT) is used for VM management features (e.g., VSS snapshots for backups, VM mobility), but it is not required for Self-Service Restore in Nutanix Files. SSR is a client-side feature for SMB shares that works natively with Windows clients (via the Previous Versions tab) and does not require NGT.
Option D (Enable Self-Service Restore at the share level): Correct. Self-Service Restore (SSR) must be enabled at the share level in Nutanix Files to allow users to recover files without administrator intervention. This feature enables the Marketing department to restore files directly from their Windows clients using the Previous Versions feature, provided snapshots are available (as configured in option B).
Selected Configurations:
B: Configuring the snapshot retention to at least 14 days ensures that previous versions of files are available for recovery within the required timeframe.
D: Enabling SSR at the share level allows the Marketing department to perform the recovery themselves, meeting the requirement for user-driven file recovery.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Self-Service Restore (SSR) allows users to recover previous versions of files in SMB shares. To enable SSR, it must be activated at the share level in the Files Console. SSR relies on snapshots to provide previous versions; ensure that snapshot schedules (via Protection Domains or snapshot policies) are configured to retain snapshots for the desired recovery period, such as 14 days.”
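The retention math behind option B can be sketched with a date comparison: a snapshot remains available for Self-Service Restore only while it falls inside the retention window. The dates and function below are invented for illustration and are not a Nutanix API.

```python
from datetime import date, timedelta

# The 14-day requirement from the question, modeled as a retention window.
RETENTION_DAYS = 14

def recoverable(snapshot_date, today):
    """A snapshot is still available if its age is within the retention window."""
    return (today - snapshot_date) <= timedelta(days=RETENTION_DAYS)

today = date(2024, 6, 15)
print(recoverable(date(2024, 6, 2), today))   # True  (13 days old)
print(recoverable(date(2024, 5, 30), today))  # False (16 days old)
```

If the retention policy kept fewer than 14 days of snapshots, the second requirement in the question could not be met, which is why option B adjusts the Protection Domain settings.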
Users are complaining about having to reconnect to a share when there are networking issues. Which Files feature should the administrator enable to ensure the sessions will auto-reconnect in such events?
Durable File Handles
Multi-Protocol Shares
Connected Shares
Workload Optimization
Nutanix Files, part of Nutanix Unified Storage (NUS), provides file sharing services via protocols like SMB and NFS. In environments where users access SMB shares, network interruptions can cause sessions to disconnect, requiring users to manually reconnect. Nutanix Files offers a feature to mitigate this issue for SMB shares.
Analysis of Options:
Option A (Durable File Handles): Correct. Durable File Handles is an SMB feature in Nutanix Files that allows client sessions to automatically reconnect after temporary network interruptions. When enabled, it ensures that file handles remain valid during brief disconnects, allowing the client to resume the session without manual intervention.
Option B (Multi-Protocol Shares): Incorrect. Multi-Protocol Shares allow a share to be accessed via both SMB and NFS, but this feature does not address session reconnection during network issues.
Option C (Connected Shares): Incorrect. “Connected Shares” is not a feature in Nutanix Files. This appears to be a made-up term and does not apply to session reconnection.
Option D (Workload Optimization): Incorrect. Workload Optimization in Nutanix Files involves adjusting the number of FSVMs or resources for performance (as noted in Question 13), but it does not address session reconnection for network issues.
Why Durable File Handles?
Durable File Handles is an SMB 2.1+ feature supported by Nutanix Files. It ensures that file handles persist during network disruptions, allowing clients to auto-reconnect without losing their session state, which directly addresses the users’ complaint.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Durable File Handles is an SMB feature that allows clients to automatically reconnect to a share after temporary network interruptions. When enabled on a Nutanix Files share, it ensures that file handles remain valid, preventing users from having to manually reconnect during brief network outages.”
After configuring Smart DR, an administrator is unable to see the policy in the Policies tab. The administrator has confirmed that all FSVMs are able to connect to Prism Central via port 9440 bidirectionally. What is the possible reason for this issue?
The primary and recovery file servers do not have the same version.
Port 7515 should be open for all External/Client IPs of FSVMs on the Source and Target cluster.
The primary and recovery file servers do not have the same protocols.
Port 7515 should be open for all Internal/Storage IPs of FSVMs on the Source and Target cluster.
Smart DR in Nutanix Files, part of Nutanix Unified Storage (NUS), is a disaster recovery (DR) solution that simplifies the setup of replication policies between file servers (e.g., using NearSync, as seen in Question 24). After configuring a Smart DR policy, the administrator expects to see it in the Policies tab in Prism Central, but it is not visible despite confirmed connectivity between FSVMs and Prism Central via port 9440 (used for Prism communication, as noted in Question 21). This indicates a potential mismatch or configuration issue.
Analysis of Options:
Option A (The primary and recovery file servers do not have the same version): Correct. Smart DR requires that the primary and recovery file servers (source and target) run the same version of Nutanix Files to ensure compatibility. If the versions differ (e.g., primary on Files 4.0, recovery on Files 3.8), the Smart DR policy may fail to register properly in Prism Central, resulting in it not appearing in the Policies tab. This is a common issue in mixed-version environments, as Smart DR relies on consistent features and APIs across both file servers.
Option B (Port 7515 should be open for all External/Client IPs of FSVMs on the Source and Target cluster): Incorrect. Port 7515 is not a standard port for Nutanix Files or Smart DR communication. The External/Client network of FSVMs (used for SMB/NFS traffic) communicates with clients, not between FSVMs or with Prism Central for policy management. Smart DR communication between FSVMs and Prism Central uses port 9440 (already confirmed open), and replication traffic between FSVMs typically uses other ports (e.g., 2009, 2020), but not 7515.
Option C (The primary and recovery file servers do not have the same protocols): Incorrect. Nutanix Files shares can support multiple protocols (e.g., SMB, NFS), but Smart DR operates at the file server level, not the protocol level. The replication policy in Smart DR replicates share data regardless of the protocol, and a protocol mismatch would not prevent the policy from appearing in the Policies tab—it might affect client access, but not policy visibility.
Option D (Port 7515 should be open for all Internal/Storage IPs of FSVMs on the Source and Target cluster): Incorrect. Similar to option B, port 7515 is not relevant for Smart DR or Nutanix Files communication. The Internal/Storage network of FSVMs is used for communication with the Nutanix cluster’s storage pool, but Smart DR policy management and replication traffic do not rely on port 7515. The key ports for replication (e.g., 2009, 2020) are typically already open, and the issue here is policy visibility, not replication traffic.
Why Option A?
Smart DR requires compatibility between the primary and recovery file servers, including running the same version of Nutanix Files. A version mismatch can cause the Smart DR policy to fail registration in Prism Central, preventing it from appearing in the Policies tab. Since port 9440 connectivity is already confirmed, the most likely issue is a version mismatch, which is a common cause of such problems in Nutanix Files DR setups.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Smart DR requires that the primary and recovery file servers run the same version of Nutanix Files to ensure compatibility. A version mismatch between the source and target file servers can prevent the Smart DR policy from registering properly in Prism Central, resulting in the policy not appearing in the Policies tab.”
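The troubleshooting in this question starts from confirming TCP reachability on port 9440 between the FSVMs and Prism Central. That kind of check reduces to a simple TCP connect test, sketched below; the IP addresses in the commented example are placeholders, not values from this scenario.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts.
        return False

# Illustrative usage (placeholder IPs for FSVMs and Prism Central):
# for fsvm in ["10.0.0.11", "10.0.0.12", "10.0.0.13"]:
#     print(fsvm, "-> PC 9440:", port_open("10.0.0.5", 9440))
```

A check like this only proves network reachability in one direction; as the question notes, Smart DR also requires the reverse path and, per the answer above, matching Files versions on both file servers.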
What is the result of an administrator applying the lifecycle policy "Expire current objects after # days/months/years" to an object with versioning enabled?
The policy deletes any past versions of the object after the specified time and does not delete any current version of the object.
The policy deletes the current version of the object after the specified time and does not delete any past versions of the object.
The policy does not delete the current version of the object after the specified time and does not delete any past versions of the object.
The policy deletes any past versions of the object after the specified time and deletes any current version of the object.
Nutanix Objects, part of Nutanix Unified Storage (NUS), supports lifecycle policies to manage the retention and expiration of objects in a bucket. When versioning is enabled, a bucket can store multiple versions of an object, with the “current version” being the latest version and “past versions” being older iterations. The lifecycle policy “Expire current objects after # days/months/years” specifically targets the current version of an object.
Analysis of Options:
Option A (The policy deletes any past versions of the object after the specified time and does not delete any current version of the object): Incorrect. The “Expire current objects” policy targets the current version, not past versions. A separate lifecycle rule (e.g., “Expire non-current versions”) would be needed to delete past versions.
Option B (The policy deletes the current version of the object after the specified time and does not delete any past versions of the object): Correct. The “Expire current objects” policy deletes the current version of an object after the specified time period (e.g., # days/months/years). Since versioning is enabled, past versions are not affected by this policy and remain in the bucket unless a separate rule targets them.
Option C (The policy does not delete the current version of the object after the specified time and does not delete any past versions of the object): Incorrect. The policy explicitly states that it expires (deletes) the current version after the specified time, so this option contradicts the policy’s purpose.
Option D (The policy deletes any past versions of the object after the specified time and deletes any current version of the object): Incorrect. The “Expire current objects” policy does not target past versions—it only deletes the current version after the specified time.
Why Option B?
When versioning is enabled, the lifecycle policy “Expire current objects after # days/months/years” applies only to the current version of the object. After the specified time, the current version is deleted, and the most recent past version becomes the new current version (if no new uploads occur). Past versions are not deleted unless a separate lifecycle rule (e.g., for non-current versions) is applied.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“When versioning is enabled on a bucket, the lifecycle policy ‘Expire current objects after # days/months/years’ deletes the current version of an object after the specified time period. Past versions of the object are not affected by this policy and will remain in the bucket unless a separate lifecycle rule is applied to expire non-current versions.”
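Because Nutanix Objects exposes an S3-compatible API, the behavior described above can be expressed as a standard S3 lifecycle configuration. The sketch below is illustrative (rule IDs and day counts are made up); it shows why "Expire current objects" touches only the current version, and why past versions need their own rule.

```python
# Illustrative S3-style lifecycle configuration for a versioned bucket.
# With versioning enabled, "Expiration" affects only the CURRENT version
# (it is replaced by a delete marker and the newest past version becomes
# current); past versions persist unless a separate
# NoncurrentVersionExpiration rule targets them.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-current-after-30-days",  # illustrative name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Expiration": {"Days": 30},  # current version only
        },
        {
            "ID": "expire-noncurrent-after-90-days",  # illustrative name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            # A separate rule is required to remove past versions:
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        },
    ]
}

# With an S3 client such as boto3 (endpoint and credentials for the
# Object Store assumed), this would be applied as:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```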
How can an administrator deploy a new instance of Files?
From LCM in Prism Central.
From LCM in Prism Element.
From the Storage view in Prism Element.
From the Files Console view in Prism Central.
The Files Console view in Prism Central is the primary interface for deploying and managing Nutanix Files clusters. To deploy a new instance, the administrator opens the Files Console, creates a new File Server, and supplies the required settings: file server name, the number of FSVMs, network configuration (Client and Storage networks), and storage allocation. This is the standard and supported method for Files deployment, providing a centralized interface for managing Files instances. References: Nutanix Files Administration Guide
Exact Extract from Nutanix Documentation:
From the Nutanix Files Deployment Guide (available on the Nutanix Portal):
“To deploy a new instance of Nutanix Files, use the Files Console view in Prism Central. Navigate to the Files Console, select the option to create a new File Server, and configure the settings, including the number of FSVMs, network configuration, and storage allocation.”
An administrator needs to add a signature to the ransomware block list. How should the administrator complete this task?
Open a support ticket to have the new signature added. Nutanix support will provide an updated Block List file.
Add the file signature to the Blocked Files Type in the Files Console.
Search the Block List for the file signature to be added, click Add to Block List when the signature is not found in File Analytics.
Download the Block List CSV file, add the new signature, then upload the CSV.
Nutanix Files, part of Nutanix Unified Storage (NUS), can protect against ransomware using integrated tools like File Analytics and Data Lens, or through integration with third-party solutions. In Question 56, we established that a third-party solution is best for signature-based ransomware prevention with a large list of malicious file signatures (300+). The administrator now needs to add a new signature to the ransomware block list, which refers to the list of malicious file signatures used for blocking.
Analysis of Options:
Option A (Open a support ticket to have the new signature added. Nutanix support will provide an updated Block List file): Correct. Nutanix Files does not natively manage a signature-based ransomware block list within its own tools (e.g., File Analytics, Data Lens), as these focus on behavioral detection (as noted in Question 56). For signature-based blocking, Nutanix integrates with third-party solutions, and the block list (signature database) is typically managed by Nutanix or the third-party provider. To add a new signature, the administrator must open a support ticket with Nutanix, who will coordinate with the third-party provider (if applicable) to update the Block List file and provide it to the customer.
Option B (Add the file signature to the Blocked Files Type in the Files Console): Incorrect. The “Blocked Files Type” in the Files Console allows administrators to blacklist specific file extensions (e.g., .exe, .bat) to prevent them from being stored on shares. This is not a ransomware block list based on signatures—it’s a simple extension-based blacklist, and file signatures (e.g., hashes or patterns used for ransomware detection) cannot be added this way.
Option C (Search the Block List for the file signature to be added, click Add to Block List when the signature is not found in File Analytics): Incorrect. File Analytics provides ransomware detection through behavioral analysis (e.g., anomaly detection, as in Question 7), not signature-based blocking. There is no “Block List” in File Analytics for managing ransomware signatures, and it does not have an “Add to Block List” option for signatures.
Option D (Download the Block List CSV file, add the new signature, then upload the CSV): Incorrect. Nutanix Files does not provide a user-editable Block List CSV file for ransomware signatures. The block list for signature-based blocking is managed by Nutanix or a third-party integration, and updates are handled through support (option A), not by manually editing a CSV file.
Why Option A?
Signature-based ransomware prevention in Nutanix Files relies on third-party integrations, as established in Question 56. The block list of malicious file signatures is not user-editable within Nutanix tools like the Files Console or File Analytics. To add a new signature, the administrator must open a support ticket with Nutanix, who will provide an updated Block List file, ensuring the new signature is properly integrated with the third-party solution.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“For signature-based ransomware prevention, Nutanix Files integrates with third-party solutions that maintain a block list of malicious file signatures. To add a new signature to the block list, open a support ticket with Nutanix. Support will coordinate with the third-party provider (if applicable) and provide an updated Block List file to include the new signature.”
The administrator needs to review the following graphs, as displayed in the exhibit.
* Storage Used
* Open Connections
* Number of Files
* Top Shares by Current Capacity
* Top Shares by Current Connections
Where should the administrator complete this action?
Files Console Share View
Files Console Data Management View
Files Console Monitoring View
Files Console Dashboard View
The Files Console Dashboard View provides an overview of the Files cluster performance and usage, including the following graphs:
Storage Used: Shows the total storage used by the Files cluster, including data, metadata, and snapshots.
Open Connections: Shows the number of active SMB and NFS connections to the Files cluster.
Number of Files: Shows the number of files stored in the Files cluster, excluding snapshots.
Top Shares by Current Capacity: Shows the top five shares by current capacity usage in the Files cluster.
Top Shares by Current Connections: Shows the top five shares by current connection count in the Files cluster. References: Nutanix Files Administration Guide
Workload optimization on Files is configured on which entity?
Volume
Share
Container
File Server
Workload optimization in Nutanix Files, part of Nutanix Unified Storage (NUS), involves tuning the Files deployment to handle specific workloads efficiently. This was previously discussed in Question 13, where workload optimization was based on FSVM quantity. The question now asks which entity workload optimization is configured on.
Analysis of Options:
Option A (Volume): Incorrect. Volumes in Nutanix refer to block storage provided by Nutanix Volumes, not Nutanix Files. Workload optimization for Files does not involve Volumes, which are a separate entity for iSCSI-based storage.
Option B (Share): Incorrect. Shares in Nutanix Files are the individual file shares (e.g., SMB, NFS) accessed by clients. While shares can be tuned (e.g., quotas, permissions), workload optimization in Files is not configured at the share level—it applies to the broader file server infrastructure.
Option C (Container): Incorrect. Containers in Nutanix are logical storage pools managed by AOS, used to store data for VMs, Files, and other services. While Files data resides in a container, workload optimization is not configured at the container level—it is specific to the Files deployment.
Option D (File Server): Correct. Workload optimization in Nutanix Files is configured at the File Server level, which consists of multiple FSVMs (as established in Question 13). The File Server is the entity that manages all FSVMs, shares, and resources, and optimization tasks (e.g., scaling FSVMs, adjusting resources) are applied at this level to handle workloads efficiently.
Why Option D?
Workload optimization in Nutanix Files involves adjusting resources and configurations at the File Server level, such as scaling the number of FSVMs (as in Question 13) or tuning memory and CPU for the File Server. The File Server encompasses all FSVMs and shares, making it the entity where optimization is configured to ensure the entire deployment can handle the workload effectively.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Workload optimization in Nutanix Files is configured at the File Server level. This involves adjusting the number of FSVMs, allocating resources (e.g., CPU, memory), and tuning configurations to optimize the File Server for specific workloads.”
While creating a replication rule for a bucket, an administrator finds that the Object Store drop-down option under the Destination section shows an empty list. Which two conditions explain possible causes for this issue? (Choose two.)
The deployment of the Object Store is not in a running state.
The Remote site has not been configured in the Protection Group.
The deployment of the Object Store is not in a Complete state.
The logged-in user does not have permissions to view the Object Store.
Nutanix Objects, part of Nutanix Unified Storage (NUS), supports replication rules to replicate bucket data to a destination Object Store for disaster recovery or data redundancy. When creating a replication rule, the administrator selects a destination Object Store from a drop-down list. If this list is empty, it indicates that the system cannot display any available Object Stores, which can be due to several reasons.
Analysis of Options:
Option A (The deployment of the Object Store is not in a running state): Correct. For an Object Store to appear in the drop-down list as a replication destination, it must be in a running state. If the destination Object Store is not running (e.g., due to a failure, maintenance, or incomplete deployment), it will not be listed as an available target for replication.
Option B (The Remote site has not been configured in the Protection Group): Incorrect. Nutanix Objects replication does not use Protection Groups, which are a concept associated with Nutanix Files or VMs in Prism Central for disaster recovery. Objects replication is configured directly between Object Stores, typically requiring a remote site configuration, but this is not tied to Protection Groups. The issue of an empty drop-down list is more directly related to the Object Store’s state or permissions.
Option C (The deployment of the Object Store is not in a Complete state): Incorrect. While an incomplete deployment might prevent an Object Store from being fully operational, Nutanix documentation typically uses “running state” to describe the operational status of an Object Store (as in option A). “Complete state” is not a standard term in Nutanix Objects documentation for this context, making this option less accurate.
Option D (The logged-in user does not have permissions to view the Object Store): Correct. Nutanix Objects uses role-based access control (RBAC). If the logged-in user lacks the necessary permissions to view or manage the destination Object Store, it will not appear in the drop-down list. For example, the user may need “Object Store Admin” privileges to see and select Object Stores for replication.
Selected Conditions:
A: An Object Store not in a running state (e.g., stopped, failed, or under maintenance) will not appear as a destination for replication.
D: If the user lacks permissions to view the Object Store, it will not be visible in the drop-down list, even if the Object Store is running.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“When configuring a replication rule, the destination Object Store must be in a running state to appear in the drop-down list. Additionally, the user configuring the replication rule must have sufficient permissions (e.g., Object Store Admin role) to view and manage the destination Object Store. If the Object Store is not running or the user lacks permissions, the drop-down list will appear empty.”
An administrator sees empty lists or an error message in the Cluster and Subnets drop-downs, indicating that no Prism Element clusters or subnets are available for deployment. Additionally, no Prism Element clusters are listed when adding multi-cluster support to the Object Store. What would cause the Prism Element clusters or subnets to not appear in the user interface?
The logged-in user does not have access to any Prism Central.
The logged-in user does not have access to any subnets on the allowed Prism Central.
The administrator has just created an access policy granting user access to Prism Element.
The administrator has just created an access policy denying user access to a subnet in Prism Element.
Nutanix Objects, part of Nutanix Unified Storage (NUS), is deployed and managed through Prism Central (PC), which provides a centralized interface for managing multiple Prism Element (PE) clusters. When deploying Objects or adding multi-cluster support to an Object Store, the administrator selects a PE cluster and associated subnets from drop-down lists in the Prism Central UI. If these drop-down lists are empty or show an error, it indicates an issue with visibility or access to the clusters or subnets.
Analysis of Options:
Option A (The logged-in user does not have access to any Prism Central): Correct. Prism Central is required to manage Nutanix Objects deployments and multi-cluster configurations. If the logged-in user does not have access to any Prism Central instance (e.g., due to RBAC restrictions or no PC being deployed), they cannot see any PE clusters or subnets in the UI, as Prism Central is the interface that aggregates this information. This would result in empty drop-down lists for clusters and subnets, as well as during multi-cluster addition for the Object Store.
Option B (The logged-in user does not have access to any subnets on the allowed Prism Central): Incorrect. While subnet access restrictions could prevent subnets from appearing in the Subnets drop-down, this does not explain why the Cluster drop-down is empty or why no clusters are listed during multi-cluster addition. The issue is broader—likely related to Prism Central access itself—rather than subnet-specific permissions.
Option C (The administrator has just created an access policy granting user access to Prism Element): Incorrect. Granting access to Prism Element directly does not affect visibility in Prism Central’s UI. Objects deployment and multi-cluster management are performed through Prism Central, not Prism Element. Even if the user has PE access, they need PC access to see clusters and subnets in the Objects deployment workflow.
Option D (The administrator has just created an access policy denying user access to a subnet in Prism Element): Incorrect. Denying access to a subnet in Prism Element might affect subnet visibility in the Subnets drop-down, but it does not explain the empty Cluster drop-down or the inability to see clusters during multi-cluster addition. Subnet access policies are secondary to the broader issue of Prism Central access.
Why Option A?
The core issue is that Prism Central is required to display PE clusters and subnets in the UI for Objects deployment and multi-cluster management. If the logged-in user does not have access to any Prism Central instance (e.g., they are not assigned the necessary role, such as Prism Central Admin, or no PC is registered), the UI cannot display any clusters or subnets, resulting in empty drop-down lists. This also explains why no clusters are listed during multi-cluster addition for the Object Store, as Prism Central is the central management point for such operations.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Deployment Guide (available on the Nutanix Portal):
“Nutanix Objects deployment and multi-cluster management are performed through Prism Central. The logged-in user must have access to Prism Central with appropriate permissions (e.g., Prism Central Admin role) to view Prism Element clusters and subnets in the deployment UI. If the user does not have access to Prism Central, the Cluster and Subnets drop-down lists will be empty, and multi-cluster addition will fail to list available clusters.”
Workload optimization for Files is based on which entity?
Protocol
File type
FSVM quantity
Block size
Workload optimization in Nutanix Files, part of Nutanix Unified Storage (NUS), refers to the process of tuning the Files deployment to handle specific workloads efficiently. This involves scaling resources to match the workload demands, and the primary entity for optimization is the number of File Server Virtual Machines (FSVMs).
Analysis of Options:
Option A (Protocol): Incorrect. While Nutanix Files supports multiple protocols (SMB, NFS), workload optimization is not directly based on the protocol. Protocols affect client access, but optimization focuses on resource allocation.
Option B (File type): Incorrect. File type (e.g., text, binary) is not a factor in workload optimization for Files. Optimization focuses on infrastructure resources, not the nature of the files.
Option C (FSVM quantity): Correct. Nutanix Files uses FSVMs to distribute file service workloads across the cluster. Workload optimization involves adjusting the number of FSVMs to handle the expected load, ensuring balanced performance and scalability. For example, adding more FSVMs can improve performance for high-concurrency workloads.
Option D (Block size): Incorrect. Block size is relevant for block storage (e.g., Nutanix Volumes), but Nutanix Files operates at the file level, not the block level. Workload optimization in Files does not involve block size adjustments.
Why FSVM Quantity?
FSVMs are the core entities that process file operations in Nutanix Files. Optimizing for a workload (e.g., high read/write throughput, many concurrent users) typically involves scaling the number of FSVMs to distribute the load, adding compute and memory resources as needed, or adjusting FSVM placement for better performance.
Exact Extract from Nutanix Documentation:
From the Nutanix Files Administration Guide (available on the Nutanix Portal):
“Workload optimization in Nutanix Files is achieved by adjusting the number of FSVMs in the file server. For high-performance workloads, you can scale out by adding more FSVMs to distribute the load across the cluster, ensuring optimal resource utilization and performance.”
Which configuration is required for an Objects deployment?
Configure Domain Controllers on both Prism Element and Prism Central.
Configure VPC on both Prism Element and Prism Central.
Configure a dedicated storage container on Prism Element or Prism Central.
Configure NTP servers on both Prism Element and Prism Central.
The configuration required for an Objects deployment is NTP servers on both Prism Element and Prism Central. NTP (Network Time Protocol) synchronizes the clocks of devices on a network against a reliable time source. Configuring NTP servers on both Prism Element and Prism Central ensures that time settings are consistent and accurate across the Nutanix cluster and the Objects cluster, preventing synchronization issues or errors during and after deployment. References: Nutanix Objects User Guide, page 9; Nutanix Objects Deployment Guide
Nutanix Objects can use no more than how many vCPUs for each AHV or ESXi node?
12
16
8
10
Nutanix Objects, a component of Nutanix Unified Storage (NUS), provides an S3-compatible object storage solution. It is deployed as a set of virtual machines (Object Store Service VMs) running on the Nutanix cluster’s hypervisor (AHV or ESXi). The resource allocation for these VMs, including the maximum number of vCPUs per node, is specified in the Nutanix Objects documentation to ensure optimal performance and resource utilization.
According to the official Nutanix documentation, each Object Store Service VM is limited to a maximum of 8 vCPUs per node (AHV or ESXi). This constraint ensures that the object storage service does not overburden the cluster’s compute resources, maintaining balance with other workloads.
Option C: Correct. The maximum number of vCPUs for Nutanix Objects per node is 8.
Option A (12), Option B (16), and Option D (10): Incorrect, as they exceed or do not match the documented maximum of 8 vCPUs per node.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Each Object Store Service VM deployed on an AHV or ESXi node is configured with a maximum of 8 vCPUs to ensure efficient resource utilization and performance. This limit applies per node hosting the Object Store Service.”
Additional Notes:
The vCPU limit is per Object Store Service VM on a given node, not for the entire Objects deployment. Multiple VMs may run across different nodes, but each is capped at 8 vCPUs.
The documentation does not specify different limits for AHV versus ESXi, so the 8 vCPU maximum applies universally.
Which metric is utilized when sizing a Files deployment based on performance requirements?
Quantity of SMB shares
SMB concurrent connections
NFS concurrent connections
Quantity of NFS exports
SMB concurrent connections is the metric used when sizing a Files deployment for performance: it indicates the number of active clients accessing the Files cluster via the SMB protocol, which directly drives the resources (FSVMs, CPU, memory) the deployment needs. NFS concurrent connections is also a relevant metric, but it is not the best answer, as it applies only to the NFS protocol, not SMB. The quantity of SMB shares or NFS exports does not directly affect performance, as shares and exports are logical entities that do not by themselves consume resources. References: Nutanix Files Sizing Guide
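Once a per-FSVM connection capacity is assumed, sizing by this metric reduces to simple arithmetic. The capacity figure below is a hypothetical planning placeholder, not a Nutanix-documented limit; consult the Nutanix Files sizing guidance for real per-FSVM figures.

```python
import math

def fsvms_needed(concurrent_smb_connections: int,
                 per_fsvm_capacity: int = 250) -> int:
    """Estimate the FSVM count from SMB concurrent connections.

    per_fsvm_capacity is a HYPOTHETICAL figure used for illustration.
    A Nutanix Files deployment requires a minimum of 3 FSVMs.
    """
    return max(3, math.ceil(concurrent_smb_connections / per_fsvm_capacity))

# e.g. 1000 concurrent SMB connections at 250 per FSVM -> 4 FSVMs
```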
An administrator is expanding an Objects store cluster. Which action should the administrator take to ensure the environment is configured properly prior to performing the installation?
Configure NTP on only Prism Central.
Upgrade MSP to 2.0 or later.
Upgrade Prism Element to 5.20 or later.
Configure DNS on only Prism Element.
Nutanix Objects, part of Nutanix Unified Storage (NUS), is deployed as Object Store Service VMs on a Nutanix cluster. Expanding an Objects store cluster involves adding more resources (e.g., nodes, Object Store Service VMs) to handle increased demand. Prior to expansion, the environment must meet certain prerequisites to ensure a successful installation.
Analysis of Options:
Option A (Configure NTP on only Prism Central): Incorrect. Network Time Protocol (NTP) synchronization is critical for Nutanix clusters, but it must be configured on both Prism Central and Prism Element (the cluster) to ensure consistent time across all components, including Object Store Service VMs. Configuring NTP on only Prism Central is insufficient and can lead to time synchronization issues during expansion.
Option B (Upgrade MSP to 2.0 or later): Incorrect. MSP (Microservices Platform) is a Nutanix component used for certain services, but it is not directly related to Nutanix Objects expansion. Objects relies on AOS and Prism versions, not MSP, and there is no specific MSP version requirement mentioned in Objects documentation for expansion.
Option C (Upgrade Prism Element to 5.20 or later): Correct. Nutanix Objects has specific version requirements for AOS (which runs on Prism Element) to support features and ensure compatibility during expansion. According to Nutanix documentation, AOS 5.20 or later is recommended for Objects deployments and expansions, as it includes stability improvements, bug fixes, and support for newer Objects features. Upgrading Prism Element to 5.20 or later ensures the environment is properly configured for a successful Objects store cluster expansion.
Option D (Configure DNS on only Prism Element): Incorrect. DNS configuration is important for name resolution in a Nutanix environment, but it must be configured for both Prism Element and Prism Central, as well as for the Object Store Service VMs. Configuring DNS on only Prism Element is insufficient, as Objects expansion requires proper name resolution across all components, including Prism Central for management.
Why Option C?
Expanding a Nutanix Objects store cluster requires the underlying AOS version (managed via Prism Element) to meet minimum requirements for compatibility and stability. AOS 5.20 or later includes necessary updates for Objects, making this upgrade a critical prerequisite to ensure the environment is properly configured for expansion. Other options, like NTP and DNS, are also important but require broader configuration, and MSP is not relevant in this context.
Exact Extract from Nutanix Documentation:
From the Nutanix Objects Administration Guide (available on the Nutanix Portal):
“Before expanding a Nutanix Objects store cluster, ensure that the environment meets the minimum requirements. Upgrade Prism Element to AOS 5.20 or later to ensure compatibility, stability, and support for Objects expansion features.”
What is the minimum and maximum file size limitations for Smart Tiering?
64 KiB minimum and 15 TiB maximum
128 KiB minimum and 5 TiB maximum
64 KiB minimum and 5 TiB maximum
128 KiB minimum and 13 TiB maximum
Smart Tiering is a feature that allows Files to tier data across different storage tiers based on file size and access frequency. Smart Tiering supports files with a minimum size of 64 KiB and a maximum size of 5 TiB. References: Nutanix Files Administration Guide
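The size limits above can be captured in a small eligibility check. This is an illustrative sketch only; the helper name is hypothetical, and the 64 KiB / 5 TiB bounds come from the passage above.

```python
# Binary size units.
KIB = 1024
TIB = 1024 ** 4

# Smart Tiering file-size limits, per the Nutanix Files documentation.
SMART_TIERING_MIN = 64 * KIB   # 64 KiB minimum
SMART_TIERING_MAX = 5 * TIB    # 5 TiB maximum

def eligible_for_tiering(file_size_bytes: int) -> bool:
    """Return True if a file's size falls within the Smart Tiering limits."""
    return SMART_TIERING_MIN <= file_size_bytes <= SMART_TIERING_MAX
```

A 32 KiB file falls below the minimum and is not tiered, while a 1 TiB file falls within the range.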
An administrator is tasked with creating an Objects store with the following settings:
• Medium Performance (around 10,000 requests per second)
• 10 TiB capacity
• Versioning disabled
• Hosted on an AHV cluster
Immediately after creation, the administrator is asked to change the name of the Objects store.
How will the administrator achieve this request?
Enable versioning and then rename the Object store, disable versioning
The Objects store can only be renamed if hosted on ESXi.
Delete and recreate a new Objects store with the updated name
The administrator can achieve this request by deleting and recreating a new Objects store with the updated name. Objects is a feature that allows users to create and manage object storage clusters on a Nutanix cluster. Objects clusters can provide S3-compatible access to buckets and objects for various applications and users. Objects clusters can be created and configured in Prism Central. However, once an Objects cluster is created, its name cannot be changed or edited. Therefore, the only way to change the name of an Objects cluster is to delete the existing cluster and create a new cluster with the updated name. References: Nutanix Objects User Guide, page 9; Nutanix Objects Solution Guide, page 8
Which scenario is causing the alert and needs to be addressed to allow the entities to be protected?
One or more VMs or Volume Groups belonging to the Consistency Group is part of multiple Recovery Plans configured with a Witness.
One or more VMs or Volume Groups belonging to the Consistency Group may have been deleted
The logical timestamp for one or more of the Volume Groups is not consistent between clusters
One or more VMs or Volume Groups belonging to the Consistency Group contains state metadata
The scenario that is causing the alert and needs to be addressed to allow the entities to be protected is that one or more VMs or Volume Groups belonging to the Consistency Group may have been deleted. A Consistency Group is a logical grouping of VMs or Volume Groups that are protected together by a Protection Policy. A Protection Policy is a set of rules that defines how often snapshots are taken, how long they are retained, and where they are replicated for disaster recovery purposes. If one or more VMs or Volume Groups belonging to the Consistency Group are deleted, the Protection Policy will fail to protect them and generate an alert with the code AI303551 – VolumeGroupProtectionFailed. References: Nutanix Volumes Administration Guide, page 29; Nutanix Volumes Troubleshooting Guide
A company uses Linux and Windows workstations. The administrator is evaluating solutions for the company's file storage needs.
The solution should support these requirements:
• Distributed File System
• Active Directory integrated
• Scale out architecture
Mine
Objects
Volumes
Files
The solution that meets the company’s requirements for their file storage needs is Files. Files is a feature that allows users to create and manage file server instances (FSIs) on a Nutanix cluster. FSIs can provide SMB and NFS access to file shares and exports for different types of clients. Files supports these requirements:
Distributed File System: Files uses a distributed file system that spans across multiple FSVMs (File Server VMs), which improves scalability, performance, and availability.
Active Directory integrated: Files can integrate with Active Directory for authentication and authorization of SMB clients and multiprotocol NFS clients.
Scale out architecture: Files can scale out by adding more FSVMs to an existing FSI or creating new FSIs on the same or different clusters. References: Nutanix Files Administration Guide, page 27; Nutanix Files Solution Guide, page 6
An administrator is looking for a tool that includes these features:
• Permission Denials
• Top 5 Active Users
• Top 5 Accessed Files
• File Distribution by Type
Which Nutanix tool should the administrator choose?
File Server Manager
Prism Central
File Analytics
Files Console
The tool that includes these features is File Analytics. File Analytics is a feature that provides insights into the usage and activity of file data stored on Files. File Analytics consists of a File Analytics VM (FAVM) that runs on a Nutanix cluster and communicates with the File Server VMs (FSVMs) that host the file shares. File Analytics can display various reports and dashboards that include these features:
Permission Denials: This report shows the number of permission denied events for file operations, such as read, write, delete, etc., along with the user, file, share, and server details.
Top 5 Active Users: This dashboard shows the top five users who performed the most file operations in a given time period, along with the number and type of operations.
Top 5 Accessed Files: This dashboard shows the top five files that were accessed the most in a given time period, along with the number of accesses and the file details.
File Distribution by Type: This dashboard shows the distribution of files by their type or extension, such as PDF, DOCX, JPG, etc., along with the number and size of files for each type. References: Nutanix Files Administration Guide, page 93; Nutanix File Analytics User Guide
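The kinds of aggregations these dashboards surface (Top 5 Active Users, File Distribution by Type) can be sketched over a list of audit events. The event tuples below are a simplified assumption for illustration, not the actual File Analytics schema.

```python
from collections import Counter

# Hypothetical audit events: (user, operation, file_path).
events = [
    ("alice", "read",   "/share1/report.pdf"),
    ("bob",   "write",  "/share1/data.docx"),
    ("alice", "read",   "/share2/photo.jpg"),
    ("carol", "delete", "/share1/old.pdf"),
    ("alice", "write",  "/share1/notes.docx"),
]

# "Top 5 Active Users": rank users by number of file operations.
top_users = Counter(user for user, _, _ in events).most_common(5)

# "File Distribution by Type": count files per extension.
by_type = Counter(path.rsplit(".", 1)[-1].upper() for _, _, path in events)
```

With the sample events, `top_users` ranks alice first with three operations, and `by_type` counts two PDF, two DOCX, and one JPG entry, mirroring the per-type distribution the dashboard shows.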
Copyright © 2014-2025 Certensure. All Rights Reserved