Which formats does Snowflake store unstructured data in? (Choose two.)
GeoJSON
Array
XML
Object
BLOB
Snowflake supports storing unstructured data and provides native support for semi-structured file formats such as JSON, Avro, Parquet, ORC, and XML. GeoJSON, being a type of JSON, and XML are among the formats that can be stored in Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake edition enables data sharing only through Snowflake Support?
Virtual Private Snowflake
Business Critical
Enterprise
Standard
The Snowflake edition that enables data sharing only through Snowflake Support is the Virtual Private Snowflake (VPS). By default, VPS does not permit data sharing outside of the VPS environment, but it can be enabled through Snowflake Support.
Which Snowflake object can be accessed in the FROM clause of a query, returning a set of rows having one or more columns?
A User-Defined Table Function (UDTF)
A Scalar User Function (UDF)
A stored procedure
A task
In Snowflake, a User-Defined Table Function (UDTF) can be accessed in the FROM clause of a query. UDTFs return a set of rows with one or more columns, which can be queried like a regular table.
Which kind of Snowflake table stores file-level metadata for each file in a stage?
Directory
External
Temporary
Transient
The kind of Snowflake table that stores file-level metadata for each file in a stage is a directory table. A directory table is an implicit object layered on a stage and stores file-level metadata about the data files in the stage.
Which statement MOST accurately describes clustering in Snowflake?
The database ACCOUNTADMIN must define the clustering methodology for each Snowflake table.
Clustering is the way data is grouped together and stored within Snowflake micro-partitions.
The clustering key must be included in the COPY command when loading data into Snowflake.
Clustering can be disabled within a Snowflake account.
Clustering in Snowflake refers to the organization of data within micro-partitions, which are contiguous units of storage within Snowflake tables. Clustering keys can be defined to co-locate similar rows in the same micro-partitions, improving scan efficiency and query performance.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
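As a sketch, a clustering key can be defined on an existing table and its effectiveness inspected (the table and column names here are hypothetical):

```sql
-- Define a clustering key so rows with similar dates are co-located
-- in the same micro-partitions (hypothetical table/column names)
ALTER TABLE sales CLUSTER BY (sale_date);

-- Report how well the table is currently clustered on that key
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date)');
```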
Which commands should be used to grant the privilege allowing a role to select data from all current tables and any tables that will be created later in a schema? (Choose two.)
grant USAGE on all tables in schema DB1.SCHEMA to role MYROLE;
grant USAGE on future tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on all tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on future tables in schema DB1.SCHEMA to role MYROLE;
grant SELECT on all tables in database DB1 to role MYROLE;
grant SELECT on future tables in database DB1 to role MYROLE;
To grant a role the privilege to select data from all current and future tables in a schema, two separate commands are needed. The first command grants the SELECT privilege on all existing tables within the schema, and the second command grants the SELECT privilege on all tables that will be created in the future within the same schema.
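A minimal sketch combining the two required statements, using the role and schema names from the options:

```sql
-- Covers all tables that exist in the schema today
GRANT SELECT ON ALL TABLES IN SCHEMA DB1.SCHEMA TO ROLE MYROLE;

-- Covers tables created in the schema in the future
GRANT SELECT ON FUTURE TABLES IN SCHEMA DB1.SCHEMA TO ROLE MYROLE;
```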
How do Snowflake data providers share data that resides in different databases?
External tables
Secure views
Materialized views
User-Defined Functions (UDFs)
Snowflake data providers can share data residing in different databases through secure views. Secure views allow for the referencing of objects such as schemas, tables, and other views contained in one or more databases, as long as those databases belong to the same account. This enables providers to share data securely and efficiently with consumers. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Using variables in Snowflake is denoted by using which SQL character?
@
&
$
#
In Snowflake, variables are denoted by a dollar sign ($). Variables can be used in SQL statements where a literal constant is allowed, and they must be prefixed with a $ sign to distinguish them from bind values and column names.
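A short sketch of defining and referencing a session variable (the variable and table names are hypothetical):

```sql
-- Define a session variable
SET min_qty = 10;

-- Reference it with the $ prefix, anywhere a literal constant is allowed
SELECT $min_qty;
SELECT * FROM orders WHERE qty > $min_qty;  -- hypothetical table
```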
What is the MINIMUM Snowflake edition required to use the periodic rekeying of micro-partitions?
Enterprise
Business Critical
Standard
Virtual Private Snowflake
Periodic rekeying of micro-partitions is a feature that requires the Enterprise Edition of Snowflake or higher. This feature is part of Snowflake's comprehensive approach to encryption key management, ensuring data security through regular rekeying. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What service is provided as an integrated Snowflake feature to enhance Multi-Factor Authentication (MFA) support?
Duo Security
OAuth
Okta
Single Sign-On (SSO)
Snowflake provides Multi-Factor Authentication (MFA) support as an integrated feature, powered by the Duo Security service. This service is managed completely by Snowflake, and users do not need to sign up separately with Duo.
Where is Snowflake metadata stored?
Within the data files
In the virtual warehouse layer
In the cloud services layer
In the remote storage layer
Snowflake’s architecture is divided into three layers: database storage, query processing, and cloud services. The metadata, which includes information about the structure of the data, the SQL operations performed, and the service-level policies, is stored in the cloud services layer. This layer acts as the brain of the Snowflake environment, managing metadata, query optimization, and transaction coordination.
Which query contains a Snowflake hosted file URL in a directory table for a stage named bronzestage?
list @bronzestage;
select * from directory(@bronzestage);
select metadata$filename from @bronzestage;
select * from table(information_schema.stage_directory_file_registration_history(
stage_name=>'bronzestage'));
The query that contains a Snowflake hosted file URL in a directory table for a stage named bronzestage is select * from directory(@bronzestage). This query retrieves a list of all files on the stage along with metadata, including the Snowflake file URL for each file.
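As a sketch, a directory table must first be enabled on the stage; the FILE_URL column then holds the Snowflake-hosted URL for each file (stage name as in the question):

```sql
-- Enable the directory table when creating the stage
CREATE OR REPLACE STAGE bronzestage DIRECTORY = (ENABLE = TRUE);

-- FILE_URL contains the Snowflake-hosted file URL for each staged file
SELECT relative_path, size, file_url
FROM DIRECTORY(@bronzestage);
```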
What privilege should a user be granted to change permissions for new objects in a managed access schema?
Grant the OWNERSHIP privilege on the schema.
Grant the OWNERSHIP privilege on the database.
Grant the MANAGE GRANTS global privilege.
Grant ALL privileges on the schema.
To change permissions for new objects in a managed access schema, a user should be granted the MANAGE GRANTS global privilege. This privilege allows the user to manage access control through grants on all securable objects within Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What happens to the shared objects for users in a consumer account from a share, once a database has been created in that account?
The shared objects are transferred.
The shared objects are copied.
The shared objects become accessible.
The shared objects can be re-shared.
Once a database has been created in a consumer account from a share, the shared objects become accessible to users in that account. The shared objects are not transferred or copied; they remain in the provider’s account and are accessible to the consumer account
What does Snowflake's search optimization service support?
External tables
Materialized views
Tables and views that are not protected by row access policies
Casts on table columns (except for fixed-point numbers cast to strings)
Snowflake’s search optimization service supports tables and views that are not protected by row access policies. It is designed to improve the performance of certain types of queries on tables, including selective point lookup queries and queries on fields in VARIANT, OBJECT, and ARRAY (semi-structured) columns.
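A minimal sketch of enabling the service on a table (the table and column names are hypothetical; the feature requires Enterprise Edition or higher):

```sql
-- Enable search optimization for the whole table (hypothetical table)
ALTER TABLE events ADD SEARCH OPTIMIZATION;

-- Selective point-lookup queries like this can then benefit from it
SELECT * FROM events WHERE event_id = '8c3f9a';
```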
What can a Snowflake user do with the information included in the details section of a Query Profile?
Determine the total duration of the query.
Determine the role of the user who ran the query.
Determine the source system that the queried table is from.
Determine if the query was on structured or semi-structured data.
The details section of a Query Profile in Snowflake provides users with various statistics and information about the execution of a query. One of the key pieces of information that can be determined from this section is the total duration of the query, which helps in understanding the performance and identifying potential bottlenecks. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What can a Snowflake user do in the Admin area of Snowsight?
Analyze query performance.
Write queries and execute them.
Provide an overview of the listings in the Snowflake Marketplace.
Connect to Snowflake partners to explore extended functionality.
In the Admin area of Snowsight, users can analyze query performance, manage Snowflake warehouses, set up and view details about resource monitors, manage users and roles, and administer Snowflake accounts in their organization.
At what levels can a resource monitor be configured? (Select TWO).
Account
Database
Organization
Schema
Virtual warehouse
Resource monitors in Snowflake can be configured at the account and virtual warehouse levels. They are used to track credit usage and control costs associated with running virtual warehouses. When certain thresholds are reached, resource monitors can trigger actions such as sending alerts or suspending warehouses to prevent excessive credit consumption. References: [COF-C02] SnowPro Core Certification Exam Study Guide
If queries start to queue in a multi-cluster virtual warehouse, an additional compute cluster starts immediately under what setting?
Auto-scale mode
Maximized mode
Economy scaling policy
Standard scaling policy
In Snowflake, when queries begin to queue in a multi-cluster virtual warehouse, an additional compute cluster starts immediately if the warehouse is set to auto-scale mode. This mode allows Snowflake to automatically add or resume additional clusters as soon as the workload increases, and similarly, shut down or pause the additional clusters when the load decreases
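A sketch of a multi-cluster warehouse in auto-scale mode; setting different minimum and maximum cluster counts is what enables auto-scaling (the warehouse name is hypothetical):

```sql
-- MIN_CLUSTER_COUNT < MAX_CLUSTER_COUNT puts the warehouse in
-- auto-scale mode: clusters start as queries queue and stop as load drops
CREATE WAREHOUSE wh_reporting
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD';
```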
Which of the following activities consume virtual warehouse credits in the Snowflake environment? (Choose two.)
Caching query results
Running EXPLAIN and SHOW commands
Cloning a database
Running a custom query
Running COPY commands
Running a custom query and running COPY commands consume virtual warehouse credits in the Snowflake environment. Both activities require a running virtual warehouse to execute, so credits are charged for the compute resources used. In contrast, EXPLAIN and SHOW commands are serviced by the cloud services layer and do not require a running warehouse. References: [COF-C02] SnowPro Core Certification Exam Study Guide
In which Snowflake layer does Snowflake reorganize data into its internal optimized, compressed, columnar format?
Cloud Services
Database Storage
Query Processing
Metadata Management
Snowflake reorganizes data into its internal optimized, compressed, columnar format in the Database Storage layer. This process is part of how Snowflake manages data storage, ensuring efficient data retrieval and query performance
What effect does WAIT_FOR_COMPLETION = TRUE have when running an ALTER WAREHOUSE command and changing the warehouse size?
The warehouse size does not change until all queries currently running in the warehouse have completed.
The warehouse size does not change until all queries currently in the warehouse queue have completed.
The warehouse size does not change until the warehouse is suspended and restarted.
It does not return from the command until the warehouse has finished changing its size.
The WAIT_FOR_COMPLETION = TRUE parameter in an ALTER WAREHOUSE command ensures that the command does not return until the warehouse has completed resizing. This means that the command will wait until all the necessary compute resources have been provisioned and the warehouse size has been changed. References: [COF-C02] SnowPro Core Certification Exam Study Guide
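A short sketch of a blocking resize (the warehouse name is hypothetical):

```sql
-- The statement does not return until the resize has completed
ALTER WAREHOUSE wh_etl SET
  WAREHOUSE_SIZE = 'XLARGE'
  WAIT_FOR_COMPLETION = TRUE;
```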
Query parsing and compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?
Cloud services layer
Compute layer
Storage layer
Cloud agnostic layer
Query parsing and compilation in Snowflake occur within the cloud services layer. This layer is responsible for various management tasks, including query compilation and optimization
What is the recommended way to change the existing file format type in my_format from CSV to JSON?
ALTER FILE FORMAT my_format SET TYPE=JSON;
ALTER FILE FORMAT my_format SWAP TYPE WITH JSON;
CREATE OR REPLACE FILE FORMAT my_format TYPE=JSON;
REPLACE FILE FORMAT my_format TYPE=JSON;
The TYPE of an existing file format cannot be changed with ALTER FILE FORMAT, which only supports renaming the format and setting format type options or a comment. The recommended way to change the type from CSV to JSON is therefore to recreate the format with CREATE OR REPLACE FILE FORMAT my_format TYPE=JSON;.
What is the MAXIMUM size limit for a record of a VARIANT data type?
8MB
16MB
32MB
128MB
The maximum size limit for a record of a VARIANT data type in Snowflake is 16MB. This allows for storing semi-structured data types like JSON, Avro, ORC, Parquet, or XML within a single VARIANT column.
Which parameter prevents streams on tables from becoming stale?
MAX_DATA_EXTENSION_TIME_IN_DAYS
MIN_DATA_RETENTION_TIME_IN_DAYS
LOCK_TIMEOUT
STALE_AFTER
The parameter that prevents streams on tables from becoming stale is MAX_DATA_EXTENSION_TIME_IN_DAYS. This parameter specifies the maximum number of days for which Snowflake can extend the data retention period for the table to prevent streams on the table from becoming stale.
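A sketch of setting the parameter on a source table and checking stream freshness (the table and stream names are hypothetical):

```sql
-- Allow Snowflake to extend change-data retention for up to 30 days
-- so dependent streams do not become stale (hypothetical table name)
ALTER TABLE src_table SET MAX_DATA_EXTENSION_TIME_IN_DAYS = 30;

-- SHOW STREAMS reports a STALE_AFTER timestamp for each stream
SHOW STREAMS LIKE 'src_stream%';
```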
Which statement describes how Snowflake supports reader accounts?
A reader account can consume data from the provider account that created it and combine it with its own data.
A consumer needs to become a licensed Snowflake customer as data sharing is only supported between Snowflake accounts.
The users in a reader account can query data that has been shared with the reader account and can perform DML tasks.
The SHOW MANAGED ACCOUNTS command will view all the reader accounts that have been created for an account.
Snowflake supports reader accounts, which allow data providers to share data with consumers who are not Snowflake customers. Reader accounts are created and managed by the provider, and their users can query data shared with the account but cannot perform DML tasks. The SHOW MANAGED ACCOUNTS command lists all the reader accounts that have been created for an account. References: Introduction to Secure Data Sharing | Snowflake Documentation.
Which statements reflect key functionalities of a Snowflake Data Exchange? (Choose two.)
If an account is enrolled with a Data Exchange, it will lose its access to the Snowflake Marketplace.
A Data Exchange allows groups of accounts to share data privately among the accounts.
A Data Exchange allows accounts to share data with third, non-Snowflake parties.
Data Exchange functionality is available by default in accounts using the Enterprise edition or higher.
The sharing of data in a Data Exchange is bidirectional. An account can be a provider for some datasets and a consumer for others.
A Snowflake Data Exchange allows groups of accounts to share data privately among the accounts (B), and it supports bidirectional sharing, meaning an account can be both a provider and a consumer of data (E). This facilitates secure and governed data collaboration within a selected group.
Which parameter can be used to instruct a COPY command to verify data files instead of loading them into a specified table?
STRIP_NULL_VALUES
SKIP_BYTE_ORDER_MARK
REPLACE_INVALID_CHARACTERS
VALIDATION_MODE
The VALIDATION_MODE parameter can be used with the COPY command to verify data files without loading them into the specified table. This parameter allows users to check for errors in the files
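A minimal sketch of a validation-only COPY (the table and stage names are hypothetical):

```sql
-- Check the staged files for errors without loading any rows
COPY INTO my_table
FROM @my_stage
VALIDATION_MODE = 'RETURN_ERRORS';
```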
How can a user improve the performance of a single large complex query in Snowflake?
Scale up the virtual warehouse.
Scale out the virtual warehouse.
Enable standard warehouse scaling.
Enable economy warehouse scaling.
Scaling up the virtual warehouse in Snowflake involves increasing the compute resources available for a single warehouse, which can improve the performance of large and complex queries by providing more CPU and memory resources. References: Based on general cloud data warehousing knowledge as of 2021.
Which languages require that User-Defined Function (UDF) handlers be written inline? (Select TWO).
Java
Javascript
Scala
Python
SQL
User-Defined Function (UDF) handlers must be written inline for Javascript and SQL. These languages allow the UDF logic to be included directly within the SQL statement that creates the UDF.
Which stages are used with the Snowflake PUT command to upload files from a local file system? (Choose three.)
Schema Stage
User Stage
Database Stage
Table Stage
External Named Stage
Internal Named Stage
The Snowflake PUT command is used to upload files from a local file system to Snowflake stages, specifically the user stage, table stage, and internal named stage. These stages are where the data files are temporarily stored before being loaded into Snowflake tables
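A sketch of PUT against each internal stage type; PUT cannot target external stages (the file path, table, and stage names are hypothetical):

```sql
PUT file:///tmp/data.csv @~;           -- user stage
PUT file:///tmp/data.csv @%my_table;   -- table stage
PUT file:///tmp/data.csv @my_stage;    -- internal named stage
```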
Which URL type allows users to access unstructured data without authenticating into Snowflake or passing an authorization token?
Pre-signed URL
Scoped URL
Signed URL
File URL
Pre-signed URLs in Snowflake allow users to access unstructured data without the need for authentication into Snowflake or passing an authorization token. These URLs are open and can be directly accessed or downloaded by any user or application, making them ideal for business intelligence applications or reporting tools that need to display unstructured file contents
Which Snowflake object helps evaluate virtual warehouse performance impacted by query queuing?
Resource monitor
ACCOUNT_USAGE.QUERY_HISTORY
INFORMATION_SCHEMA.WAREHOUSE_LOAD_HISTORY
INFORMATION_SCHEMA.WAREHOUSE_METERING_HISTORY
The Snowflake object that helps evaluate virtual warehouse performance impacted by query queuing is INFORMATION_SCHEMA.WAREHOUSE_LOAD_HISTORY. This view provides historical data about the load on a warehouse, including the average number of queries that were running or queued within a specific interval, which can be used to assess performance and identify potential issues with query queuing.
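A sketch of querying the table function for the last hour of load data (the warehouse name is hypothetical):

```sql
-- AVG_RUNNING vs. AVG_QUEUED_LOAD columns reveal queuing pressure
SELECT *
FROM TABLE(INFORMATION_SCHEMA.WAREHOUSE_LOAD_HISTORY(
  DATE_RANGE_START => DATEADD('hour', -1, CURRENT_TIMESTAMP()),
  WAREHOUSE_NAME   => 'WH_REPORTING'));
```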
Which features could be used to improve the performance of queries that return a small subset of rows from a large table? (Select TWO).
Search optimization service
Automatic clustering
Row access policies
Multi-cluster virtual warehouses
Secure views
The search optimization service and automatic clustering are features that can improve the performance of queries returning a small subset of rows from a large table. The search optimization service is designed for low-latency point lookup queries, while automatic clustering organizes data in micro-partitions based on specific dimensions to reduce the amount of data scanned during queries.
How does a Snowflake stored procedure compare to a User-Defined Function (UDF)?
A single executable statement can call only two stored procedures. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call only one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call multiple stored procedures. In contrast, multiple SQL statements can call the same UDFs.
Multiple executable statements can call more than one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
In Snowflake, stored procedures and User-Defined Functions (UDFs) have different invocation patterns within SQL:
Option B is correct: A single executable statement can call only one stored procedure due to the procedural and potentially transactional nature of stored procedures. In contrast, a single SQL statement can call multiple UDFs because UDFs are designed to operate more like functions in traditional programming, where they return a value and can be embedded within SQL queries. References: Snowflake documentation comparing the operational differences between stored procedures and UDFs.
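The contrast can be sketched as follows (the procedure, UDF, and table names are hypothetical):

```sql
-- One executable statement, exactly one stored procedure call
CALL my_procedure('arg');

-- One SQL statement can invoke several UDFs at once
SELECT my_udf(col1), my_other_udf(col2)
FROM my_table;
```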
What will happen if a Snowflake user increases the size of a suspended virtual warehouse?
The provisioning of new compute resources for the warehouse will begin immediately.
The warehouse will remain suspended but new resources will be added to the query acceleration service.
The provisioning of additional compute resources will be in effect when the warehouse is next resumed.
The warehouse will resume immediately and start to share the compute load with other running virtual warehouses.
When a Snowflake user increases the size of a suspended virtual warehouse, the changes to compute resources are queued but do not take immediate effect. The provisioning of additional compute resources occurs only when the warehouse is resumed. This ensures that resources are allocated efficiently, aligning with Snowflake's commitment to cost-effective and on-demand scalability.
References:
Snowflake Documentation: Virtual Warehouses
By default, which role has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
By default, the ACCOUNTADMIN role in Snowflake has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function. This function is used to set global account parameters, impacting the entire Snowflake account's configuration and behavior. The ACCOUNTADMIN role is the highest-level administrative role in Snowflake, granting the necessary privileges to manage account settings and security features, including the use of global account parameters.
References:
Snowflake Documentation: SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER
What criteria does Snowflake use to determine the current role when initiating a session? (Select TWO).
If a role was specified as part of the connection and that role has been granted to the Snowflake user, the specified role becomes the current role.
If no role was specified as part of the connection and a default role has been defined for the Snowflake user, that role becomes the current role.
If no role was specified as part of the connection and a default role has not been set for the Snowflake user, the session will not be initiated and the log in will fail.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, it will be ignored and the default role will become the current role.
If a role was specified as part of the connection and that role has not been granted to the Snowflake user, the role is automatically granted and it becomes the current role.
When initiating a session in Snowflake, the system determines the current role based on the user's connection details and role assignments. If a user specifies a role during the connection, and that role is already granted to them, Snowflake sets it as the current role for the session. Alternatively, if no role is specified during the connection, but the user has a default role assigned, Snowflake will use this default role as the current session role. These mechanisms ensure that users operate within their permissions, enhancing security and governance within Snowflake environments.
References:
Snowflake Documentation: Understanding Roles
Which data formats are supported by Snowflake when unloading semi-structured data? (Select TWO).
Binary file in Avro
Binary file in Parquet
Comma-separated JSON
Newline Delimited JSON
Plain text file containing XML elements
Snowflake supports a variety of file formats for unloading semi-structured data, among which Parquet and Newline Delimited JSON (NDJSON) are two widely used formats.
B. Binary file in Parquet: Parquet is a columnar storage file format optimized for large-scale data processing and analysis. It is especially suited for complex nested data structures.
D. Newline Delimited JSON (NDJSON): This format represents JSON records separated by newline characters, facilitating the storage and processing of multiple, separate JSON objects in a single file.
These formats are chosen for their efficiency and compatibility with data analytics tools and ecosystems, enabling seamless integration and processing of exported data.
References:
Snowflake Documentation: Data Unloading
Which statement accurately describes Snowflake's architecture?
It uses a local data repository for all compute nodes in the platform.
It is a blend of shared-disk and shared-everything database architectures.
It is a hybrid of traditional shared-disk and shared-nothing database architectures.
It reorganizes loaded data into internal optimized, compressed, and row-based format.
Snowflake's architecture is unique in that it combines elements of both traditional shared-disk and shared-nothing database architectures. This hybrid approach allows Snowflake to offer the scalability and performance benefits of a shared-nothing architecture (with compute and storage separated) while maintaining the simplicity and flexibility of a shared-disk architecture in managing data across all nodes in the system. This results in an architecture that provides on-demand scalability, both vertically and horizontally, without sacrificing performance or data cohesion.
References:
Snowflake Documentation: Snowflake Architecture
Regardless of which notation is used, what are considerations for writing the column name and element names when traversing semi-structured data?
The column name and element names are both case-sensitive.
The column name and element names are both case-insensitive.
The column name is case-sensitive but element names are case-insensitive.
The column name is case-insensitive but element names are case-sensitive.
When querying semi-structured data in Snowflake, the behavior towards case sensitivity is distinct between column names and the names of elements within the semi-structured data. Column names follow the general SQL norm of being case-insensitive, meaning you can reference them in any case without affecting the query. However, element names within JSON, XML, or other semi-structured data are case-sensitive. This distinction is crucial for accurate data retrieval and manipulation in Snowflake, especially when working with JSON objects where the case of keys can significantly alter the outcome of queries.
References:
Snowflake Documentation: Querying Semi-structured Data
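The distinction can be sketched with a VARIANT column (the table, column, and key names are hypothetical):

```sql
-- The column name (src / SRC) is case-insensitive,
-- but the JSON keys "Name" and "name" are two different keys
SELECT src:Name FROM people;   -- matches key "Name"
SELECT SRC:name FROM people;   -- matches key "name"
```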
What type of function returns one value for each invocation?
Aggregate
Scalar
Table
Window
Scalar functions in Snowflake (and SQL in general) are designed to return a single value for each invocation. They operate on a single value and return a single result, making them suitable for a wide range of data transformations and calculations within queries.
References:
Snowflake Documentation: Functions
When unloading data, which file format preserves the data values for floating-point number columns?
Avro
CSV
JSON
Parquet
When unloading data, the Parquet file format is known for its efficiency in preserving the data values for floating-point number columns. Parquet is a columnar storage file format that offers high compression ratios and efficient data encoding schemes. It is especially effective for floating-point data, as it maintains high precision and supports efficient querying and analysis.
References:
Snowflake Documentation: Using the Parquet File Format for Unloading Data
What is the default value in the Snowflake Web Interface (UI) for auto-suspending a Virtual Warehouse?
1 minute
5 minutes
10 minutes
15 minutes
The default value for auto-suspending a Virtual Warehouse in the Snowflake Web Interface (UI) is 10 minutes. This setting helps manage compute costs by automatically suspending warehouses that are not in use, ensuring that compute resources are efficiently allocated and not wasted on idle warehouses.
References:
Snowflake Documentation: Virtual Warehouses
Which command should be used to unload all the rows from a table into one or more files in a named stage?
COPY INTO
GET
INSERT INTO
PUT
To unload data from a table into one or more files in a named stage, the COPY INTO <location> command should be used. This command exports the result of a query, such as selecting all rows from a table, into files stored in the specified stage. The COPY INTO command is versatile, supporting various file formats and compression options for efficient data unloading.
References:
Snowflake Documentation: COPY INTO Location
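A minimal unload sketch (the table, stage, and path names are hypothetical):

```sql
-- Unload all rows of the table into gzip-compressed CSV files
-- under a path in a named stage
COPY INTO @my_stage/unload/
FROM my_table
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP');
```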
What are characteristics of Snowsight worksheets? (Select TWO.)
Worksheets can be grouped under folders, and folders of folders.
Each worksheet is a unique Snowflake session.
Users are limited to running only one query on a worksheet.
The Snowflake session ends when a user switches worksheets.
Users can import worksheets and share them with other users.
Characteristics of Snowsight worksheets in Snowflake include:
A. Worksheets can be grouped under folders, and a folder of folders: This organizational feature allows users to efficiently manage and categorize their worksheets within Snowsight, Snowflake's web-based UI, enhancing the user experience by keeping related worksheets together.
E. Users can import worksheets and share them with other users: Snowsight supports the sharing of worksheets among users, fostering collaboration by allowing users to share queries, analyses, and findings. This feature is crucial for collaborative data exploration and analysis workflows.
References:
Snowflake Documentation: Snowsight (UI for Snowflake)
What activities can a user with the ORGADMIN role perform? (Select TWO).
Create an account for an organization.
Edit the account data for an organization.
Delete the account data for an organization.
View usage information for all accounts in an organization.
Select all the data in tables for all accounts in an organization.
The ORGADMIN role in Snowflake is an organizational-level role that provides administrative capabilities across the entire organization, rather than being limited to a single Snowflake account. Users with this role can:
A. Create an account for an organization: The ORGADMIN role has the privilege to create new Snowflake accounts within the organization, allowing for the expansion and management of the organization's resources.
D. View usage information for all accounts in an organization: This role also has access to comprehensive usage and activity data across all accounts within the organization. This is crucial for monitoring, cost management, and optimization at the organizational level.
References:
Snowflake Documentation: Understanding Role-Based Access Control
Which function is used to convert rows in a relational table to a single VARIANT column?
ARRAY_AGG
OBJECT_AGG
ARRAY_CONSTRUCT
OBJECT_CONSTRUCT
The OBJECT_CONSTRUCT function in Snowflake is used to convert rows in a relational table into a single VARIANT column that represents each row as a JSON object. This function dynamically creates a JSON object from a list of key-value pairs, where each key is a column name and each value is the corresponding column value for a row. This is particularly useful for aggregating and transforming structured data into semi-structured JSON format for further processing or analysis.
References:
Snowflake Documentation: Semi-structured Data Functions
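A short sketch of collapsing each row into one JSON object (the table name is hypothetical):

```sql
-- OBJECT_CONSTRUCT(*) builds a JSON object per row, with column names
-- as keys and column values as values, in a single VARIANT column
SELECT OBJECT_CONSTRUCT(*) AS row_json
FROM employees;
```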
What is the Fail-safe retention period for transient and temporary tables?
0 days
1 day
7 days
90 days
The Fail-safe retention period for transient and temporary tables in Snowflake is 0 days. Fail-safe is a feature designed to protect data against accidental loss or deletion by retaining historical data for a period after its Time Travel retention period expires. However, transient and temporary tables, which are designed for temporary or short-term storage and operations, do not have a Fail-safe period. Once the data is deleted or the table is dropped, it cannot be recovered.
References:
Snowflake Documentation: Understanding Fail-safe
Which command removes a role from another role or a user in Snowflake?
ALTER ROLE
REVOKE ROLE
USE ROLE
USE SECONDARY ROLES
The REVOKE ROLE command is used to remove a role from another role or a user in Snowflake. This command is part of Snowflake's role-based access control system, allowing administrators to manage permissions and access to database objects efficiently by adding or removing roles from users or other roles.
References:
Snowflake Documentation: REVOKE ROLE
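A sketch of both forms of the command (the role and user names are placeholders):

```sql
-- Remove a role from a user
REVOKE ROLE analyst FROM USER jsmith;

-- Remove a role from another role, breaking that link in the role hierarchy
REVOKE ROLE analyst FROM ROLE manager;
```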
Which SQL command can be used to verify the privileges that are granted to a role?
SHOW GRANTS ON ROLE
SHOW ROLES
SHOW GRANTS TO ROLE
SHOW GRANTS FOR ROLE
To verify the privileges that have been granted to a specific role in Snowflake, the correct SQL command is SHOW GRANTS TO ROLE <Role Name>. This command lists all the privileges granted to the specified role, including access to schemas, tables, and other database objects. This is a useful command for administrators and users with sufficient privileges to audit and manage role permissions within the Snowflake environment.
References:
Snowflake Documentation: SHOW GRANTS
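For illustration (the role name is a placeholder):

```sql
-- List every privilege granted to the role
SHOW GRANTS TO ROLE analyst;

-- For contrast: list the users and roles to which the role has been granted
SHOW GRANTS OF ROLE analyst;
```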
How does Snowflake reorganize data when it is loaded? (Select TWO).
Binary format
Columnar format
Compressed format
Raw format
Zipped format
When data is loaded into Snowflake, it undergoes a reorganization process where the data is stored in a columnar format and compressed. The columnar storage format enables efficient querying and data retrieval, as it allows for reading only the necessary columns for a query, thereby reducing IO operations. Additionally, Snowflake uses advanced compression techniques to minimize storage costs and improve performance. This combination of columnar storage and compression is key to Snowflake's data warehousing capabilities.
References:
Snowflake Documentation: Data Storage and Organization
What is used to denote a pre-computed data set derived from a SELECT query specification and stored for later use?
View
Secure view
Materialized view
External table
A materialized view in Snowflake denotes a pre-computed data set derived from a SELECT query specification and stored for later use. Unlike standard views, which dynamically compute the data each time the view is accessed, materialized views store the result of the query at the time it is executed, thereby speeding up access to the data, especially for expensive aggregations on large datasets.
References:
Snowflake Documentation: Materialized Views
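A minimal sketch (the sales table and view name are placeholders; note that materialized views are restricted to a single base table):

```sql
-- Pre-compute an aggregate over a single table; Snowflake maintains the
-- stored result automatically as the base table changes.
CREATE MATERIALIZED VIEW store_totals AS
    SELECT store_id, SUM(amount) AS total_amount
    FROM sales
    GROUP BY store_id;
```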
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
1 hour
2 hours
4 hours
8 hours
A cached MFA token is valid for up to four hours. When the ALLOW_CLIENT_MFA_CACHING parameter is enabled, supported Snowflake drivers cache the MFA token so that users are not prompted on every connection attempt within that window.
References:
Snowflake Documentation: Multi-Factor Authentication (MFA) — https://docs.snowflake.com/en/user-guide/security-mfa#using-mfa-token-caching-to-minimize-the-number-of-prompts-during-authentication-optional
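Enabling the parameter is a one-line, account-level change (requires ACCOUNTADMIN):

```sql
-- Once enabled, supported drivers can cache the MFA token for up to four hours.
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;
```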
The VALIDATE table function has which parameter as an input argument for a Snowflake user?
LAST_QUERY_ID
CURRENT_STATEMENT
UUID_STRING
JOB_ID
The VALIDATE table function takes a job ID (query ID) as its input argument. It returns the rows that were rejected during a specific COPY INTO <table> load, identified by that job ID; the special value '_last' refers to the most recent load executed for the table in the current session. The other options are not parameters of VALIDATE: LAST_QUERY_ID is a separate function, and CURRENT_STATEMENT and UUID_STRING are unrelated.
References:
Snowflake Documentation: VALIDATE
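Based on the documented signature, a sketch of calling the function (my_table and the query ID are placeholders):

```sql
-- Return the rows that were rejected by a specific COPY INTO load job
SELECT * FROM TABLE(VALIDATE(my_table, JOB_ID => '01a2b3c4-0000-1111-2222-333344445555'));

-- '_last' refers to the most recent COPY INTO executed for the table
-- in the current session
SELECT * FROM TABLE(VALIDATE(my_table, JOB_ID => '_last'));
```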
What are characteristics of transient tables in Snowflake? (Select TWO).
Transient tables have a Fail-safe period of 7 days.
Transient tables can be cloned to permanent tables.
Transient tables persist until they are explicitly dropped.
Transient tables can be altered to make them permanent tables.
Transient tables have Time Travel retention periods of 0 or 1 day.
Transient tables in Snowflake are designed for temporary or intermediate workloads with the following characteristics:
B. Transient tables can be cloned to permanent tables: This feature allows users to create copies of transient tables for permanent use, providing flexibility in managing data lifecycles.
C. Transient tables persist until they are explicitly dropped: Unlike temporary tables that exist for the duration of a session, transient tables remain in the database until explicitly removed by a user, offering more durability for short-term data storage needs.
References:
Snowflake Documentation: Transient Tables
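A minimal sketch of the lifecycle (the table name and columns are placeholders):

```sql
-- Transient tables have no Fail-safe and at most 1 day of Time Travel,
-- but they persist until explicitly dropped.
CREATE TRANSIENT TABLE staging_events (
    event_id NUMBER,
    payload  VARIANT
) DATA_RETENTION_TIME_IN_DAYS = 1;

-- Remains available across sessions until dropped:
DROP TABLE staging_events;
```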
Which Snowflake mechanism is used to limit the number of micro-partitions scanned by a query?
Caching
Cluster depth
Query pruning
Retrieval optimization
Query pruning in Snowflake is the mechanism used to limit the number of micro-partitions scanned by a query. By analyzing the filters and conditions applied in a query, Snowflake can skip over micro-partitions that do not contain relevant data, thereby reducing the amount of data processed and improving query performance. This technique is particularly effective for large datasets and is a key component of Snowflake's performance optimization features.
References:
Snowflake Documentation: Query Performance Optimization
What information does the Query Profile provide?
Graphical representation of the data model
Statistics for each component of the processing plan
Detailed information about the database schema
Real-time monitoring of the database operations
The Query Profile in Snowflake provides a graphical representation and statistics for each component of the query's execution plan. This includes details such as the execution time, the number of rows processed, and the amount of data scanned for each operation within the query. The Query Profile is a crucial tool for understanding and optimizing the performance of queries, as it helps identify potential bottlenecks and inefficiencies.
References:
Snowflake Documentation: Understanding the Query Profile
A Snowflake user wants to temporarily bypass a network policy by configuring the user object property MINS_TO_BYPASS_NETWORK_POLICY.
What should they do?
Use the SECURITYADMIN role.
Use the SYSADMIN role.
Use the USERADMIN role.
Contact Snowflake Support.
The MINS_TO_BYPASS_NETWORK_POLICY user property cannot be set by any customer role, including USERADMIN; per the Snowflake documentation, only Snowflake can set it. A user who needs to temporarily bypass a network policy must therefore contact Snowflake Support, which can configure the property to allow access for a set number of minutes without permanently altering the network security configuration.
References:
Snowflake Documentation: Network Policies
When floating-point number columns are unloaded to CSV or JSON files, Snowflake truncates the values to approximately what?
(12,2)
(10,4)
(14,8)
(15,9)
When unloading floating-point number columns to CSV or JSON files, Snowflake truncates the values to approximately 15 significant digits with 9 digits following the decimal point, which can be represented as (15,9). This ensures a balance between accuracy and efficiency in representing floating-point numbers in text-based formats, which is essential for data interchange and processing applications that consume these files.
References:
Snowflake Documentation: Data Unloading Considerations
Which function will provide the proxy information needed to protect Snowsight?
SYSTEMADMIN_TAG
SYSTEM$GET_PRIVATELINK
SYSTEM$ALLOWLIST
SYSTEMAUTHORIZE
The SYSTEM$GET_PRIVATELINK function in Snowflake provides proxy information necessary for configuring PrivateLink connections, which can protect Snowsight as well as other Snowflake services. PrivateLink enhances security by allowing Snowflake to be accessed via a private connection within a cloud provider’s network, reducing exposure to the public internet.
References:
Snowflake Documentation: PrivateLink Setup
What is the Fail-safe period for a transient table in the Snowflake Enterprise edition and higher?
0 days
1 day
7 days
14 days
The Fail-safe period for a transient table in Snowflake, regardless of the edition (including Enterprise edition and higher), is 0 days. Fail-safe is a data protection feature that provides additional retention beyond the Time Travel period for recovering data in case of accidental deletion or corruption. However, transient tables are designed for temporary or short-term use and do not benefit from the Fail-safe feature, meaning that once their Time Travel period expires, data cannot be recovered.
References:
Snowflake Documentation: Understanding Fail-safe
What is it called when a customer managed key is combined with a Snowflake managed key to create a composite key for encryption?
Hierarchical key model
Client-side encryption
Tri-secret secure encryption
Key pair authentication
Tri-secret secure encryption is a security model employed by Snowflake that involves combining a customer-managed key with a Snowflake-managed key to create a composite key for encrypting data. This model enhances data security by requiring both the customer-managed key and the Snowflake-managed key to decrypt data, thus ensuring that neither party can access the data independently. It represents a balanced approach to key management, leveraging both customer control and Snowflake's managed services for robust data encryption.
References:
Snowflake Documentation: Encryption and Key Management
Which Snowflake layer is associated with virtual warehouses?
Cloud services
Query processing
Elastic memory
Database storage
The layer of Snowflake's architecture associated with virtual warehouses is the Query Processing layer. Virtual warehouses in Snowflake are dedicated compute clusters that execute SQL queries against the stored data. This layer is responsible for the entire query execution process, including parsing, optimization, and the actual computation. It operates independently of the storage layer, enabling Snowflake to scale compute and storage resources separately for efficiency and cost-effectiveness.
References:
Snowflake Documentation: Snowflake Architecture
A Snowflake user is writing a User-Defined Function (UDF) that includes some unqualified object names.
How will those object names be resolved during execution?
Snowflake will resolve them according to the SEARCH_PATH parameter.
Snowflake will only check the schema the UDF belongs to.
Snowflake will first check the current schema, and then the schema the previous query used.
Snowflake will first check the current schema, and then the PUBLIC schema of the current database.
Object Name Resolution: When unqualified object names (e.g., table name without schema) are used in a UDF, Snowflake follows a specific hierarchy to resolve them. Here's the order:
Current Schema: Snowflake first checks if an object with the given name exists in the schema currently in use for the session.
PUBLIC Schema: If the object isn't found in the current schema, Snowflake looks in the PUBLIC schema of the current database.
Note: The SEARCH_PATH parameter influences object resolution for queries, not within UDFs.
References:
Snowflake Documentation (Object Naming Resolution): https://docs.snowflake.com/en/sql-reference/name-resolution.html
What is the only supported character set for loading and unloading data from all supported file formats?
UTF-8
UTF-16
ISO-8859-1
WINDOWS-1253
UTF-8 is the only supported character set for loading and unloading data from all supported file formats in Snowflake. UTF-8 is a widely used encoding that supports a large range of characters from various languages, making it suitable for internationalization and ensuring data compatibility across different systems and platforms.
References:
Snowflake Documentation: Data Loading and Unloading
How can a user get the MOST detailed information about individual table storage details in Snowflake?
SHOW TABLES command
SHOW EXTERNAL TABLES command
TABLES view
TABLE_STORAGE_METRICS view
To obtain the most detailed information about individual table storage details in Snowflake, the TABLE_STORAGE_METRICS view is the recommended option. This view provides comprehensive metrics on storage usage, including active bytes, Time Travel bytes, Fail-safe bytes, and other relevant storage metrics for each table. This level of detail is invaluable for monitoring, managing, and optimizing storage costs and performance.
References:
Snowflake Documentation: Information Schema
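A sketch of querying the view in the shared SNOWFLAKE database:

```sql
-- Per-table breakdown of active, Time Travel, and Fail-safe storage
SELECT table_catalog, table_schema, table_name,
       active_bytes, time_travel_bytes, failsafe_bytes
FROM snowflake.account_usage.table_storage_metrics
ORDER BY active_bytes DESC;
```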
What are characteristics of reader accounts in Snowflake? (Select TWO).
Reader account users cannot add new data to the account.
Reader account users can share data to other reader accounts.
A single reader account can consume data from multiple provider accounts.
Data consumers are responsible for reader account setup and data usage costs.
Reader accounts enable data consumers to access and query data shared by the provider.
Characteristics of reader accounts in Snowflake include:
A. Reader account users cannot add new data to the account: Reader accounts are intended for data consumption only. Users of these accounts can query and analyze the data shared with them but cannot upload or add new data to the account.
E. Reader accounts enable data consumers to access and query data shared by the provider: One of the primary purposes of reader accounts is to allow data consumers to access and perform queries on the data shared by another Snowflake account, facilitating secure and controlled data sharing.
References:
Snowflake Documentation: Reader Accounts
Which common query problems are identified by the Query Profile? (Select TWO.)
Syntax error
Inefficient pruning
Ambiguous column names
Queries too large to fit in memory
Object does not exist or not authorized
The Query Profile in Snowflake can identify common query problems, including:
B. Inefficient pruning: This refers to the inability of a query to effectively limit the amount of data being scanned, potentially leading to suboptimal performance.
D. Queries too large to fit in memory: This indicates that a query requires more memory than is available in the virtual warehouse, which can lead to spilling to disk and degraded performance.
The Query Profile helps diagnose these issues by providing detailed execution statistics and visualizations, aiding in query optimization and troubleshooting.
References:
Snowflake Documentation: Query Profile
The following settings are configured:
THE MIN_DATA_RETENTION_TIME_IN_DAYS is set to 5 at the account level.
THE DATA_RETENTION_TIME_IN_DAYS is set to 2 at the object level.
For how many days will the data be retained at the object level?
2
3
5
7
The settings above configure data retention at two different levels: at the account level, MIN_DATA_RETENTION_TIME_IN_DAYS is set to 5 days, and at the object level, DATA_RETENTION_TIME_IN_DAYS is set to 2 days. MIN_DATA_RETENTION_TIME_IN_DAYS establishes a retention floor for all objects in the account: the effective retention period for an object is the greater of the account-level minimum and the object's own DATA_RETENTION_TIME_IN_DAYS. Since MAX(5, 2) = 5, the data will be retained for 5 days at the object level. References: Snowflake Documentation on Data Retention Policies
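A sketch of how these two parameters are set (my_table is a placeholder):

```sql
-- Account level: establishes a 5-day retention floor (requires ACCOUNTADMIN)
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 5;

-- Object level: requests 2 days, but the effective retention is
-- MAX(2, 5) = 5 days because of the account-level minimum
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 2;
```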
Which system-defined Snowflake role has permission to rename an account and specify whether the original URL can be used to access the renamed account?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
Renaming an account, and specifying whether the original URL can still be used to access it, is an organization-level operation. Only the ORGADMIN role can rename accounts within an organization, using the ALTER ACCOUNT ... RENAME TO ... command with the optional SAVE_OLD_URL parameter. ACCOUNTADMIN is the most powerful role within a single account, but account lifecycle operations such as creating, renaming, and dropping accounts belong to ORGADMIN.References: Snowflake Documentation on Managing Accounts in Your Organization
Granting a user which privileges on all virtual warehouses is equivalent to granting the user the global MANAGE WAREHOUSES privilege?
MODIFY, MONITOR and OPERATE privileges
OWNERSHIP and USAGE privileges
APPLYBUDGET and AUDIT privileges
MANAGE LISTING AUTO FULFILLMENT and RESOLVE ALL privileges
Granting a user the MODIFY, MONITOR, and OPERATE privileges on all virtual warehouses in Snowflake is equivalent to granting the global MANAGE WAREHOUSES privilege. These privileges collectively provide comprehensive control over virtual warehouses.
MODIFY Privilege:
Allows users to change the configuration of the virtual warehouse.
Includes resizing, suspending, and resuming the warehouse.
MONITOR Privilege:
Allows users to view the status and usage metrics of the virtual warehouse.
Enables monitoring of performance and workload.
OPERATE Privilege:
Grants the ability to start and stop the virtual warehouse.
Includes pausing and resuming operations as needed.
References:
Snowflake Documentation: Warehouse Privileges
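A sketch of both forms (the warehouse and role names are placeholders):

```sql
-- Per-warehouse equivalent (would need to be repeated for every warehouse):
GRANT MODIFY, MONITOR, OPERATE ON WAREHOUSE my_wh TO ROLE wh_ops;

-- Global form covering all warehouses in the account:
GRANT MANAGE WAREHOUSES ON ACCOUNT TO ROLE wh_ops;
```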
Which key access control concept does Snowflake describe as a defined level of access to an object?
Grant
Privilege
Role
Session
In Snowflake, the term "privilege" refers to a defined level of access to an object. Privileges are specific actions that roles can perform on securable objects in Snowflake, such as tables, views, warehouses, databases, and schemas. These privileges are granted to roles and can be further granted to users through their roles, forming the basis of Snowflake’s access control framework.References: Snowflake Documentation on Access Control Privileges
When unloading data with the COPY INTO <location> command, what is the purpose of the PARTITION BY <expression> parameter option?
To sort the contents of the output file by the specified expression.
To delimit the records in the output file using the specified expression.
To include a new column in the output using the specified window function expression.
To split the output into multiple files, one for each distinct value of the specified expression.
The PARTITION BY <expression> parameter option in the COPY INTO <location> command is used to split the output into multiple files based on the distinct values of the specified expression. This feature is particularly useful for organizing large datasets into smaller, more manageable files and can help with optimizing downstream processing or consumption of the data. For example, if you are unloading a large dataset of transactions and use PARTITION BY DATE(transactions.transaction_date), Snowflake generates a separate output file for each unique transaction date, facilitating easier data management and access.
This approach to data unloading can significantly improve efficiency when dealing with large volumes of data by enabling parallel processing and simplifying data retrieval based on specific criteria or dimensions.
References:
Snowflake Documentation on Unloading Data: COPY INTO
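A minimal sketch of the transactions example above (the stage and table names are placeholders):

```sql
-- One set of output files per distinct transaction date, written under
-- a 'date=<value>' path prefix in the stage
COPY INTO @my_stage/transactions/
FROM (SELECT * FROM transactions)
PARTITION BY ('date=' || TO_VARCHAR(transaction_date))
FILE_FORMAT = (TYPE = PARQUET);
```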
What objects can be cloned within Snowflake? (Select TWO).
Schemas
Users
External tables
Internal named stages
External named stages
In Snowflake, cloning is available for certain types of objects, allowing quick duplication without copying data:
Schemas: These can be cloned, enabling users to replicate entire schema structures, including tables and views, for development or testing.
Internal named stages: These stages, used to store data files within Snowflake, can also be cloned, preserving configurations for data loading.
Users and external objects (like external stages or tables) cannot be cloned due to their dependency on external data and configurations outside Snowflake.
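A sketch of cloning both object types (all names are placeholders):

```sql
-- Zero-copy clone of an entire schema, including its tables and views
CREATE SCHEMA dev_schema CLONE prod_schema;

-- Clone an internal named stage (the stage definition is copied;
-- staged files are not)
CREATE STAGE my_stage_copy CLONE my_internal_stage;
```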
Which command should be used to assign a key to a Snowflake user who needs to connect using key pair authentication?
ALTER USER jsmith SET RSA_P8_KEY='MIIBIjANBgkqh...';
ALTER USER jsmith SET ENCRYPTED_KEY=’MIIBIjANBgkqh...';
ALTER USER jsmith SET RSA_PRIVATE_KEY='MIIBIjANBgkqh...';
ALTER USER jsmith SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';
To use key pair authentication in Snowflake, you need to set the public key for the user. This allows the user to authenticate using their private key.
Generate Key Pair: Generate a public and private key pair.
Set Public Key:
ALTER USER jsmith SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';
Authentication: The user can now authenticate by signing requests with the corresponding private key.
References:
Snowflake Documentation: Key Pair Authentication & Key Rotation
Snowflake Documentation: ALTER USER
What would cause different results to be returned when running the same query twice?
SAMPLE is used and the seed is set
SAMPLE is used and the seed is not set.
Fraction-based sampling is used.
Fixed-size sampling is used.
When using the SAMPLE clause in a query, if the seed is not set, Snowflake will use a different random seed for each execution of the query. This results in different rows being sampled each time, leading to different results. Setting a seed ensures that the same rows are sampled each time the query is run.
References:
Snowflake Documentation: Sampling
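A sketch of the difference (the orders table is a placeholder; SEED is supported with SYSTEM/BLOCK sampling on tables):

```sql
-- Non-deterministic: a different ~10% of rows on each execution
SELECT * FROM orders SAMPLE (10);

-- Deterministic: the same rows on every execution, thanks to the fixed seed
SELECT * FROM orders SAMPLE SYSTEM (10) SEED (42);
```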
Which Snowflake database object can be shared with other accounts?
Tasks
Pipes
Secure User-Defined Functions (UDFs)
Stored Procedures
In Snowflake, Secure User-Defined Functions (UDFs) can be shared with other accounts using Snowflake's data sharing feature. This allows different Snowflake accounts to securely execute the UDFs without having direct access to the underlying data the functions operate on, ensuring privacy and security. The sharing is facilitated through shares created in Snowflake, which can contain Secure UDFs along with other database objects like tables and views.References: Snowflake Documentation on Data Sharing and Secure UDFs
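A sketch of sharing a secure UDF through a share (all names, including the consumer account identifier, are placeholders):

```sql
-- Create a share and grant access to the secure UDF and its containers
CREATE SHARE my_share;
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE my_share;
GRANT USAGE ON FUNCTION my_db.public.mask_ssn(VARCHAR) TO SHARE my_share;

-- Make the share available to a consumer account
ALTER SHARE my_share ADD ACCOUNTS = consumer_org.consumer_account;
```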
Who can create and manage reader accounts? (Select TWO).
A user with ACCOUNTADMIN role
A user with SECURITYADMIN role
A user with SYSADMIN role
A user with ORGADMIN role
A user with CREATE ACCOUNT privilege
In Snowflake, reader accounts are special types of accounts that allow data sharing with external consumers without them having their own Snowflake account. Reader accounts can be created and managed by a user with the ACCOUNTADMIN role, or by a user whose role has been granted the global CREATE ACCOUNT privilege. The data provider creates the reader account as a managed account, owns it, and is responsible for its setup and usage costs.
References:
Snowflake Documentation: Creating and Managing Reader Accounts
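A sketch of creating a reader account (the account name and credentials are placeholders):

```sql
-- Requires ACCOUNTADMIN or the global CREATE ACCOUNT privilege
CREATE MANAGED ACCOUNT reader_acct
    ADMIN_NAME = 'reader_admin',
    ADMIN_PASSWORD = 'Str0ng-Passw0rd-123',
    TYPE = READER;
```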
Which table function will identify data that was loaded using COPY INTO <table> statements?