Google Professional-Cloud-Database-Engineer: Google Cloud Certified - Professional Cloud Database Engineer Exam Practice Test

Google Cloud Certified - Professional Cloud Database Engineer Questions and Answers

Question 1

You are building an Android game that needs to store data in a serverless Google Cloud database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet connection is available. What should you do?

Options:

A.

Use Firestore.

B.

Use Cloud SQL with an external (public) IP address.

C.

Use an in-app embedded database.

D.

Use Cloud Spanner.

Question 2

You support a consumer inventory application that runs on a multi-region instance of Cloud Spanner. A customer opened a support ticket to complain about slow response times. You notice a Cloud Monitoring alert about high CPU utilization. You want to follow Google-recommended practices to address the CPU performance issue. What should you do first?

Options:

A.

Increase the number of processing units.

B.

Modify the database schema, and add additional indexes.

C.

Shard data required by the application into multiple instances.

D.

Decrease the number of processing units.
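
For reference, Cloud Spanner compute capacity can be scaled in place with a single command; the instance name and target value below are placeholders:

    # Scale up compute capacity on an existing instance
    gcloud spanner instances update inventory-instance --processing-units=2000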

Question 3

Your application uses Cloud SQL for MySQL. Your users run reports that rely on near-real-time data, but the additional analytics workload caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the replication lag without making changes to the current reports. Which two approaches should you implement? (Choose two.)

Options:

A.

Create secondary indexes on the replica.

B.

Create additional read replicas, and partition your analytics users to use different read replicas.

C.

Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.

D.

Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.

E.

Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.
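
For context on option B, creating an additional read replica is a one-line operation, after which analytics users can be partitioned across replicas (instance names are placeholders):

    # Add another read replica of the primary instance
    gcloud sql instances create analytics-replica-2 \
        --master-instance-name=prod-mysql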

Question 4

Your company wants to move to Google Cloud. Your current data center is closing in six months. You are running a large, highly transactional Oracle application footprint on VMWare. You need to design a solution with minimal disruption to the current architecture and provide ease of migration to Google Cloud. What should you do?

Options:

A.

Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).

B.

Migrate applications and Oracle databases to Compute Engine.

C.

Migrate applications to Cloud SQL.

D.

Migrate applications and Oracle databases to Google Kubernetes Engine (GKE).

Question 5

Your customer is running a MySQL database on-premises with read replicas. The nightly incremental backups are expensive and add maintenance overhead. You want to follow Google-recommended practices to migrate the database to Google Cloud, and you need to ensure minimal downtime. What should you do?

Options:

A.

Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster, and then import the dump file.

B.

Use the mysqldump utility to take a backup of the existing on-premises database, and then import it into Cloud SQL.

C.

Create a Compute Engine VM, install MySQL on the VM, and then import the dump file.

D.

Create an external replica, and use Cloud SQL to synchronize the data to the replica.

Question 6

Your company is developing a new global transactional application that must be ACID-compliant and have 99.999% availability. You are responsible for selecting the appropriate Google Cloud database to serve as a datastore for this new application. What should you do?

Options:

A.

Use Firestore.

B.

Use Cloud Spanner.

C.

Use Cloud SQL.

D.

Use Bigtable.
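
For reference, a multi-region Cloud Spanner instance, the configuration tier that carries the 99.999% availability SLA, can be provisioned as follows; the config, name, and capacity are placeholders:

    # nam-eur-asia1 is one of Spanner's multi-region instance configurations
    gcloud spanner instances create payments-db \
        --config=nam-eur-asia1 \
        --description="Global transactional datastore" \
        --nodes=3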

Question 7

You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse. What should you do?

Options:

A.

Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.

B.

Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences.

C.

Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.

D.

Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.

Question 8

You are migrating a telehealth company's on-premises data center to Google Cloud. The migration plan specifies:

PostgreSQL databases must be migrated to a multi-region backup configuration with cross-region replicas to allow restore and failover in multiple scenarios.

MySQL databases handle personally identifiable information (PII) and require data residency compliance at the regional level.

You want to set up the environment with minimal administrative effort. What should you do?

Options:

A.

Set up Cloud Logging and Cloud Monitoring with Cloud Functions to send an alert every time a new database instance is created, and manually validate the region.

B.

Set up different organizations for each database type, and apply policy constraints at the organization level.

C.

Set up Pub/Sub to ingest data from Cloud Logging, send an alert every time a new database instance is created, and manually validate the region.

D.

Set up different projects for PostgreSQL and MySQL databases, and apply organizational policy constraints at a project level.
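
As a sketch of project-level enforcement, the gcp.resourceLocations constraint can pin new resources in a project to approved locations. The project ID and location value are placeholders, and the exact allowed-value syntax should be verified against the constraint documentation:

    # Restrict the PII project to a single region
    gcloud resource-manager org-policies allow \
        constraints/gcp.resourceLocations us-east1 \
        --project=mysql-pii-project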

Question 9

You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?

Options:

A.

Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.

B.

Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.

C.

Use Database Migration Service to migrate your database.

D.

Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.

Question 10

Your retail organization is preparing for the holiday season. Use of catalog services is increasing, and your DevOps team is supporting the Cloud SQL databases that power a microservices-based application. The DevOps team has added instrumentation through Sqlcommenter. You need to identify the root cause of why certain microservice calls are failing. What should you do?

Options:

A.

Watch Query Insights for long running queries.

B.

Watch the Cloud SQL instance monitor for CPU utilization metrics.

C.

Watch the Cloud SQL recommenders for overprovisioned instances.

D.

Watch Cloud Trace for application requests that are failing.

Question 11

You are building a data warehouse on BigQuery. Sources of data include several MySQL databases located on-premises.

You need to transfer data from these databases into BigQuery for analytics. You want to use a managed solution that has low latency and is easy to set up. What should you do?

Options:

A.

Create extracts from your on-premises databases periodically, and push these extracts to Cloud Storage.

Upload the changes into BigQuery, and merge them with existing tables.

B.

Use Cloud Data Fusion and scheduled workflows to extract data from MySQL. Transform this data into the appropriate schema, and load this data into your BigQuery database.

C.

Use Datastream to connect to your on-premises database and create a stream. Have Datastream write to Cloud Storage. Then use Dataflow to process the data into BigQuery.

D.

Use Database Migration Service to replicate data to a Cloud SQL for MySQL instance. Create federated tables in BigQuery on top of the replicated instances to transform and load the data into your BigQuery database.

Question 12

Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?

Options:

A.

Use Database Migration Service to connect to your on-premises database, and choose continuous replication.

After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.

B.

Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL.

Schedule downtime to run each Cloud Data Fusion pipeline.

Verify that the migration was successful.

Re-point the applications to the Cloud SQL for MySQL instance.

C.

Pause the on-premises applications.

Use the mysqldump utility to dump the database content in compressed format.

Run gsutil -m to move the dump file to Cloud Storage.

Use the Cloud SQL for MySQL import option.

After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.

D.

Pause the on-premises applications.

Use the mysqldump utility to dump the database content in CSV format.

Run gsutil -m to move the dump file to Cloud Storage.

Use the Cloud SQL for MySQL import option.

After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
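
For reference, the dump-and-import path described in options C and D looks roughly like this; database, bucket, and instance names are placeholders, and connection flags are omitted:

    # Take a consistent dump without locking the server, then compress it
    mysqldump --single-transaction --set-gtid-purged=ON \
        --databases mydb | gzip > mydb.sql.gz

    # Stage the file in Cloud Storage and import it into Cloud SQL
    gsutil -m cp mydb.sql.gz gs://my-staging-bucket/
    gcloud sql import sql target-instance \
        gs://my-staging-bucket/mydb.sql.gz --database=mydb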

Question 13

Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?

Options:

A.

Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.

B.

Use Cloud SQL Auth proxy.

C.

Connect to Cloud SQL using a connection that has SSL encryption.

D.

Use customer-managed encryption keys with Cloud SQL.
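
As a sketch, a customer-managed encryption key (CMEK) is attached when the instance is created; the project, key ring, and key names below are placeholders:

    # Create a Cloud SQL instance encrypted with a Cloud KMS key you manage
    gcloud sql instances create cmek-instance \
        --database-version=POSTGRES_15 \
        --region=us-central1 \
        --disk-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key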

Question 14

You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?

Options:

A.

Configure the automated backups to use a regional Cloud Storage bucket as a custom location.

B.

Use the default configuration for the automated backups location.

C.

Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.

D.

Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
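
For reference, the automated backup location can be overridden per instance with a single patch; the instance name and region are placeholders:

    # Pin automated backups to the instance's own region
    gcloud sql instances patch prod-instance --backup-location=us-central1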

Question 15

Your company is evaluating Google Cloud database options for a mission-critical global payments gateway application. The application must be available 24/7 to users worldwide, horizontally scalable, and support open source databases. You need to select an automatically shardable, fully managed database with 99.999% availability and strong transactional consistency. What should you do?

Options:

A.

Select Bare Metal Solution for Oracle.

B.

Select Cloud SQL.

C.

Select Bigtable.

D.

Select Cloud Spanner.

Question 16

You are designing for a write-heavy application. During testing, you discover that the write workloads are performant in a regional Cloud Spanner instance but slow down by an order of magnitude in a multi-region instance. You want to make the write workloads faster in a multi-region instance. What should you do?

Options:

A.

Place the bulk of the read and write workloads closer to the default leader region.

B.

Use a staleness of at least 15 seconds.

C.

Add more read-write replicas.

D.

Keep the total CPU utilization under 45% in each region.
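
As context, the usual lever for multi-region write latency is the database's default leader region, which can be moved via DDL. The names and region below are placeholders, and the region must be a valid leader option for the instance configuration:

    # Move the default leader closer to the write workload
    gcloud spanner databases ddl update proddb \
        --instance=multiregion-instance \
        --ddl="ALTER DATABASE proddb SET OPTIONS (default_leader = 'us-east4')"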

Question 17

Your organization has a busy transactional Cloud SQL for MySQL instance. Your analytics team needs access to the data so they can build monthly sales reports. You need to provide data access to the analytics team without adversely affecting performance. What should you do?

Options:

A.

Create a read replica of the database, provide the database IP address, username, and password to the analytics team, and grant read access to required tables to the team.

B.

Create a read replica of the database, enable the cloudsql.iam_authentication flag on the replica, and grant read access to required tables to the analytics team.

C.

Enable the cloudsql.iam_authentication flag on the primary database instance, and grant read access to required tables to the analytics team.

D.

Provide the database IP address, username, and password of the primary database instance to the analytics team, and grant read access to required tables to the team.
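
As a sketch of the replica-with-IAM-authentication pattern, the flag can be set when the replica is created. Instance names are placeholders; note that the MySQL flag is spelled cloudsql_iam_authentication, while PostgreSQL uses cloudsql.iam_authentication:

    # Create a read replica with IAM database authentication enabled
    gcloud sql instances create analytics-replica \
        --master-instance-name=prod-mysql \
        --database-flags=cloudsql_iam_authentication=on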

Question 18

You are managing two different applications: Order Management and Sales Reporting. Both applications interact with the same Cloud SQL for MySQL database. The Order Management application reads and writes to the database 24/7, but the Sales Reporting application is read-only. Both applications need the latest data. You need to ensure that the performance of the Order Management application is not affected by the Sales Reporting application. What should you do?

Options:

A.

Create a read replica for the Sales Reporting application.

B.

Create two separate databases in the instance, and perform dual writes from the Order Management application.

C.

Use a Cloud SQL federated query for the Sales Reporting application.

D.

Queue up all the requested reports in Pub/Sub, and execute the reports at night.

Question 19

You are migrating an on-premises application to Compute Engine and Cloud SQL. The application VMs will live in their own project, separate from the project that contains the Cloud SQL instances. How should you configure the networks?

Options:

A.

Create a new VPC network in each project, and use VPC Network Peering to connect the two together.

B.

Create a Shared VPC that both the application VMs and Cloud SQL instances will use.

C.

Use the default networks, and leverage Cloud VPN to connect the two together.

D.

Place both the application VMs and the Cloud SQL instances in the default network of each project.
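
For context, wiring up a Shared VPC means enabling a host project and attaching service projects to it; the project IDs below are placeholders:

    # Designate the host project, then attach the application project
    gcloud compute shared-vpc enable network-host-project
    gcloud compute shared-vpc associated-projects add app-project \
        --host-project=network-host-project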

Question 20

You are configuring a new application that has access to an existing Cloud Spanner database. The new application reads from this database to gather statistics for a dashboard. You want to follow Google-recommended practices when granting Identity and Access Management (IAM) permissions. What should you do?

Options:

A.

Reuse the existing service account that populates this database.

B.

Create a new service account, and grant it the Cloud Spanner Database Admin role.

C.

Create a new service account, and grant it the Cloud Spanner Database Reader role.

D.

Create a new service account, and grant it the spanner.databases.select permission.
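
As a sketch, a dedicated read-only service account for the dashboard can be bound at the database level; the project, instance, and database names are placeholders:

    # New identity for the dashboard application
    gcloud iam service-accounts create dashboard-reader

    # Grant read-only access on just this one database
    gcloud spanner databases add-iam-policy-binding stats-db \
        --instance=prod-spanner \
        --member="serviceAccount:dashboard-reader@my-project.iam.gserviceaccount.com" \
        --role="roles/spanner.databaseReader"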

Question 21

Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy. What should you do?

Options:

A.

Use Database Migration Service to migrate all databases to Cloud SQL.

B.

Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.

C.

Use data replication tools and CDC tools to enable migration.

D.

Use a combination of Database Migration Service and partner tools to support the data migration strategy.

Question 22

Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)

Options:

A.

Use an auto-incrementing value as the primary key.

B.

Normalize the data model.

C.

Promote low-cardinality attributes in multi-attribute primary keys.

D.

Promote high-cardinality attributes in multi-attribute primary keys.

E.

Use a bit-reversed sequential value as the primary key.
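
For context, hotspots from monotonically increasing keys are commonly avoided by keying on a high-cardinality, non-sequential value such as a UUID; a minimal DDL sketch with placeholder names:

    # A UUIDv4 stored in ProductId spreads writes across the key space
    gcloud spanner databases ddl update inventory-db \
        --instance=prod-spanner \
        --ddl="CREATE TABLE Products (
                 ProductId STRING(36) NOT NULL,
                 Sku STRING(64)
               ) PRIMARY KEY (ProductId)"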

Question 23

An analytics team needs to read data out of Cloud SQL for SQL Server and update a table in Cloud Spanner. You need to create a service account and grant least privilege access using predefined roles. What roles should you assign to the service account?

Options:

A.

roles/cloudsql.viewer and roles/spanner.databaseUser

B.

roles/cloudsql.editor and roles/spanner.admin

C.

roles/cloudsql.client and roles/spanner.databaseReader

D.

roles/cloudsql.instanceUser and roles/spanner.databaseUser
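
Whichever role pair you settle on, the grants themselves follow the same pattern; the project and account names are placeholders, and the two roles shown are just one of the listed pairings:

    gcloud iam service-accounts create analytics-etl

    # Bind one Cloud SQL role and one Spanner role to the account
    for role in roles/cloudsql.viewer roles/spanner.databaseUser; do
      gcloud projects add-iam-policy-binding my-project \
          --member="serviceAccount:analytics-etl@my-project.iam.gserviceaccount.com" \
          --role="${role}"
    done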

Question 24

You have an application that sends banking events to Bigtable cluster-a in us-east1. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?

Options:

A.

Use the default app profile with single-cluster routing.

B.

Use the default app profile with multi-cluster routing.

C.

Create a custom app profile with multi-cluster routing.

D.

Create a custom app profile with single-cluster routing.
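
For reference, routing policy in Bigtable lives in the app profile; a multi-cluster routing profile can be created as follows (instance and profile names are placeholders):

    # --route-any enables multi-cluster routing with automatic failover
    gcloud bigtable app-profiles create failover-profile \
        --instance=banking-events \
        --route-any \
        --description="Route to the nearest available cluster"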

Question 25

You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history. What should you do?

Options:

A.

Use Cloud SQL with read replicas for throughput.

B.

Use Firestore, and rely on automatic serverless scaling.

C.

Use Memorystore for Memcached, and add nodes as necessary to achieve the required throughput.

D.

Use Bigtable, and add nodes as necessary to achieve the required throughput.
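
As context, Bigtable throughput scales roughly linearly with node count, so capacity is added by resizing the cluster; names and the node count are placeholders:

    # Add nodes to raise read/write throughput
    gcloud bigtable clusters update iot-cluster \
        --instance=iot-instance \
        --num-nodes=30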

Question 26

Your organization has a ticketing system that needs an online marketing analytics and reporting application. You need to select a relational database that can manage hundreds of terabytes of data to support this new application. Which database should you use?

Options:

A.

Cloud SQL

B.

BigQuery

C.

Cloud Spanner

D.

Bigtable

Question 27

Your organization is running a low-latency reporting application on Microsoft SQL Server. In addition to the database engine, you are using SQL Server Analysis Services (SSAS), SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS) in your on-premises environment. You want to migrate your Microsoft SQL Server database instances to Google Cloud. You need to ensure minimal disruption to the existing architecture during migration. What should you do?

Options:

A.

Migrate to Cloud SQL for SQL Server.

B.

Migrate to Cloud SQL for PostgreSQL.

C.

Migrate to Compute Engine.

D.

Migrate to Google Kubernetes Engine (GKE).

Question 28

You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries. What should you do?

Options:

A.

Use log messages produced by Cloud SQL.

B.

Use Query Insights for Cloud SQL.

C.

Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.

D.

Use Cloud SQL instance monitoring in the Google Cloud Console.

Question 29

You are writing an application that will run on Cloud Run and requires a database running in the Cloud SQL managed service. You want to secure this instance so that it receives connections only from applications running in your VPC environment in Google Cloud. What should you do?

Options:

A.

1. Create your instance with a specified external (public) IP address.

2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance.

3. Use Cloud SQL Auth proxy to connect to the instance.

B.

1. Create your instance with a specified external (public) IP address.

2. Choose the VPC and create firewall rules to allow only connections from Cloud Run into your instance.

3. Connect to the instance using a connection pool to best manage connections to the instance.

C.

1. Create your instance with a specified internal (private) IP address.

2. Choose the VPC with private service connection configured.

3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance.

4. Use Cloud SQL Auth proxy to connect to the instance.

D.

1. Create your instance with a specified internal (private) IP address.

2. Choose the VPC with private service connection configured.

3. Configure the Serverless VPC Access connector in the same VPC network as your Cloud SQL instance.

4. Connect to the instance using a connection pool to best manage connections to the instance.
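
For context, connecting Cloud Run to a private-IP Cloud SQL instance hinges on a Serverless VPC Access connector in the same network; a minimal sketch with placeholder names:

    # Connector that gives serverless workloads a path into the VPC
    gcloud compute networks vpc-access connectors create sql-connector \
        --region=us-central1 \
        --network=my-vpc \
        --range=10.8.0.0/28

    # Deploy the Cloud Run service through that connector
    gcloud run deploy my-app \
        --image=us-docker.pkg.dev/my-project/my-repo/my-app \
        --region=us-central1 \
        --vpc-connector=sql-connector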

Question 30

Your team uses thousands of connected IoT devices to collect device maintenance data for your oil and gas customers in real time. You want to design inspection routines, device repair, and replacement schedules based on insights gathered from the data produced by these devices. You need a managed solution that is highly scalable, supports a multi-cloud strategy, and offers low latency for these IoT devices. What should you do?

Options:

A.

Use Firestore with Looker.

B.

Use Cloud Spanner with Data Studio.

C.

Use MongoDB Atlas with Charts.

D.

Use Bigtable with Looker.

Question 31

You use Python scripts to generate weekly SQL reports to assess the state of your databases and determine whether you need to reorganize tables or run statistics. You want to automate this report but need to minimize operational costs and overhead. What should you do?

Options:

A.

Create a VM in Compute Engine, and run a cron job.

B.

Create a Cloud Composer instance, and create a directed acyclic graph (DAG).

C.

Create a Cloud Function, and call the Cloud Function using Cloud Scheduler.

D.

Create a Cloud Function, and call the Cloud Function from a Cloud Tasks queue.
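
As a sketch of the function-plus-scheduler pattern, with the names, schedule, runtime, and entry point all placeholders:

    # Deploy the report script as a Pub/Sub-triggered Cloud Function
    gcloud functions deploy weekly-db-report \
        --runtime=python312 \
        --trigger-topic=report-trigger \
        --entry-point=run_report \
        --region=us-central1

    # Fire the trigger every Monday at 06:00
    gcloud scheduler jobs create pubsub weekly-db-report-job \
        --schedule="0 6 * * 1" \
        --topic=report-trigger \
        --message-body="run" \
        --location=us-central1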

Question 32

You work in the logistics department. Your data analysis team needs daily extracts from Cloud SQL for MySQL to train a machine learning model. The model will be used to optimize next-day routes. You need to export the data in CSV format. You want to follow Google-recommended practices. What should you do?

Options:

A.

Use Cloud Scheduler to trigger a Cloud Function that will run a select * from table(s) query to call the cloudsql.instances.export API.

B.

Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.

C.

Use Cloud Composer to orchestrate an export by calling the cloudsql.instances.export API.

D.

Use Cloud Composer to execute a select * from table(s) query and export results.
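
For reference, the export API these options call is also exposed through gcloud; the instance, bucket, database, and query are placeholders, and the optional --offload flag runs the export on a temporary instance so the primary is not loaded:

    # Daily CSV extract to Cloud Storage without consuming primary capacity
    gcloud sql export csv prod-mysql \
        gs://ml-extracts/routes-$(date +%F).csv \
        --database=logistics \
        --query="SELECT * FROM deliveries" \
        --offload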

Question 33

Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?

Options:

A.

Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.

B.

Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.

C.

Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.

D.

Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.
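
For context, point-in-time recovery in Cloud SQL is performed by cloning the instance to a timestamp just before the corruption; the instance names and timestamp are placeholders:

    # Clone the instance to its state at a moment before the bad release
    gcloud sql instances clone prod-mysql prod-mysql-restored \
        --point-in-time='2024-05-01T13:00:00Z'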

Question 34

Your organization has a critical business app that is running with a Cloud SQL for MySQL backend database. Your company wants to build the most fault-tolerant and highly available solution possible. You need to ensure that the application database can survive a zonal and regional failure, with a primary region of us-central1 and a backup region of us-east1. What should you do?

Options:

A.

Provision a Cloud SQL for MySQL instance in us-central1-a.

Create a multiple-zone instance in us-west1-b.

Create a read replica in us-east1-c.

B.

Provision a Cloud SQL for MySQL instance in us-central1-a.

Create a multiple-zone instance in us-central1-b.

Create a read replica in us-east1-b.

C.

Provision a Cloud SQL for MySQL instance in us-central1-a.

Create a multiple-zone instance in us-east1-b.

Create a read replica in us-east1-c.

D.

Provision a Cloud SQL for MySQL instance in us-central1-a.

Create a multiple-zone instance in us-east1-b.

Create a read replica in us-central1-b.
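
As a sketch, zonal resilience comes from a regional (multi-zone) HA instance and regional resilience from a cross-region replica; all names and tiers are placeholders:

    # HA primary spanning two zones in us-central1
    gcloud sql instances create prod-db \
        --database-version=MYSQL_8_0 \
        --tier=db-n1-standard-4 \
        --region=us-central1 \
        --availability-type=REGIONAL

    # Cross-region read replica in the backup region
    gcloud sql instances create prod-db-dr \
        --master-instance-name=prod-db \
        --region=us-east1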

Question 35

Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?

Options:

A.

Write your data into Bigtable, and use Dataproc and the Apache HBase libraries for analysis.

B.

Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.

C.

Use Memorystore to handle your low-latency requirements and for real-time analytics.

D.

Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.

Question 36

Your company's mission-critical, globally available application is supported by a Cloud Spanner database. Experienced users of the application have read and write access to the database, but new users are assigned read-only access to the database. You need to assign the appropriate Cloud Spanner Identity and Access Management (IAM) role to new users being onboarded soon. What roles should you set up?

Options:

A.

roles/spanner.databaseReader

B.

roles/spanner.databaseUser

C.

roles/spanner.viewer

D.

roles/spanner.backupWriter

Question 37

You need to provision several hundred Cloud SQL for MySQL instances for multiple project teams over a one-week period. You must ensure that all instances adhere to company standards such as instance naming conventions, database flags, and tags. What should you do?

Options:

A.

Automate instance creation by writing a Dataflow job.

B.

Automate instance creation by setting up Terraform scripts.

C.

Create the instances using the Google Cloud Console UI.

D.

Create clones from a template Cloud SQL instance.
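
Whether the standards are encoded declaratively (for example, in Terraform) or imperatively, the point is that provisioning is scripted rather than clicked through; a minimal imperative sketch with illustrative names and flags:

    # One standardized instance per team: same naming scheme, same flags
    for team in alpha beta gamma; do
      gcloud sql instances create "mysql-${team}-prod-01" \
          --database-version=MYSQL_8_0 \
          --tier=db-n1-standard-2 \
          --region=us-central1 \
          --database-flags=slow_query_log=on
    done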

Question 38

You are running a transactional application on Cloud SQL for PostgreSQL in Google Cloud. The database is running in a high availability configuration within one region. You have encountered issues with data and want to restore to the last known pristine version of the database. What should you do?

Options:

A.

Create a clone database from a read replica database, and restore the clone in the same region.

B.

Create a clone database from a read replica database, and restore the clone into a different zone.

C.

Use the Cloud SQL point-in-time recovery (PITR) feature. Restore the copy from two hours ago to a new database instance.

D.

Use the Cloud SQL database import feature. Import last week's dump file from Cloud Storage.