
Google Associate-Cloud-Engineer Google Cloud Certified - Associate Cloud Engineer Exam Practice Test

Google Cloud Certified - Associate Cloud Engineer Questions and Answers

Question 1

You are running out of primary internal IP addresses in a subnet for a custom mode VPC. The subnet has the IP range 10.0.0.0/20, and the IP addresses are primarily used by virtual machines in the project. You need to provide more IP addresses for the virtual machines. What should you do?

Options:

A.

Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/22.

B.

Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/18.

C.

Add a secondary IP range 10.1.0.0/20 to the subnet.

D.

Convert the subnet IP range from IPv4 to IPv6.
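
For reference, a subnet's primary range is expanded in place with gcloud; a minimal sketch, assuming the subnet is named my-subnet in us-central1 (a primary range can only be expanded to a shorter prefix, never shrunk):

    gcloud compute networks subnets expand-ip-range my-subnet \
        --region=us-central1 \
        --prefix-length=18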

Question 2

Your organization has a dedicated person who creates and manages all service accounts for Google Cloud projects. You need to assign this person the minimum role for projects. What should you do?

Options:

A.

Add the user to roles/iam.roleAdmin role.

B.

Add the user to roles/iam.securityAdmin role.

C.

Add the user to roles/iam.serviceAccountUser role.

D.

Add the user to roles/iam.serviceAccountAdmin role.

Question 3

You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?

Options:

A.

Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.

B.

Create an instance template for the instances. Set ‘Automatic Restart’ to off. Set ‘On-host maintenance’ to Terminate VM instances. Add the instance template to an instance group.

C.

Create an instance group for the instances. Set the ‘Autohealing’ health check to healthy (HTTP).

D.

Create an instance group for the instances. Verify that the ‘Advanced creation options’ setting for ‘do not retry machine creation’ is set to off.

Question 4

You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do?

Options:

A.

Use kubectl app deploy .

B.

Use gcloud app deploy .

C.

Create a docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

D.

Create a docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
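
As background for the build-push-deploy workflow these options describe, a minimal sketch; the project ID, image name, and Deployment manifest below are placeholders:

    # Build the image from the Dockerfile and push it to Container Registry
    docker build -t gcr.io/my-project/my-app:v1 .
    docker push gcr.io/my-project/my-app:v1

    # deployment.yaml referencing the pushed image
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: gcr.io/my-project/my-app:v1

    # Create the Deployment on the cluster
    kubectl apply -f deployment.yaml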

Question 5

You need to extract text from audio files by using the Speech-to-Text API. The audio files are pushed to a Cloud Storage bucket. You need to implement a fully managed, serverless compute solution that requires authentication and aligns with Google-recommended practices. You want to automate the call to the API by submitting each file to the API as the audio file arrives in the bucket. What should you do?

Options:

A.

Run a Kubernetes job to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.

B.

Create an App Engine standard environment triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.

C.

Run a Python script by using a Linux cron job in Compute Engine to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.

D.

Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.
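
For reference, a Cloud Storage-triggered Cloud Function is deployed with gcloud; a minimal sketch, assuming a Python function named process_audio and a bucket named audio-uploads:

    gcloud functions deploy process_audio \
        --runtime=python310 \
        --trigger-bucket=audio-uploads \
        --entry-point=process_audio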

Question 6

You use Cloud Logging to capture application logs. You now need to use SQL to analyze the application logs in Cloud Logging, and you want to follow Google-recommended practices. What should you do?

Options:

A.

Develop SQL queries by using Gemini for Google Cloud.

B.

Enable Log Analytics for the log bucket and create a linked dataset in BigQuery.

C.

Create a schema for the storage bucket and run SQL queries for the data in the bucket.

D.

Export logs to a storage bucket and create an external view in BigQuery.

Question 7

You have production and test workloads that you want to deploy on Compute Engine. Production VMs need to be in a different subnet than the test VMs. All the VMs must be able to reach each other over internal IP without creating additional routes. You need to set up VPC and the 2 subnets. Which configuration meets these requirements?

Options:

A.

Create a single custom VPC with 2 subnets. Create each subnet in a different region and with a different CIDR range.

B.

Create a single custom VPC with 2 subnets. Create each subnet in the same region and with the same CIDR range.

C.

Create 2 custom VPCs, each with a single subnet. Create each subnet in a different region and with a different CIDR range.

D.

Create 2 custom VPCs, each with a single subnet. Create each subnet in the same region and with the same CIDR range.

Question 8

You have been asked to set up Object Lifecycle Management for objects stored in storage buckets. The objects are written once and accessed frequently for 30 days. After 30 days, the objects are not read again unless there is a special need. The objects should be kept for three years, and you need to minimize cost. What should you do?

Options:

A.

Set up a policy that uses Nearline storage for 30 days and then moves to Archive storage for three years.

B.

Set up a policy that uses Standard storage for 30 days and then moves to Archive storage for three years.

C.

Set up a policy that uses Nearline storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.

D.

Set up a policy that uses Standard storage for 30 days, then moves to Coldline for one year, and then moves to Archive storage for two years.

Question 9

You have an application running in Google Kubernetes Engine (GKE) with cluster autoscaling enabled. The application exposes a TCP endpoint. There are several replicas of this application. You have a Compute Engine instance in the same region, but in another Virtual Private Cloud (VPC), called gce-network, that has no overlapping IP ranges with the first VPC. This instance needs to connect to the application on GKE. You want to minimize effort. What should you do?

Options:

A.

1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.
2. Set the service's externalTrafficPolicy to Cluster.
3. Configure the Compute Engine instance to use the address of the load balancer that has been created.

B.

1. In GKE, create a Service of type NodePort that uses the application's Pods as backend.
2. Create a Compute Engine instance called proxy with 2 network interfaces, one in each VPC.
3. Use iptables on this instance to forward traffic from gce-network to the GKE nodes.
4. Configure the Compute Engine instance to use the address of proxy in gce-network as endpoint.

C.

1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.
2. Add an annotation to this service: cloud.google.com/load-balancer-type: Internal
3. Peer the two VPCs together.
4. Configure the Compute Engine instance to use the address of the load balancer that has been created.

D.

1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.
2. Add a Cloud Armor Security Policy to the load balancer that whitelists the internal IPs of the MIG's instances.
3. Configure the Compute Engine instance to use the address of the load balancer that has been created.
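
For context on the annotation mentioned in option C, an internal load balancer for a GKE Service looks roughly like this (a sketch; the app label and port are placeholders). VPC Network Peering between the two networks then lets gce-network reach the load balancer's internal address:

    apiVersion: v1
    kind: Service
    metadata:
      name: tcp-app
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        app: tcp-app
      ports:
      - port: 8080
        protocol: TCP
        targetPort: 8080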

Question 10

You need to manage a Cloud Spanner Instance for best query performance. Your instance in production runs in a single Google Cloud region. You need to improve performance in the shortest amount of time. You want to follow Google best practices for service configuration. What should you do?

Options:

A.

Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 45%. If you exceed this threshold, add nodes to your instance.

B.

Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 45%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.

C.

Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 65%. If you exceed this threshold, add nodes to your instance.

D.

Create an alert in Cloud Monitoring to alert when the percentage of high priority CPU utilization reaches 65%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.

Question 11

You are deploying an application on Google Cloud that requires a relational database for storage. To satisfy your company's security policies, your application must connect to your database through an encrypted and authenticated connection that requires minimal management and integrates with Identity and Access Management (IAM). What should you do?

Options:

A.

Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure a database user and password.

B.

Deploy a Cloud SQL database and configure IAM database authentication. Access the database through the Cloud SQL Auth Proxy.

C.

Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client certificates, and configure IAM database authentication.

D.

Deploy a Cloud SQL database and configure a database user and password. Access the database through the Cloud SQL Auth Proxy.

Question 12

You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end. What should you do?

Options:

A.

Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.

B.

Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.

C.

Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.

D.

Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.

Question 13

All development (dev) teams in your organization are located in the United States. Each dev team has its own Google Cloud project. You want to restrict access so that each dev team can only create cloud resources in the United States (US). What should you do?

Options:

A.

Create a folder to contain all the dev projects. Create an organization policy to limit resources in US locations.

B.

Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.

C.

Create an Identity and Access Management

D.

Create an Identity and Access Management (IAM) policy to restrict the resource locations in all dev projects. Apply the policy to all dev roles.

Question 14

Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes. What should you do?

Options:

A.

Use gcloud to expand the IP range of the current subnet.

B.

Delete the subnet, and recreate it using a wider range of IP addresses.

C.

Create a new project. Use Shared VPC to share the current network with the new project.

D.

Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.

Question 15

Your company uses BigQuery to store and analyze data. Upon submitting your query in BigQuery, the query fails with a quotaExceeded error. You need to diagnose the issue causing the error. What should you do?

Choose 2 answers

Options:

A.

Search errors in Cloud Audit Logs to analyze the issue.

B.

Configure Cloud Trace to analyze the issue.

C.

View errors in Cloud Monitoring to analyze the issue.

D.

Use the information schema views to analyze the underlying issue.

E.

Use BigQuery BI Engine to analyze the issue.

Question 16

You need to create an autoscaling managed instance group for an HTTPS web application. You want to make sure that unhealthy VMs are recreated. What should you do?

Options:

A.

Create a health check on port 443 and use that when creating the Managed Instance Group.

B.

Select Multi-Zone instead of Single-Zone when creating the Managed Instance Group.

C.

In the Instance Template, add the label ‘health-check’.

D.

In the Instance Template, add a startup script that sends a heartbeat to the metadata server.

Question 17

You are using Container Registry to centrally store your company’s container images in a separate project. In another project, you want to create a Google Kubernetes Engine (GKE) cluster. You want to ensure that Kubernetes can download images from Container Registry. What should you do?

Options:

A.

In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.

B.

When you create the GKE cluster, choose the Allow full access to all Cloud APIs option under ‘Access scopes’.

C.

Create a service account, and give it access to Cloud Storage. Create a P12 key for this service account and use it as an imagePullSecrets in Kubernetes.

D.

Configure the ACLs on each image in Cloud Storage to give read-only access to the default Compute Engine service account.
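
For reference, the grant described in option A can be expressed with gcloud; a sketch with placeholder project and service account names:

    gcloud projects add-iam-policy-binding registry-project \
        --member="serviceAccount:gke-nodes@cluster-project.iam.gserviceaccount.com" \
        --role="roles/storage.objectViewer"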

Question 18

You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to follow Google-recommended practices to obtain the combined logs for all projects. What should you do?

Options:

A.

Navigate to Stackdriver Logging and select resource.labels.project_id="*"

B.

Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.

C.

Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.

D.

Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.

Question 19

You are deploying an application to Google Kubernetes Engine (GKE). The application needs to make API calls to a private Cloud Storage bucket. You need to configure your application Pods to authenticate to the Cloud Storage API, but your organization policy prevents the usage of service account keys. You want to follow Google-recommended practices. What should you do?

Options:

A.

Create the GKE cluster and deploy the application. Request a security exception to create a Google service account key. Set the constraints/iam.serviceAccountKeyExpiryHours organization policy to 8 hours.

B.

Create the GKE cluster and deploy the application. Request a security exception to create a Google service account key. Set the constraints/iam.serviceAccountKeyExpiryHours organization policy to 24 hours.

C.

Create the GKE cluster with Workload Identity Federation. Configure the default node service account to access the bucket. Deploy the application into the cluster so the application can use the node service account permissions. Use Identity and Access Management (IAM) to grant the service account access to the bucket.

D.

Create the GKE cluster with Workload Identity Federation. Create a Google service account and a Kubernetes ServiceAccount, and configure both service accounts to use Workload Identity Federation. Attach the Kubernetes ServiceAccount to the application Pods and configure the Google service account to access the bucket with Identity and Access Management (IAM).
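
For context, binding a Kubernetes ServiceAccount to a Google service account with Workload Identity typically involves two steps; a sketch with placeholder project, namespace, and account names:

    # Allow the Kubernetes ServiceAccount to impersonate the Google service account
    gcloud iam service-accounts add-iam-policy-binding \
        app-gsa@my-project.iam.gserviceaccount.com \
        --role="roles/iam.workloadIdentityUser" \
        --member="serviceAccount:my-project.svc.id.goog[my-namespace/app-ksa]"

    # Annotate the Kubernetes ServiceAccount with the Google service account
    kubectl annotate serviceaccount app-ksa \
        --namespace=my-namespace \
        iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com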

Question 20

You have a batch workload that runs every night and uses a large number of virtual machines (VMs). It is fault-tolerant and can tolerate some of the VMs being terminated. The current cost of VMs is too high. What should you do?

Options:

A.

Run a test using simulated maintenance events. If the test is successful, use preemptible N1 Standard VMs when running future jobs.

B.

Run a test using simulated maintenance events. If the test is successful, use N1 Standard VMs when running future jobs.

C.

Run a test using a managed instance group. If the test is successful, use N1 Standard VMs in the managed instance group when running future jobs.

D.

Run a test using N1 standard VMs instead of N2. If the test is successful, use N1 Standard VMs when running future jobs.

Question 21

You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What should you do?

Options:

A.

Use granular logging statements within a Deployment Manager template authored in Python.

B.

Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console.

C.

Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures.

D.

Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.

Question 22

You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the problem. What should you do?

Options:

A.

Use the BigQuery interface to review the nightly job and look for any errors.

B.

Review the Error Reporting page in the Cloud Console to find any errors.

C.

In Cloud Logging, create a filter for your Data Studio report.

D.

Use the open-source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.

Question 23

You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do?

Options:

A.

Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.

B.

Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.

C.

Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.

D.

Run a select count (*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.
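
For reference, a dry run that reports how many bytes a query would read can be issued from the bq CLI without running the query or incurring charges; the query below is a placeholder:

    bq query --use_legacy_sql=false --dry_run \
        'SELECT name FROM `my-project.my_dataset.my_table` WHERE amount > 100'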

Question 24

Your company requires all developers to have the same permissions, regardless of the Google Cloud project they are working on. Your company's security policy also restricts developer permissions to Compute Engine, Cloud Functions, and Cloud SQL. You want to implement the security policy with minimal effort. What should you do?

Options:

A.

• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization.

• Copy the role across all projects created within the organization with the gcloud iam roles copy command.

• Assign the role to developers in those projects.

B.

• Add all developers to a Google group in Google Groups for Workspace.

• Assign the predefined role of Compute Admin to the Google group at the Google Cloud organization level.

C.

• Add all developers to a Google group in Cloud Identity.

• Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the Google group for each project in the Google Cloud organization.

D.

• Add all developers to a Google group in Cloud Identity.

• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the Google Cloud organization level.

• Assign the custom role to the Google group.

Question 25

Your organization uses Active Directory (AD) to manage user identities. Each user uses this identity for federated access to various on-premises systems. Your security team has adopted a policy that requires users to log into Google Cloud with their AD identity instead of their own login. You want to follow the Google-recommended practices to implement this policy. What should you do?

Options:

A.

Sync identities with Cloud Directory Sync, and then enable SAML for single sign-on.

B.

Sync identities in the Google Admin console, and then enable OAuth for single sign-on.

C.

Sync identities with 3rd party LDAP sync, and then copy passwords to allow simplified login with the same credentials.

D.

Sync identities with Cloud Directory Sync, and then copy passwords to allow simplified login with the same credentials.

Question 26

You are developing a new web application that will be deployed on Google Cloud Platform. As part of your release cycle, you want to test updates to your application on a small portion of real user traffic. The majority of the users should still be directed towards a stable version of your application. What should you do?

Options:

A.

Deploy the application on App Engine. For each update, create a new version of the same service. Configure traffic splitting to send a small percentage of traffic to the new version.

B.

Deploy the application on App Engine. For each update, create a new service. Configure traffic splitting to send a small percentage of traffic to the new service.

C.

Deploy the application on Kubernetes Engine. For a new release, update the deployment to use the new version.

D.

Deploy the application on Kubernetes Engine. For a new release, create a new deployment for the new version. Update the service to use the new deployment.

Question 27

Your company's security vulnerability management policy wants a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?

Options:

A.

• Ensure that the Ops Agent is installed on the Compute Engine instance.

• Create a custom metric in the Cloud Monitoring dashboard.

• Provide the security team member with access to this dashboard.

B.

• Ensure that the Ops Agent is installed on the Compute Engine instance.

• Provide the security team member roles/osconfig.inventoryViewer permission.

C.

• Ensure that the OS Config agent is installed on the Compute Engine instance.

• Provide the security team member roles/osconfig.vulnerabilityReportViewer permission.

D.

• Ensure that the OS Config agent is installed on the Compute Engine instance.

• Create a log sink to a BigQuery dataset.

• Provide the security team member with access to this dataset.

Question 28

Your team has developed a stateless application which requires it to be run directly on virtual machines. The application is expected to receive a fluctuating amount of traffic and needs to scale automatically. You need to deploy the application. What should you do?

Options:

A.

Deploy the application on a managed instance group and configure autoscaling.

B.

Deploy the application on a Kubernetes Engine cluster and configure node pool autoscaling.

C.

Deploy the application on Cloud Functions and configure the maximum number of instances.

D.

Deploy the application on Cloud Run and configure autoscaling.

Question 29

Your company has a 3-tier solution running on Compute Engine. The configuration of the current infrastructure is shown below.

Each tier has a service account that is associated with all instances within it. You need to enable communication on TCP port 8080 between tiers as follows:

• Instances in tier #1 must communicate with tier #2.

• Instances in tier #2 must communicate with tier #3.

What should you do?

Options:

A.

1. Create an ingress firewall rule with the following settings:
• Targets: all instances
• Source filter: IP ranges (with the range set to 10.0.2.0/24)
• Protocols: allow all
2. Create an ingress firewall rule with the following settings:
• Targets: all instances
• Source filter: IP ranges (with the range set to 10.0.1.0/24)
• Protocols: allow all

B.

1. Create an ingress firewall rule with the following settings:
• Targets: all instances with tier #2 service account
• Source filter: all instances with tier #1 service account
• Protocols: allow TCP:8080
2. Create an ingress firewall rule with the following settings:
• Targets: all instances with tier #3 service account
• Source filter: all instances with tier #2 service account
• Protocols: allow TCP:8080

C.

1. Create an ingress firewall rule with the following settings:
• Targets: all instances with tier #2 service account
• Source filter: all instances with tier #1 service account
• Protocols: allow all
2. Create an ingress firewall rule with the following settings:
• Targets: all instances with tier #3 service account
• Source filter: all instances with tier #2 service account
• Protocols: allow all

D.

1. Create an egress firewall rule with the following settings:
• Targets: all instances
• Source filter: IP ranges (with the range set to 10.0.2.0/24)
• Protocols: allow TCP:8080
2. Create an egress firewall rule with the following settings:
• Targets: all instances
• Source filter: IP ranges (with the range set to 10.0.1.0/24)
• Protocols: allow TCP:8080
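
For context, an ingress rule scoped by service accounts (as in option B) can be written with gcloud; a sketch with placeholder network and service account names:

    gcloud compute firewall-rules create allow-tier1-to-tier2 \
        --network=prod-network \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:8080 \
        --source-service-accounts=tier1-sa@my-project.iam.gserviceaccount.com \
        --target-service-accounts=tier2-sa@my-project.iam.gserviceaccount.com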

Question 30

You have one project called proj-sa where you manage all your service accounts. You want to be able to use a service account from this project to take snapshots of VMs running in another project called proj-vm. What should you do?

Options:

A.

Download the private key from the service account, and add it to each VM's custom metadata.

B.

Download the private key from the service account, and add the private key to each VM’s SSH keys.

C.

Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.

D.

When creating the VMs, set the service account’s API scope for Compute Engine to read/write.

Question 31

You manage three Google Cloud projects with the Cloud Monitoring API enabled. You want to follow Google-recommended practices to visualize CPU and network metrics for all three projects together. What should you do?

Options:

A.

1. Create a Cloud Monitoring Dashboard.

2. Collect metrics and publish them into the Pub/Sub topics.

3. Add CPU and network charts for each of the three projects.

B.

1. Create a Cloud Monitoring Dashboard.

2. Select the CPU and Network metrics from the three projects.

3. Add CPU and network charts for each of the three projects.

C.

1. Create a Service Account and apply roles/viewer on the three projects.

2. Collect metrics and publish them to the Cloud Monitoring API.

3. Add CPU and network charts for each of the three projects.

D.

1. Create a fourth Google Cloud project.

2. Create a Cloud Workspace from the fourth project and add the other three projects.

Question 32

You are deploying a web application using Compute Engine. You created a managed instance group (MIG) to host the application. You want to follow Google-recommended practices to implement a secure and highly available solution. What should you do?

Options:

A.

Use a proxy Network Load Balancer for the MIG and an A record in your DNS private zone with the load balancer's IP address.

B.

Use a proxy Network Load Balancer for the MIG and a CNAME record in your DNS public zone with the load balancer's IP address.

C.

Use an Application Load Balancer for the MIG and a CNAME record in your DNS private zone with the load balancer's IP address.

D.

Use an Application Load Balancer for the MIG and an A record in your DNS public zone with the load balancer's IP address.

Question 33

Your projects incurred more costs than you expected last month. Your research reveals that a development GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What should you do?

Options:

A.

1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.

B.

1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.

C.

1. Go to the GKE console, and delete existing clusters.
2. Recreate a new cluster.
3. Clear the option to enable legacy Stackdriver Logging.

D.

1. Go to the GKE console, and delete existing clusters.
2. Recreate a new cluster.
3. Clear the option to enable legacy Stackdriver Monitoring.

Question 34

You created a Google Cloud Platform project with an App Engine application inside the project. You initially configured the application to be served from the us-central region. Now you want the application to be served from the asia-northeast1 region. What should you do?

Options:

A.

Change the default region property setting in the existing GCP project to asia-northeast1.

B.

Change the region property setting in the existing App Engine application from us-central to asia-northeast1.

C.

Create a second App Engine application in the existing GCP project and specify asia-northeast1 as the region to serve your application.

D.

Create a new GCP project and create an App Engine application inside this new project. Specify asia-northeast1 as the region to serve your application.

Question 35

Your customer wants you to create a secure website with autoscaling based on the compute instance CPU load. You want to enhance performance by storing static content in Cloud Storage. Which resources are needed to distribute the user traffic?

Options:

A.

An internal HTTP(S) load balancer together with Identity-Aware Proxy to allow only HTTPS traffic.

B.

An external HTTP(S) load balancer to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend. Install the HTTPS certificates on the instance.

C.

An external HTTP(S) load balancer with a managed SSL certificate to distribute the load and a URL map to target the requests for the static content to the Cloud Storage backend.

D.

An external network load balancer pointing to the backend instances to distribute the load evenly. The web servers will forward the request to the Cloud Storage as needed.

Question 36

Your application is running on Google Cloud in a managed instance group (MIG). You see errors in Cloud Logging for one VM that one of the processes is not responsive. You want to replace this VM in the MIG quickly. What should you do?

Options:

A.

Select the MIG from the Compute Engine console and, in the menu, select Replace VMs.

B.

Use the gcloud compute instance-groups managed recreate-instances command to recreate the VM.

C.

Use the gcloud compute instances update command with a REFRESH action for the VM.

D.

Update and apply the instance template of the MIG.
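
For reference, the command in option B takes the MIG name, its zone, and the instance to recreate; the names below are placeholders:

    gcloud compute instance-groups managed recreate-instances my-mig \
        --zone=us-central1-a \
        --instances=my-mig-f2xz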

Question 37

You are given a project with a single virtual private cloud (VPC) and a single subnetwork in the us-central1 region. There is a Compute Engine instance hosting an application in this subnetwork. You need to deploy a new instance in the same project in the europe-west1 region. This new instance needs access to the application. You want to follow Google-recommended practices. What should you do?

Options:

A.

1. Create a subnetwork in the same VPC, in europe-west1.
2. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

B.

1. Create a VPC and a subnetwork in europe-west1.
2. Expose the application with an internal load balancer.
3. Create the new instance in the new subnetwork and use the load balancer's address as the endpoint.

C.

1. Create a subnetwork in the same VPC, in europe-west1.
2. Use Cloud VPN to connect the two subnetworks.
3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

D.

1. Create a VPC and a subnetwork in europe-west1.
2. Peer the 2 VPCs.
3. Create the new instance in the new subnetwork and use the first instance's private address as the endpoint.

Question 38

You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to have 8 GB of memory. What should you do?

Options:

A.

Rely on live migration to move the workload to a machine with more memory.

B.

Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.

C.

Stop the VM, change the machine type to n1-standard-8, and start the VM.

D.

Stop the VM, increase the memory to 8 GB, and start the VM.
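
For context, changing a VM's memory means stopping it and setting a different machine type; a sketch using a custom machine type with 2 vCPUs and 8 GB of memory (VM name and zone are placeholders):

    gcloud compute instances stop my-vm --zone=us-central1-a
    gcloud compute instances set-machine-type my-vm \
        --zone=us-central1-a \
        --machine-type=custom-2-8192
    gcloud compute instances start my-vm --zone=us-central1-a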

Question 39

You are building an application that processes data files uploaded from thousands of suppliers. Your primary goals for the application are data security and the expiration of aged data. You need to design the application to:

• Restrict access so that suppliers can access only their own data.

• Give suppliers write access to data only for 30 minutes.

• Delete data that is over 45 days old.

You have a very short development cycle, and you need to make sure that the application requires minimal maintenance. Which two strategies should you use? (Choose two.)

Options:

A.

Build a lifecycle policy to delete Cloud Storage objects after 45 days.

B.

Use signed URLs to allow suppliers limited time access to store their objects.

C.

Set up an SFTP server for your application, and create a separate user for each supplier.

D.

Build a Cloud function that triggers a timer of 45 days to delete objects that have expired.

E.

Develop a script that loops through all Cloud Storage buckets and deletes any buckets that are older than 45 days.
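
For context, time-limited write access and age-based deletion can both be expressed declaratively; a sketch assuming a service account key file and a bucket named supplier-uploads:

    # Signed URL granting 30 minutes of write access to a single object
    gsutil signurl -m PUT -d 30m sa-key.json gs://supplier-uploads/supplier-a/invoice.csv

    # lifecycle.json: delete objects older than 45 days
    {
      "rule": [
        {"action": {"type": "Delete"}, "condition": {"age": 45}}
      ]
    }

    # Apply the lifecycle configuration to the bucket
    gsutil lifecycle set lifecycle.json gs://supplier-uploads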

Question 40

You need to select and configure compute resources for a set of batch processing jobs. These jobs take around 2 hours to complete and are run nightly. You want to minimize service costs. What should you do?

Options:

A.

Select Google Kubernetes Engine. Use a single-node cluster with a small instance type.

B.

Select Google Kubernetes Engine. Use a three-node cluster with micro instance types.

C.

Select Compute Engine. Use preemptible VM instances of the appropriate standard machine type.

D.

Select Compute Engine. Use VM instance types that support micro bursting.

Question 41

Your company has a single sign-on (SSO) identity provider that supports Security Assertion Markup Language (SAML) integration with service providers. Your company has users in Cloud Identity. You would like users to authenticate using your company’s SSO provider. What should you do?

Options:

A.

In Cloud Identity, set up SSO with Google as an identity provider to access custom SAML apps.

B.

In Cloud Identity, set up SSO with a third-party identity provider with Google as a service provider.

C.

Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Mobile & Desktop Apps.

D.

Obtain OAuth 2.0 credentials, configure the user consent screen, and set up OAuth 2.0 for Web Server Applications.

Question 42

You are managing an application deployed on Cloud Run. The development team has released a new version of the application. You want to deploy and redirect traffic to this new version of the application. To ensure traffic to the new version of the application is served with no startup time, you want to ensure that there are two idle instances available for incoming traffic before adjusting the traffic flow. You also want to minimize administrative overhead. What should you do?

Options:

A.

Ensure the checkbox "Serve this revision immediately" is unchecked when deploying the new revision. Before changing the traffic rules, use a traffic simulation tool to send load to the new revision.

B.

Configure service autoscaling and set the minimum number of instances to 2.

C.

Configure revision autoscaling for the new revision and set the minimum number of instances to 2.

D.

Configure revision autoscaling for the existing revision and set the minimum number of instances to 2.
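
For reference, revision-level minimum instances and a separate traffic shift can be driven from the gcloud CLI; the service and image names below are placeholders:

    # Deploy the new revision with warm instances but no traffic yet
    gcloud run deploy my-service \
        --image=gcr.io/my-project/my-app:v2 \
        --min-instances=2 \
        --no-traffic

    # Shift traffic to the new revision once its instances are ready
    gcloud run services update-traffic my-service --to-latest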

Question 43

You have created an application that is packaged into a Docker image. You want to deploy the Docker image as a workload on Google Kubernetes Engine. What should you do?

Options:

A.

Upload the image to Cloud Storage and create a Kubernetes Service referencing the image.

B.

Upload the image to Cloud Storage and create a Kubernetes Deployment referencing the image.

C.

Upload the image to Container Registry and create a Kubernetes Service referencing the image.

D.

Upload the image to Container Registry and create a Kubernetes Deployment referencing the image.

Question 44

Your company has embraced a hybrid cloud strategy where some of the applications are deployed on Google Cloud. A Virtual Private Network (VPN) tunnel connects your Virtual Private Cloud (VPC) in Google Cloud with your company's on-premises network. Multiple applications in Google Cloud need to connect to an on-premises database server, and you want to avoid having to change the IP configuration in all of your applications when the IP of the database changes.

What should you do?

Options:

A.

Configure Cloud NAT for all subnets of your VPC to be used when egressing from the VM instances.

B.

Create a private zone on Cloud DNS, and configure the applications with the DNS name.

C.

Configure the IP of the database as custom metadata for each instance, and query the metadata server.

D.

Query the Compute Engine internal DNS from the applications to retrieve the IP of the database.

Question 45

You need to track and verify modifications to a set of Google Compute Engine instances in your Google Cloud project. In particular, you want to verify OS system patching events on your virtual machines (VMs). What should you do?

Options:

A.

Review the Compute Engine activity logs. Select and review the Admin Event logs.

B.

Review the Compute Engine activity logs. Select and review the System Event logs.

C.

Install the Cloud Logging agent. In Cloud Logging, review the Compute Engine syslog logs.

D.

Install the Cloud Logging agent. In Cloud Logging, review the Compute Engine operation logs.

Question 46

You have a VM instance running in a VPC with single-stack subnets. You need to ensure that the VM instance has a fixed IP address so that other services hosted in the same VPC can communicate with the VM. You want to follow Google-recommended practices while minimizing cost. What should you do?

Options:

A.

Reserve a new static external IP address and assign the new IP address to the VM.

B.

Promote the existing IP address of the VM to become a static external IP address.

C.

Reserve a new static external IPv6 address and assign the new IP address to the VM.

D.

Promote the existing IP address of the VM to become a static internal IP address.
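
For context, an in-use ephemeral internal address is promoted by reserving that same address as a static internal address; a sketch in which the name, region, subnet, and address are placeholders (the address must be the VM's current internal IP):

    gcloud compute addresses create my-vm-internal-ip \
        --region=us-central1 \
        --subnet=my-subnet \
        --addresses=10.128.0.12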

Question 47

Your company uses a large number of Google Cloud services centralized in a single project. All teams have specific projects for testing and development. The DevOps team needs access to all of the production services in order to perform their job. You want to prevent Google Cloud product changes from broadening their permissions in the future. You want to follow Google-recommended practices. What should you do?

Options:

A.

Grant all members of the DevOps team the role of Project Editor on the organization level.

B.

Grant all members of the DevOps team the role of Project Editor on the production project.

C.

Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the production project.

D.

Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the organization level.

Question 48

You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?

Options:

A.

• Create Compute Engine resources in us-central1-b.

• Balance the load across both us-central1-a and us-central1-b.

B.

• Create a Managed Instance Group and specify us-central1-a as the zone.

• Configure the Health Check with a short Health Interval.

C.

• Create an HTTP(S) Load Balancer.

• Create one or more global forwarding rules to direct traffic to your VMs.

D.

• Perform regular backups of your application.

• Create a Cloud Monitoring Alert and be notified if your application becomes unavailable.

• Restore from backups when notified.

Question 49

An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed until 2 weeks later. You need to find out whether this employee accessed any sensitive customer information after their termination. What should you do?

Options:

A.

View System Event Logs in Stackdriver. Search for the user’s email as the principal.

B.

View System Event Logs in Stackdriver. Search for the service account associated with the user.

C.

View Data Access audit logs in Stackdriver. Search for the user’s email as the principal.

D.

View the Admin Activity log in Stackdriver. Search for the service account associated with the user.

Question 50

You want to permanently delete a Pub/Sub topic managed by Config Connector in your Google Cloud project. What should you do?

Options:

A.

Use kubectl to delete the topic resource.

B.

Use gcloud CLI to delete the topic.

C.

Use kubectl to create the label deleted-by-cnrm and to change its value to true for the topic resource.

D.

Use gcloud CLI to update the topic label managed-by-cnrm to false.

Question 51

Your company publishes large files on an Apache web server that runs on a Compute Engine instance. The Apache web server is not the only application running in the project. You want to receive an email when the egress network costs for the server exceed 100 dollars for the current month as measured by Google Cloud Platform (GCP). What should you do?

Options:

A.

Set up a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”

B.

Set up a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and notification type of “email.”

C.

Export the billing data to BigQuery. Create a Cloud Function that uses BigQuery to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.

D.

Use the Stackdriver Logging Agent to export the Apache web server logs to Stackdriver Logging. Create a Cloud Function that uses BigQuery to parse the HTTP response log data in Stackdriver for the current month and sends an email if the size of all HTTP responses, multiplied by current GCP egress prices, totals over 100 dollars. Schedule the Cloud Function using Cloud Scheduler to run hourly.

Question 52

You need to verify that a Google Cloud Platform service account was created at a particular time. What should you do?

Options:

A.

Filter the Activity log to view the Configuration category. Filter the Resource type to Service Account.

B.

Filter the Activity log to view the Configuration category. Filter the Resource type to Google Project.

C.

Filter the Activity log to view the Data Access category. Filter the Resource type to Service Account.

D.

Filter the Activity log to view the Data Access category. Filter the Resource type to Google Project.

Question 53

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.

    $ kubectl get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-84748895c4-nqqmt   1/1     Running   0          9m41s
    $ kubectl delete pod nginx-84748895c4-nqqmt
    pod nginx-84748895c4-nqqmt deleted
    $ kubectl get pods
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-84748895c4-k6bzl   1/1     Running   0          25s

What should you do to delete the deployment and avoid the pod getting recreated?

Options:

A.

kubectl delete deployment nginx

B.

kubectl delete --deployment=nginx

C.

kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2

D.

kubectl delete inginx

Question 54

You are the organization and billing administrator for your company. The engineering team has the Project Creator role on the organization. You do not want the engineering team to be able to link projects to the billing account. Only the finance team should be able to link a project to a billing account, but they should not be able to make any other changes to projects. What should you do?

Options:

A.

Assign the finance team only the Billing Account User role on the billing account.

B.

Assign the engineering team only the Billing Account User role on the billing account.

C.

Assign the finance team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

D.

Assign the engineering team the Billing Account User role on the billing account and the Project Billing Manager role on the organization.

Question 55

Your organization has strict requirements to control access to Google Cloud projects. You need to enable your Site Reliability Engineers (SREs) to approve requests from the Google Cloud support team when an SRE opens a support case. You want to follow Google-recommended practices. What should you do?

Options:

A.

Add your SREs to roles/iam.roleAdmin role.

B.

Add your SREs to roles/accessapproval.approver role.

C.

Add your SREs to a group and then add this group to roles/iam.roleAdmin role.

D.

Add your SREs to a group and then add this group to roles/accessapproval.approver role.

Question 56

You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices. What should you do?

Options:

A.

Add the auditors group to the ‘logging.viewer’ and ‘bigQuery.dataViewer’ predefined IAM roles.

B.

Add the auditors group to two new custom IAM roles.

C.

Add the auditor user accounts to the ‘logging.viewer’ and ‘bigQuery.dataViewer’ predefined IAM roles.

D.

Add the auditor user accounts to two new custom IAM roles.

Question 57

You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What should you do?

Options:

A.

1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic.
2. Call your application on Cloud Run from the Cloud Function for every message.

B.

1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run.
2. Create a Cloud Pub/Sub subscription for that topic.
3. Make your application pull messages from that subscription.

C.

1. Create a service account.
2. Give the Cloud Run Invoker role to that service account for your Cloud Run application.
3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.

D.

1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal.
2. Create a Cloud Pub/Sub subscription for that topic.
3. In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.
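
For reference, an authenticated push subscription targeting a Cloud Run service is created like this; the service, topic, URL, and service account names are placeholders:

    # Let the service account invoke the Cloud Run service
    gcloud run services add-iam-policy-binding my-app \
        --member="serviceAccount:pubsub-invoker@my-project.iam.gserviceaccount.com" \
        --role="roles/run.invoker"

    # Push subscription that calls the service with that identity
    gcloud pubsub subscriptions create my-sub \
        --topic=my-topic \
        --push-endpoint=https://my-app-abc123-uc.a.run.app/ \
        --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com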

Question 58

You need to migrate invoice documents stored on-premises to Cloud Storage. The documents have the following storage requirements:

• Documents must be kept for five years.

• Up to five revisions of the same invoice document must be stored, to allow for corrections.

• Documents older than 365 days should be moved to lower cost storage tiers.

You want to follow Google-recommended practices to minimize your operational and development costs. What should you do?

Options:

A.

Enable retention policies on the bucket, and use Cloud Scheduler to invoke a Cloud Function to move or delete your documents based on their metadata.

B.

Enable retention policies on the bucket, use lifecycle rules to change the storage classes of the objects, set the number of versions, and delete old files.

C.

Enable object versioning on the bucket, and use Cloud Scheduler to invoke a Cloud Functions instance to move or delete your documents based on their metadata.

D.

Enable object versioning on the bucket, use lifecycle conditions to change the storage class of the objects, set the number of versions, and delete old files.
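
For context, lifecycle conditions can act on object age and on the number of newer versions; a sketch of a configuration matching the stated requirements (object versioning is enabled on the bucket separately):

    {
      "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
         "condition": {"age": 365}},
        {"action": {"type": "Delete"},
         "condition": {"numNewerVersions": 5}},
        {"action": {"type": "Delete"},
         "condition": {"age": 1825}}
      ]
    }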

Question 59

For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

Options:

A.

1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances.
2. Update your instances' metadata to add the following value: logs-destination: bq://platform-logs.

B.

1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.
2. Create a Cloud Function that is triggered by messages in the logs topic.
3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.

C.

1. In Stackdriver Logging, create a filter to view only Compute Engine logs.
2. Click Create Export.
3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

D.

1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.
2. Configure this Cloud Function to create a BigQuery job that executes this query:
    INSERT INTO dataset.platform-logs (timestamp, log)
    SELECT timestamp, log FROM compute.logs
    WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
3. Use Cloud Scheduler to trigger this Cloud Function once a day.

Question 60

You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic pattern. You want to automatically scale up or down the number of Spanner nodes depending on traffic. What should you do?

Options:

A.

Create a cron job that runs on a scheduled basis to review stackdriver monitoring metrics, and then resize the Spanner instance accordingly.

B.

Create a Stackdriver alerting policy to send an alert to oncall SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.

C.

Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google support would scale resources up or down accordingly.

D.

Create a Stackdriver alerting policy to send an alert to webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.

Question 61

Your team maintains the infrastructure for your organization. The current infrastructure requires changes. You need to share your proposed changes with the rest of the team. You want to follow Google’s recommended best practices. What should you do?

Options:

A.

Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.

B.

Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.

C.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.

D.

Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Question 62

An external member of your team needs list access to compute images and disks in one of your projects. You want to follow Google-recommended practices when you grant the required permissions to this user. What should you do?

Options:

A.

Create a custom role, and add all the required compute.disks.list and compute.images.list permissions as includedPermissions. Grant the custom role to the user at the project level.

B.

Create a custom role based on the Compute Image User role. Add compute.disks.list to the includedPermissions field. Grant the custom role to the user at the project level.

C.

Grant the Compute Storage Admin role at the project level.

D.

Create a custom role based on the Compute Storage Admin role. Exclude unnecessary permissions from the custom role. Grant the custom role to the user at the project level.

Question 63

You need to provide a cost estimate for a Kubernetes cluster using the GCP pricing calculator for Kubernetes. Your workload requires high IOPs, and you will also be using disk snapshots. You start by entering the number of nodes, average hours, and average days. What should you do next?

Options:

A.

Fill in local SSD. Fill in persistent disk storage and snapshot storage.

B.

Fill in local SSD. Add estimated cost for cluster management.

C.

Select Add GPUs. Fill in persistent disk storage and snapshot storage.

D.

Select Add GPUs. Add estimated cost for cluster management.

Question 64

You have created a new project in Google Cloud through the gcloud command line interface (CLI) and linked a billing account. You need to create a new Compute Engine instance using the CLI. You need to perform the prerequisite steps. What should you do?

Options:

A.

Create a Cloud Monitoring Workspace.

B.

Create a VPC network in the project.

C.

Enable the compute.googleapis.com API.

D.

Grant yourself the IAM role of Compute Admin.
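
For reference, an API is enabled on the current project from the CLI with a single command, for example:

    gcloud services enable compute.googleapis.com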

Question 65

You want to configure an SSH connection to a single Compute Engine instance for users in the dev1 group. This instance is the only resource in this particular Google Cloud Platform project that the dev1 users should be able to connect to. What should you do?

Options:

A.

Set metadata to enable-oslogin=true for the instance. Grant the dev1 group the compute.osLogin role. Direct them to use the Cloud Shell to ssh to that instance.

B.

Set metadata to enable-oslogin=true for the instance. Set the service account to no service account for that instance. Direct them to use the Cloud Shell to ssh to that instance.

C.

Enable block project wide keys for the instance. Generate an SSH key for each user in the dev1 group. Distribute the keys to dev1 users and direct them to use their third-party tools to connect.

D.

Enable block project wide keys for the instance. Generate an SSH key and associate the key with that instance. Distribute the key to dev1 users and direct them to use their third-party tools to connect.
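
For context, instance-level OS Login plus an instance-scoped IAM grant looks roughly like this; the instance name, zone, and group address are placeholders:

    # Enable OS Login on just this instance
    gcloud compute instances add-metadata dev-vm \
        --zone=us-central1-a \
        --metadata=enable-oslogin=TRUE

    # Grant the dev1 group login access on this instance only
    gcloud compute instances add-iam-policy-binding dev-vm \
        --zone=us-central1-a \
        --member="group:dev1@example.com" \
        --role="roles/compute.osLogin"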

Question 66

During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain.

You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?

Options:

A.

Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.

B.

Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.

C.

Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.

D.

Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users.

Question 67

You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do?

Options:

A.

Create a signed URL with a four-hour expiration and share the URL with the company.

B.

Set object access to ‘public’ and use object lifecycle management to remove the object after four hours.

C.

Configure the storage bucket as a static website and furnish the object’s URL to the company. Delete the object from the storage bucket after four hours.

D.

Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.
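
For illustration, generating a time-limited signed URL is a single command, assuming you have a service account key file with access to the object (key file, bucket, and object names are placeholders):

# Create a signed URL that expires four hours after creation
gsutil signurl -d 4h sa-key.json gs://example-bucket/sensitive-report.csv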

Question 68

Your managed instance group raised an alert stating that it failed to create new instances. You need to maintain the number of running instances specified by the template to be able to process expected application traffic. What should you do?

Options:

A.

Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.

B.

Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.

C.

Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.

D.

Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.

Question 69

You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?

Options:

A.

Create a Cloud Memorystore for Redis instance with 32-GB capacity.

B.

Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.

C.

Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.

D.

Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.
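
As a sketch, an n1 custom machine type with 6 vCPUs and 32 GB of memory (matching option B) is written custom-<vCPUs>-<memoryMB>; the instance name and zone are placeholders:

# 32 GB = 32768 MB; custom machine types are billed for exactly what you provision
gcloud compute instances create cache-proxy --zone=us-central1-a \
    --machine-type=custom-6-32768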

Question 70

Your organization has user identities in Active Directory. Your organization wants to use Active Directory as their source of truth for identities. Your organization wants to have full control over the Google accounts used by employees for all Google services, including your Google Cloud Platform (GCP) organization. What should you do?

Options:

A.

Use Google Cloud Directory Sync (GCDS) to synchronize users into Cloud Identity.

B.

Use the Cloud Identity APIs and write a script to synchronize users to Cloud Identity.

C.

Export users from Active Directory as a CSV and import them to Cloud Identity via the Admin Console.

D.

Ask each employee to create a Google account using self signup. Require that each employee use their company email address and password.

Question 71

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

Options:

A.

Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.

B.

Recreate all the nodes of the GKE cluster to enable GPUs on all of them.

C.

Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.

D.

Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
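
A hedged sketch of adding a GPU node pool (cluster, zone, and pool names are placeholders); autoscaling the pool down to zero keeps cost low when no ML pods are running, and on Standard clusters the NVIDIA drivers must also be installed (Google provides an installer DaemonSet):

# Add a P100 node pool that scales to zero when idle
gcloud container node-pools create gpu-pool --cluster=shared-cluster \
    --zone=us-central1-a --accelerator=type=nvidia-tesla-p100,count=1 \
    --enable-autoscaling --min-nodes=0 --max-nodes=3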

Question 72

You are developing a financial trading application that will be used globally. Data is stored and queried using a relational structure, and clients from all over the world should see the exact same state of the data. The application will be deployed in multiple regions to provide the lowest latency to end users. You need to select a storage option for the application data while minimizing latency. What should you do?

Options:

A.

Use Cloud Bigtable for data storage.

B.

Use Cloud SQL for data storage.

C.

Use Cloud Spanner for data storage.

D.

Use Firestore for data storage.
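
For illustration, a multi-region Cloud Spanner instance (strongly consistent, globally replicated relational storage) could be provisioned roughly as follows; the instance name and the nam-eur-asia1 configuration are examples:

# Create a multi-region Spanner instance
gcloud spanner instances create trading-db --config=nam-eur-asia1 \
    --description="Global trading data" --nodes=3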

Question 73

Your manager asks you to deploy a workload to a Kubernetes cluster. You are not sure of the workload's resource requirements or how the requirements might vary depending on usage patterns, external dependencies, or other factors. You need a solution that makes cost-effective recommendations regarding CPU and memory requirements, and allows the workload to function consistently in any situation. You want to follow Google-recommended practices. What should you do?

Options:

A.

Configure the Horizontal Pod Autoscaler for availability, and configure the cluster autoscaler for suggestions.

B.

Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical Pod Autoscaler recommendations for suggestions.

C.

Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Cluster autoscaler for suggestions.

D.

Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Horizontal Pod Autoscaler for suggestions.
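
A minimal sketch of a Vertical Pod Autoscaler object running in recommendation-only mode (the Deployment name is a placeholder, and the cluster must have VPA enabled); with updateMode set to Off it only publishes CPU and memory suggestions while an HPA keeps the workload available:

kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: workload-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workload
  updatePolicy:
    updateMode: "Off"   # recommendations only, no automatic pod resizing
EOF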

Question 74

Your application development team has created Docker images for an application that will be deployed on Google Cloud. Your team does not want to manage the infrastructure associated with this application. You need to ensure that the application can scale automatically as it gains popularity. What should you do?

Options:

A.

Create an Instance template with the container image, and deploy a Managed Instance Group with Autoscaling.

B.

Upload Docker images to Artifact Registry, and deploy the application on Google Kubernetes Engine using Standard mode.

C.

Upload Docker images to Cloud Storage, and deploy the application on Google Kubernetes Engine using Standard mode.

D.

Upload Docker images to Artifact Registry, and deploy the application on Cloud Run.
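
As a sketch of the Artifact Registry plus Cloud Run path (project, repository, image, and region names are placeholders); Cloud Run scales instances automatically with traffic, including down to zero:

# Push the team's image to Artifact Registry, then deploy it to Cloud Run
docker push us-central1-docker.pkg.dev/my-project/app-repo/web-app:v1
gcloud run deploy web-app \
    --image=us-central1-docker.pkg.dev/my-project/app-repo/web-app:v1 \
    --region=us-central1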

Question 75

Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute Engine instances. The pipeline will manage the entire cloud infrastructure through code. How can you ensure that the pipeline has appropriate permissions while your system is following security best practices?

Options:

A.

• Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure provisioning.

• Use the human approver's IAM account for the provisioning.

B.

• Attach a single service account to the compute instances.

• Add minimal rights to the service account.

• Allow the service account to impersonate a Cloud Identity user with elevated permissions to create, update, or delete resources.

C.

• Attach a single service account to the compute instances.

• Add all required Identity and Access Management (IAM) permissions to this service account to create, update, or delete resources.

D.

• Create multiple service accounts, one for each pipeline, with the appropriate minimal Identity and Access Management (IAM) permissions.

• Use a secret manager service to store the key files of the service accounts.

• Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.
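
Whichever option is chosen, creating a dedicated service account with only the roles a pipeline needs looks roughly like this (project ID, account name, and the example role are placeholders for whatever minimal set your pipeline requires):

# Dedicated service account for one pipeline
gcloud iam service-accounts create infra-pipeline-sa

# Grant only the narrowly scoped roles that this pipeline needs
gcloud projects add-iam-policy-binding my-project \
    --member=serviceAccount:infra-pipeline-sa@my-project.iam.gserviceaccount.com \
    --role=roles/compute.instanceAdmin.v1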

Question 76

You need to create a copy of a custom Compute Engine virtual machine (VM) to facilitate an expected increase in application traffic due to a business acquisition. What should you do?

Options:

A.

Create a Compute Engine snapshot of your base VM. Create your images from that snapshot.

B.

Create a Compute Engine snapshot of your base VM. Create your instances from that snapshot.

C.

Create a custom Compute Engine image from a snapshot. Create your images from that image.

D.

Create a custom Compute Engine image from a snapshot. Create your instances from that image.
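
For illustration, a hedged sketch of going from snapshot to image to new instances (snapshot, image, instance, and zone names are placeholders):

# Turn a snapshot of the base VM into a reusable custom image
gcloud compute images create base-app-image --source-snapshot=base-vm-snapshot

# Create additional instances from that image as traffic grows
gcloud compute instances create app-vm-2 --zone=us-central1-a --image=base-app-image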

Question 77

Your organization is a financial company that needs to store audit log files for 3 years. Your organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention. What should you do?

Options:

A.

Create an export to the sink that saves logs from Cloud Audit to BigQuery.

B.

Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.

C.

Write a custom script that uses logging API to copy the logs from Stackdriver logs to BigQuery.

D.

Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.
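
A hedged sketch of an organization-level aggregated sink that routes audit logs into a Coldline bucket (bucket name, location, organization ID, and filter are illustrative); the sink's writer identity additionally needs permission to write objects into the bucket:

# Coldline bucket to hold three years of audit logs
gcloud storage buckets create gs://org-audit-log-archive \
    --default-storage-class=COLDLINE --location=US

# Aggregated sink that captures audit logs from every project in the organization
gcloud logging sinks create audit-archive-sink \
    storage.googleapis.com/org-audit-log-archive \
    --organization=123456789012 --include-children \
    --log-filter='logName:"cloudaudit.googleapis.com"'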

Question 78

You recently discovered that your developers are using many service account keys during their development process. While you work on a long-term improvement, you need to quickly implement a process to enforce short-lived service account credentials in your company. You have the following requirements:

• All service accounts that require a key should be created in a centralized project called pj-sa.

• Service account keys should only be valid for one day.

You need a Google-recommended solution that minimizes cost. What should you do?

Options:

A.

Implement a Cloud Run job to rotate all service account keys periodically in pj-sa. Enforce an org policy to deny service account key creation with an exception to pj-sa.

B.

Implement a Kubernetes CronJob to rotate all service account keys periodically. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.

C.

Enforce an org policy constraint allowing the lifetime of service account keys to be 24 hours. Enforce an org policy constraint denying service account key creation with an exception on pj-sa.

D.

Enforce a DENY org policy constraint over the lifetime of service account keys for 24 hours. Disable attachment of service accounts to resources in all projects with an exception to pj-sa.
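
As a sketch of the org-policy approach (the organization ID is a placeholder; constraint names are shown as they appear in current documentation, so verify their availability before relying on them):

# Limit service account key lifetime to 24 hours across the organization
gcloud resource-manager org-policies allow iam.serviceAccountKeyExpiryHours 24h \
    --organization=123456789012

# Deny key creation everywhere ...
gcloud resource-manager org-policies enable-enforce \
    iam.disableServiceAccountKeyCreation --organization=123456789012

# ... except in the centralized project pj-sa
gcloud resource-manager org-policies disable-enforce \
    iam.disableServiceAccountKeyCreation --project=pj-sa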

Question 79

Your auditor wants to view your organization's use of data in Google Cloud. The auditor is most interested in auditing who accessed data in Cloud Storage buckets. You need to help the auditor access the data they need. What should you do?

Options:

A.

Assign the appropriate permissions, and then use Cloud Monitoring to review metrics

B.

Use the export logs API to provide the Admin Activity Audit Logs in the format they want

C.

Turn on Data Access Logs for the buckets they want to audit, and then build a query in the log viewer that filters on Cloud Storage

D.

Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit Logs
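
For illustration, once Data Access audit logs are enabled for Cloud Storage, the auditor (with log-viewing permissions) can filter for them; a hedged sketch of an equivalent CLI query (the filter is illustrative):

# Read recent Cloud Storage data-access audit log entries
gcloud logging read \
    'logName:"cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket"' \
    --limit=20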

Question 80

You are managing a Data Warehouse on BigQuery. An external auditor will review your company's processes, and multiple external consultants will need view access to the data. You need to provide them with view access while following Google-recommended practices. What should you do?

Options:

A.

Grant each individual external consultant the role of BigQuery Editor

B.

Grant each individual external consultant the role of BigQuery Viewer

C.

Create a Google Group that contains the consultants and grant the group the role of BigQuery Editor

D.

Create a Google Group that contains the consultants, and grant the group the role of BigQuery Viewer
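
As a sketch, view access is granted once to the group rather than per consultant (group address and project ID are placeholders; the predefined read-only role ID is roles/bigquery.dataViewer):

# Grant read-only BigQuery access to the consultants' Google Group
gcloud projects add-iam-policy-binding my-dw-project \
    --member=group:external-consultants@example.com \
    --role=roles/bigquery.dataViewer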

Question 81

You have files in a Cloud Storage bucket that you need to share with your suppliers. You want to restrict the time that the files are available to your suppliers to 1 hour. You want to follow Google recommended practices. What should you do?

Options:

A.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.

B.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///.

C.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -p 60m gs:///.

D.

Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m gs:///*

Question 82

You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure. What should you do?

Options:

A.

Deploy the application on GKE Autopilot.

B.

Deploy the application on GKE Standard.

C.

Deploy the application on Cloud Functions.

D.

Deploy the application on Cloud Run.
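
For illustration, an Autopilot cluster removes node management while keeping the full Kubernetes API surface for your manifest (cluster name, region, and manifest file are placeholders):

# Create an Autopilot cluster, then apply the existing Kubernetes manifest
gcloud container clusters create-auto my-app-cluster --region=us-central1
kubectl apply -f deployment.yaml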

Question 83

You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation. How should you set up the policy?

Options:

A.

Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 – 90)

B.

Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.

C.

Use gsutil rewrite and set the Delete action to 275 days (365-90).

D.

Use gsutil rewrite and set the Delete action to 365 days.
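
For illustration, a lifecycle configuration implementing the 90-day transition and 365-day deletion could look like this (the bucket name is a placeholder); both Age conditions count from each object's creation time, which is why the Delete action uses 365 rather than 275:

# lifecycle.json: move to Coldline at 90 days, delete at 365 days
cat > lifecycle.json <<EOF
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
     "condition": {"age": 90}},
    {"action": {"type": "Delete"}, "condition": {"age": 365}}
  ]
}
EOF

gsutil lifecycle set lifecycle.json gs://video-archive-bucket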

Question 84

You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?

Options:

A.

Deploy the monitoring pod in a StatefulSet object.

B.

Deploy the monitoring pod in a DaemonSet object.

C.

Reference the monitoring pod in a Deployment object.

D.

Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.
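
A minimal DaemonSet sketch (names and the agent image are placeholders); a DaemonSet schedules exactly one copy of the monitoring pod on every node, including nodes the cluster autoscaler adds later:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-agent
spec:
  selector:
    matchLabels:
      app: metrics-agent
  template:
    metadata:
      labels:
        app: metrics-agent
    spec:
      containers:
      - name: agent
        image: example.com/metrics-agent:1.0   # placeholder third-party agent image
EOF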

Question 85

Your company developed a mobile game that is deployed on Google Cloud. Gamers are connecting to the game with their personal phones over the Internet. The game sends UDP packets to update the servers about the gamers' actions while they are playing in multiplayer mode. Your game backend can scale over multiple virtual machines (VMs), and you want to expose the VMs over a single IP address. What should you do?

Options:

A.

Configure an SSL Proxy load balancer in front of the application servers.

B.

Configure an Internal UDP load balancer in front of the application servers.

C.

Configure an External HTTP(s) load balancer in front of the application servers.

D.

Configure an External Network load balancer in front of the application servers.
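
A hedged sketch of an external passthrough Network Load Balancer for UDP (names, region, zone, and port are placeholders); the forwarding rule provides the single external IP that fronts all backend VMs:

# Target pool containing the game backend VMs
gcloud compute target-pools create game-pool --region=us-central1
gcloud compute target-pools add-instances game-pool \
    --instances=game-vm-1,game-vm-2 --instances-zone=us-central1-a

# Single external IP forwarding UDP traffic to the pool
gcloud compute forwarding-rules create game-udp-rule --region=us-central1 \
    --ip-protocol=UDP --ports=9000 --target-pool=game-pool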

Question 86

You are running a data warehouse on BigQuery. A partner company is offering a recommendation engine based on the data in your data warehouse. The partner company is also running their application on Google Cloud. They manage the resources in their own project, but they need access to the BigQuery dataset in your project. You want to provide the partner company with access to the dataset. What should you do?

Options:

A.

Create a Service Account in your own project, and grant this Service Account access to BigQuery in your project

B.

Create a Service Account in your own project, and ask the partner to grant this Service Account access to BigQuery in their project

C.

Ask the partner to create a Service Account in their project, and have them give the Service Account access to BigQuery in their project

D.

Ask the partner to create a Service Account in their project, and grant their Service Account access to the BigQuery dataset in your project
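
As a sketch, once the partner supplies the email of a service account created in their project, you grant it access on your side (project ID and service account address are placeholders; a project-level binding is shown for brevity, although dataset-level access is narrower):

# Grant the partner's service account read access to BigQuery data in your project
gcloud projects add-iam-policy-binding my-dw-project \
    --member=serviceAccount:recommender@partner-project.iam.gserviceaccount.com \
    --role=roles/bigquery.dataViewer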

Question 87

You are migrating a production-critical on-premises application that requires 96 vCPUs to perform its task. You want to make sure the application runs in a similar environment on GCP. What should you do?

Options:

A.

When creating the VM, use machine type n1-standard-96.

B.

When creating the VM, use Intel Skylake as the CPU platform.

C.

Create the VM using Compute Engine default settings. Use gcloud to modify the running instance to have 96 vCPUs.

D.

Start the VM using Compute Engine default settings, and adjust as you go based on Rightsizing Recommendations.
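
For illustration (instance name and zone are placeholders):

# Predefined machine type with 96 vCPUs, matching the on-premises sizing
gcloud compute instances create legacy-app-vm --zone=us-central1-a \
    --machine-type=n1-standard-96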

Question 88

You are responsible for a web application on Compute Engine. You want your support team to be notified automatically if users experience high latency for at least 5 minutes. You need a Google-recommended solution with no development cost. What should you do?

Options:

A.

Create an alert policy to send a notification when the HTTP response latency exceeds the specified threshold.

B.

Implement an App Engine service which invokes the Cloud Monitoring API and sends a notification in case of anomalies.

C.

Use the Cloud Monitoring dashboard to observe latency and take the necessary actions when the response latency exceeds the specified threshold.

D.

Export Cloud Monitoring metrics to BigQuery and use a Looker Studio dashboard to monitor your web application's latency.

Question 89

You need to deploy an application in Google Cloud using serverless technology. You want to test a new version of the application with a small percentage of production traffic. What should you do?

Options:

A.

Deploy the application to Cloud Run. Use gradual rollouts for traffic splitting.

B.

Deploy the application to Google Kubernetes Engine. Use Anthos Service Mesh for traffic splitting.

C.

Deploy the application to Cloud Functions. Specify the version number in the name of the function.

D.

Deploy the application to App Engine. For each new version, create a new service.
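
For illustration, a canary-style rollout on Cloud Run could be sketched like this (service, image, region, and the revision name in the second command are placeholders):

# Deploy the new revision without sending it any traffic yet
gcloud run deploy game-api --region=us-central1 --no-traffic --tag=canary \
    --image=us-central1-docker.pkg.dev/my-project/repo/api:v2

# Shift 5% of production traffic to the new revision
gcloud run services update-traffic game-api --region=us-central1 \
    --to-revisions=game-api-00002-abc=5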

Question 90

You are deploying a production application on Compute Engine. You want to prevent anyone from accidentally destroying the instance by clicking the wrong button. What should you do?

Options:

A.

Disable the flag “Delete boot disk when instance is deleted.”

B.

Enable delete protection on the instance.

C.

Disable Automatic restart on the instance.

D.

Enable Preemptibility on the instance.
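
As a sketch (instance name and zone are placeholders); with deletion protection enabled, delete requests fail until the flag is explicitly cleared:

gcloud compute instances update prod-app-vm --zone=us-central1-a \
    --deletion-protection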

Question 91

You are migrating a business critical application from your local data center into Google Cloud. As part of your high-availability strategy, you want to ensure that any data used by the application will be immediately available if a zonal failure occurs. What should you do?

Options:

A.

Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.

B.

Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.

C.

Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.

D.

Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
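
A hedged sketch of the regional persistent disk approach (disk, instance, region, zone, and size values are placeholders); the disk is synchronously replicated across two zones, so the surviving replica can be force-attached to a VM in the other zone during an outage:

# Regional disk replicated across two zones
gcloud compute disks create app-data-disk --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b --size=200GB

# After a zonal failure, attach the disk to a standby VM in the other zone
gcloud compute instances attach-disk standby-vm --zone=us-central1-b \
    --disk=app-data-disk --disk-scope=regional --force-attach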

Question 92

Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the amount of repetitive code needed to manage the environment. What should you do?

Options:

A.

Create a bash script that contains all required steps as gcloud commands

B.

Develop templates for the environment using Cloud Deployment Manager

C.

Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.

D.

Use the Cloud Console interface to provision and manage all related resources
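
For illustration, Deployment Manager keeps repeated resource definitions in reusable templates that a small config imports (file names, template properties, and the deployment name are placeholders):

# config.yaml imports a template so repeated resources are defined once
cat > config.yaml <<EOF
imports:
- path: vm_template.jinja

resources:
- name: web-vm
  type: vm_template.jinja
  properties:
    zone: us-central1-a
    machineType: e2-medium
EOF

gcloud deployment-manager deployments create standard-env --config=config.yaml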