
Microsoft DP-600 Implementing Analytics Solutions Using Microsoft Fabric Exam Practice Test

Demo: 20 questions
Total 101 questions

Implementing Analytics Solutions Using Microsoft Fabric Questions and Answers

Question 1

What should you use to implement calculation groups for the Research division semantic models?

Options:

A.

DAX Studio

B.

Microsoft Power BI Desktop

C.

the Power BI service

D.

Tabular Editor

Question 2

You need to refresh the Orders table of the Online Sales department. The solution must meet the semantic model requirements. What should you include in the solution?

Options:

A.

an Azure Data Factory pipeline that executes a dataflow to retrieve the minimum value of the OrderID column in the destination lakehouse

B.

an Azure Data Factory pipeline that executes a Stored procedure activity to retrieve the maximum value of the OrderID column in the destination lakehouse

C.

an Azure Data Factory pipeline that executes a dataflow to retrieve the maximum value of the OrderID column in the destination lakehouse

D.

an Azure Data Factory pipeline that executes a Stored procedure activity to retrieve the minimum value of the OrderID column in the destination lakehouse
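
All four options describe a watermark-based incremental refresh: look up a boundary value of OrderID in the destination, then load only the rows beyond it. Purely as illustrative context (the table and column names below are assumptions, and spark is the session a Fabric notebook provides), a minimal PySpark sketch of that pattern looks like this:

from pyspark.sql import functions as F

# Look up the current high-water mark in the destination lakehouse table.
# "Orders" and "OrderID" are hypothetical names used for illustration only.
destination = spark.read.table("Orders")
watermark = destination.agg(F.max("OrderID").alias("max_id")).collect()[0]["max_id"] or 0

# Load only the source rows that arrived after the last refresh and append them.
source = spark.read.table("staging_orders")            # hypothetical staging table
new_rows = source.filter(F.col("OrderID") > watermark)
new_rows.write.format("delta").mode("append").saveAsTable("Orders")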

Question 3

Which workspace role assignments should you recommend for ResearchReviewersGroup1 and ResearchReviewersGroup2? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 4

You need to recommend a solution to group the Research division workspaces.

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 5

You need to ensure that Contoso can use version control to meet the data analytics requirements and the general requirements. What should you do?

Options:

A.

Store all the semantic models and reports in Data Lake Gen2 storage.

B.

Modify the settings of the Research workspaces to use a GitHub repository.

C.

Store all the semantic models and reports in Microsoft OneDrive.

D.

Modify the settings of the Research division workspaces to use an Azure Repos repository.

Question 6

You need to recommend which type of Fabric capacity SKU meets the data analytics requirements for the Research division. What should you recommend?

Options:

A.

EM

B.

F

C.

P

D.

A

Question 7

Which syntax should you use in a notebook to access the Research division data for Productline1?

A)

B)

C)

D)

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D
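
The answer choices for this item are exhibit images that are not reproduced in the demo. As general context only (the workspace, lakehouse, and table names below are assumptions, not the exam's exhibits), a Fabric notebook typically reads lakehouse data with Spark syntax along these lines:

# Read a table from the notebook's attached lakehouse (hypothetical table name).
df = spark.read.table("Productline1")

# Or address the table through an explicit OneLake path (hypothetical workspace
# and lakehouse names).
df = spark.read.format("delta").load(
    "abfss://Research@onelake.dfs.fabric.microsoft.com/Productline1.Lakehouse/Tables/Productline1"
)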

Question 8

You need to migrate the Research division data for Productline2. The solution must meet the data preparation requirements. How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:
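
The code to complete is an exhibit that is not included in the demo. As an illustrative sketch only (the paths and names are hypothetical, not the exam's code), moving source files into a lakehouse Delta table with PySpark generally has this shape:

# Hypothetical example: load Productline2 source files and save them as a Delta table.
source_df = spark.read.format("parquet").load("Files/productline2/")   # assumed source path

(source_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("Productline2"))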

Question 9

You need to create a DAX measure to calculate the average overall satisfaction score.

How should you complete the DAX code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 10

You need to implement the date dimension in the data store. The solution must meet the technical requirements.

What are two ways to achieve the goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

Options:

A.

Populate the date dimension table by using a dataflow.

B.

Populate the date dimension table by using a Stored procedure activity in a pipeline.

C.

Populate the date dimension view by using T-SQL.

D.

Populate the date dimension table by using a Copy activity in a pipeline.

Question 11

You have a Fabric tenant that contains customer churn data stored as Parquet files in OneLake. The data contains details about customer demographics and product usage.

You create a Fabric notebook to read the data into a Spark DataFrame. You then create column charts in the notebook that show the distribution of retained customers as compared to lost customers based on geography, the number of products purchased, age, and customer tenure.

Which type of analytics are you performing?

Options:

A.

prescriptive

B.

diagnostic

C.

descriptive

D.

predictive
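
As a minimal sketch of the kind of notebook code the scenario describes (the file path and column names are assumptions, and spark is the built-in session of a Fabric notebook):

import matplotlib.pyplot as plt

# Read the Parquet churn files from OneLake (path and columns are assumed for illustration).
churn = spark.read.parquet("Files/customer_churn/")

# Count retained vs. lost customers by geography and chart the distribution.
counts = churn.groupBy("Geography", "Churned").count().toPandas()
counts.pivot(index="Geography", columns="Churned", values="count").plot(kind="bar")
plt.show()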

Question 12

You have a Fabric tenant that contains a warehouse. The warehouse uses row-level security (RLS). You create a Direct Lake semantic model that uses the Delta tables and RLS of the warehouse. When users interact with a report built from the model, which mode will be used by the DAX queries?

Options:

A.

DirectQuery

B.

Dual

C.

Direct Lake

D.

Import

Question 13

You have a Fabric tenant that contains a lakehouse.

You plan to query sales data files by using the SQL endpoint. The files will be in an Amazon Simple Storage Service (Amazon S3) storage bucket.

You need to recommend which file format to use and where to create a shortcut.

Which two actions should you include in the recommendation? Each correct answer presents part of the solution.

NOTE: Each correct answer is worth one point.

Options:

A.

Create a shortcut in the Files section.

B.

Use the Parquet format.

C.

Use the CSV format.

D.

Create a shortcut in the Tables section.

E.

Use the Delta format.

Question 14

You have a Fabric tenant that contains a Microsoft Power BI report named Report1. Report1 includes a Python visual. Data displayed by the visual is grouped automatically, and duplicate rows are NOT displayed. You need all rows to appear in the visual. What should you do?

Options:

A.

Reference the columns in the Python code by index.

B.

Modify the Sort Column By property for all columns.

C.

Add a unique field to each row.

D.

Modify the Summarize By property for all columns.
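
For context on why rows disappear: the Python visual receives the selected fields as a pandas DataFrame and drops duplicate rows before the script runs, so only rows made distinct (for example, by a unique index field) all survive. A small self-contained sketch that simulates this behavior with hypothetical data:

import pandas as pd
import matplotlib.pyplot as plt

# In a Power BI Python visual the fields arrive as a DataFrame commonly named
# "dataset", with duplicates removed before the script runs. Simulated here:
dataset = pd.DataFrame({"Category": ["A", "A", "B"], "Sales": [10, 10, 20]})
dataset = dataset.drop_duplicates()   # mirrors the de-duplication the visual applies

plt.bar(dataset["Category"], dataset["Sales"])
plt.show()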

Question 15

You have a Fabric tenant that uses a Microsoft Power BI Premium capacity. You need to enable scale-out for a semantic model. What should you do first?

Options:

A.

At the semantic model level, set Large dataset storage format to Off.

B.

At the tenant level, set Create and use Metrics to Enabled.

C.

At the semantic model level, set Large dataset storage format to On.

D.

At the tenant level, set Data Activator to Enabled.

Question 16

You have a Fabric tenant named Tenant1 that contains a workspace named WS1. WS1 uses a capacity named C1 and contains a dataset named DS1. You need to ensure that read-write access to DS1 is available by using the XMLA endpoint. What should be modified first?

Options:

A.

the DS1 settings

B.

the WS1 settings

C.

the C1 settings

D.

the Tenant1 settings

Question 17

You have a Fabric tenant that contains a complex semantic model. The model is based on a star schema and contains many tables, including a fact table named Sales. You need to create a diagram of the model. The diagram must contain only the Sales table and related tables. What should you use from Microsoft Power BI Desktop?

Options:

A.

data categories

B.

Data view

C.

Model view

D.

DAX query view

Question 18

You have a Fabric notebook that has the Python code and output shown in the following exhibit.

Which type of analytics are you performing?

Options:

A.

predictive

B.

descriptive

C.

prescriptive

D.

diagnostic

Question 19

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a Fabric tenant that contains a semantic model named Model1.

You discover that the following query performs slowly against Model1.

You need to reduce the execution time of the query.

Solution: You replace line 4 by using the following code:

Does this meet the goal?

Options:

A.

Yes

B.

No

Question 20

You have a Fabric tenant that contains a new semantic model in OneLake.

You use a Fabric notebook to read the data into a Spark DataFrame.

You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.

Solution: You use the following PySpark expression:

df.explain()

Does this meet the goal?

Options:

A.

Yes

B.

No
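
For context on what the proposed expression returns: df.explain() prints the query execution plan rather than column statistics, whereas PySpark's summary (or describe) function is the kind of call that yields min, max, mean, and standard deviation per column. A small self-contained sketch with hypothetical data:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("A", 10.0), ("B", 12.5)], ["name", "score"])

df.explain()                                        # prints the physical plan only
df.summary("min", "max", "mean", "stddev").show()   # per-column statistics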
