Topic: DP-600 topic 1 question 72

HOTSPOT
-

You have an Azure Data Lake Storage Gen2 account named storage1 that contains a Parquet file named sales.parquet.

You have a Fabric tenant that contains a workspace named Workspace1.

Using a notebook in Workspace1, you need to load the content of the file to the default lakehouse. The solution must ensure that the content will display automatically as a table named Sales in Lakehouse explorer.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Re: DP-600 topic 1 question 72

delta
sales

Re: DP-600 topic 1 question 72

Why? Please provide some explanation.

Re: DP-600 topic 1 question 72

I think it should only be sales, because when saveAsTable is used, the argument should be just the table name. Link: https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-notebook-load-data.
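For reference, a minimal sketch of the completed write call (the DataFrame name df is assumed); saveAsTable takes only the table name, with no "Tables/" prefix:

# df is the DataFrame read from sales.parquet; the argument is just the table name
df.write.mode("overwrite").format("delta").saveAsTable("sales")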

Re: DP-600 topic 1 question 72

saveAsTable takes the table name directly, so you don't need to add a Tables/ prefix to the name.

Re: DP-600 topic 1 question 72

delta & sales.
The other options do not work in the context of the given code fragments (e.g. Files/sales is for external tables, but the required path parameter is missing here). See the sketch below.
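To illustrate the contrast (a sketch with an assumed DataFrame name df): save("Files/...") needs an explicit path and only writes files to the Files section, while saveAsTable registers a managed Delta table that shows up under Tables:

# Writes plain Parquet files under Files/sales; nothing appears under Tables
df.write.mode("overwrite").format("parquet").save("Files/sales")

# Registers a managed Delta table that Lakehouse explorer lists under Tables as "sales"
df.write.mode("overwrite").format("delta").saveAsTable("sales")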

Re: DP-600 topic 1 question 72

delta
tables/sales

Re: DP-600 topic 1 question 72

"The solution must ensure that the content will display automatically as a table named Sales in Lakehouse explorer." - so only Delta in Tables section, otherwise table won`t be displayed automatically.
Delta, Tables.

Re: DP-600 topic 1 question 72

I missed that it uses saveAsTable, so "Tables/" is not needed; it's already implied by the save function.
So only:
1. Delta
2. sales.

Re: DP-600 topic 1 question 72

A) delta
B) Tables/sales --> The solution must ensure that the content will display automatically as a table named Sales in Lakehouse explorer.

Re: DP-600 topic 1 question 72

Load data with an Apache Spark API
In the code cell of the notebook, use the following code example to read data from the source and load it into Files, Tables, or both sections of your lakehouse.

To specify the location to read from, you can use the relative path if the data is from the default lakehouse of the current notebook, or you can use the absolute ABFS path if the data is from another lakehouse. You can copy this path from the context menu of the data.

(Screenshot: the context menu showing the copy-path options.)

Copy ABFS path: this returns the absolute path of the file.

Copy relative path for Spark: this returns the relative path of the file in the default lakehouse.

Python

df = spark.read.parquet("location to read from")

# Keep it if you want to save dataframe as CSV files to Files section of the default Lakehouse
df.write.mode("overwrite").format("csv").save("Files/" + csv_table_name)

# Keep it if you want to save dataframe as Parquet files to Files section of the default Lakehouse
df.write.mode("overwrite").format("parquet").save("Files/" + parquet_table_name)

# Keep it if you want to save dataframe as a delta lake, parquet table to Tables section of the default Lakehouse
df.write.mode("overwrite").format("delta").saveAsTable(delta_table_name)

# Keep it if you want to save the dataframe as a delta lake, appending the data to an existing table
df.write.mode("append").format("delta").saveAsTable(delta_table_name)

Load data with a Pandas API
To support the Pandas API, the default lakehouse is automatically mounted to the notebook. The mount point is '/lakehouse/default/'. You can use this mount point to read/write data from/to the default lakehouse. The "Copy File API Path" option from the context menu returns the File API path from that mount point. The path returned from the "Copy ABFS path" option also works for the Pandas API.
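
As a quick illustration of that mount point (a sketch; the file name and location are assumed):

import pandas as pd

# Read a Parquet file from the Files section of the default lakehouse via the mount point
df = pd.read_parquet("/lakehouse/default/Files/sales.parquet")

# Write a copy back to the Files section through the same mount point
df.to_parquet("/lakehouse/default/Files/sales_copy.parquet")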