DP-300 dumps

 Question #1 Topic 1

You have 20 Azure SQL databases provisioned by using the vCore purchasing model.

You plan to create an Azure SQL Database elastic pool and add the 20 databases.

Which three metrics should you use to size the elastic pool to meet the demands of your workload? Each correct answer presents part of the

solution.

NOTE: Each correct selection is worth one point.

A. total size of all the databases

B. geo-replication support

C. number of concurrently peaking databases * peak CPU utilization per database

D. maximum number of concurrent sessions for all the databases

E. total number of databases * average CPU utilization per database

Correct Answer: ACE 

CE: Estimate the vCores needed for the pool as follows:

For vCore-based purchasing model: MAX(<Total number of DBs X average vCore utilization per DB>, <Number of concurrently peaking DBs X Peak vCore utilization per DB>)

A: Estimate the storage space needed for the pool by adding the number of bytes needed for all the databases in the pool.

Reference:

https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-pool-overview
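As a worked illustration of that formula (all numbers here are hypothetical), the following T-SQL estimates the pool size for 20 databases that average 1 vCore each, with at most 5 databases peaking concurrently at 3 vCores each:

-- Hypothetical sizing inputs; replace with utilization values observed for your workload.
DECLARE @TotalDbs int = 20,
        @AvgVcorePerDb decimal(5,2) = 1.0,
        @PeakingDbs int = 5,
        @PeakVcorePerDb decimal(5,2) = 3.0;

-- MAX(20 x 1.0, 5 x 3.0) = 20 vCores for the pool
SELECT CEILING(IIF(@TotalDbs * @AvgVcorePerDb >= @PeakingDbs * @PeakVcorePerDb,
                   @TotalDbs * @AvgVcorePerDb,
                   @PeakingDbs * @PeakVcorePerDb)) AS EstimatedPoolVcores;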






Correct Answer:
Step 1: Attach the SSISDB database.
Step 2: Turn on the TRUSTWORTHY property and the CLR property. If you are restoring the SSISDB database to a SQL Server instance where the SSISDB catalog was never created, enable common language runtime (clr).
Step 3: Open the master key for the SSISDB database. Restore the master key by this method if you have the original password that was used to create SSISDB: open master key decryption by password = 'LS1Setup!' --'Password used when creating SSISDB' Alter Master Key Add encryption by Service Master Key
Step 4: Encrypt a copy of the master key by using the service master key.
Reference: https://docs.microsoft.com/en-us/sql/integration-services/backup-restore-and-move-the-ssis-catalog
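Collected as runnable T-SQL, steps 2 through 4 look roughly like the following sketch; the sp_configure and TRUSTWORTHY statements are an assumed rendering of step 2, and the password is the placeholder from the walkthrough above:

EXEC sp_configure 'clr enabled', 1;             -- step 2: enable common language runtime
RECONFIGURE;
GO
ALTER DATABASE SSISDB SET TRUSTWORTHY ON;       -- step 2: turn on the TRUSTWORTHY property
GO
USE SSISDB;
GO
-- step 3: open the master key with the password that was used when SSISDB was created
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'LS1Setup!';
-- step 4: encrypt a copy of the master key by using the service master key
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;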



You have an Azure SQL database that contains a table named factSales. FactSales contains the columns shown in the following table.




 FactSales has 6 billion rows and is loaded nightly by using a batch process. You must provide the greatest reduction in space for the database and maximize performance. 

Which type of compression provides the greatest space reduction for the database? 

A. page compression

 B. row compression 

C. columnstore compression

 D. columnstore archival compression

Correct Answer: D
Columnstore tables and indexes are always stored with columnstore compression. You can further reduce the size of columnstore data by configuring an additional compression called archival compression.
Note: Columnstore - The columnstore index is also logically organized as a table with rows and columns, but the data is physically stored in a column-wise data format.
Incorrect Answers:
B: Rowstore - The rowstore index is the traditional style that has been around since the initial release of SQL Server.

 For rowstore tables and indexes, use the data compression feature to help reduce the size of the database.

 Reference: https://docs.microsoft.com/en-us/sql/relational-databases/data-compression/data-compression
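For example, assuming factSales already has a clustered columnstore index (the index name below is hypothetical), archival compression can be applied with a rebuild:

-- Rebuild the columnstore index with archival compression to maximize space savings.
ALTER INDEX cci_factSales ON dbo.factSales
REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);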


You have a Microsoft SQL Server 2019 database named DB1 that uses the following database-level and instance-level features:
✑ Clustered columnstore indexes
✑ Automatic tuning
✑ Change tracking
✑ PolyBase
You plan to migrate DB1 to an Azure SQL database. What feature should be removed or replaced before DB1 can be migrated?

A. Clustered columnstore indexes 

B. PolyBase

 C. Change tracking 

D. Automatic tuning

Correct Answer: B
PolyBase is not supported in Azure SQL Database, so it must be removed or replaced before DB1 can be migrated.


You have a Microsoft SQL Server 2019 instance in an on-premises datacenter. The instance contains a 4-TB database named DB1. You plan to migrate DB1 to an Azure SQL Database managed instance. What should you use to minimize downtime and data loss during the migration?

 A. distributed availability groups

 B. database mirroring

 C. Always On Availability Group 

D. Azure Database Migration Service

Correct Answer: D
Azure Database Migration Service supports online migrations to Azure SQL Managed Instance, which minimizes downtime and data loss.


























You are designing a streaming data solution that will ingest variable volumes of data. You need to ensure that you can change the partition count after creation. Which service should you use to ingest the data?
 A. Azure Event Hubs Standard 
B. Azure Stream Analytics 
C. Azure Data Factory
D. Azure Event Hubs Dedicated

Correct Answer: D
The partition count for an event hub in a dedicated Event Hubs cluster can be increased after the event hub has been created.
Incorrect Answers:
A: For Azure Event Hubs Standard, the partition count isn't changeable, so you should consider long-term scale when setting the partition count.

 Reference: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#partitions















You have an Azure Synapse Analytics Apache Spark pool named Pool1. You plan to load JSON files from an Azure Data Lake Storage Gen2 container into the tables in Pool1. The structure and data types vary by file. You need to load the files into the tables. The solution must maintain the source data types. What should you do? 
A. Load the data by using PySpark. 
B. Load the data by using the OPENROWSET Transact-SQL command in an Azure Synapse Analytics serverless SQL pool.
 C. Use a Get Metadata activity in Azure Data Factory.
 D. Use a Conditional Split transformation in an Azure Synapse data flow. 
Correct Answer: B
Serverless SQL pool can automatically synchronize metadata from Apache Spark. A serverless SQL pool database will be created for each database existing in serverless Apache Spark pools. Serverless SQL pool enables you to query data in your data lake. It offers a T-SQL query surface area that accommodates semi-structured and unstructured data queries. To support a smooth experience for in-place querying of data that's located in Azure Storage files, serverless SQL pool uses the OPENROWSET function with additional capabilities. The easiest way to see the content of your JSON file is to provide the file URL to the OPENROWSET function and specify CSV FORMAT.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-json-files https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-data-storage
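A minimal sketch of that pattern in a serverless SQL pool, assuming a hypothetical storage account, container path, and JSON properties:

-- Each JSON document is read into a single NVARCHAR(MAX) column and parsed with JSON_VALUE.
SELECT TOP 10
    JSON_VALUE(doc, '$.id')   AS id,
    JSON_VALUE(doc, '$.name') AS name
FROM OPENROWSET(
        BULK 'https://mystorageaccount.dfs.core.windows.net/container1/files/sample.json',
        FORMAT = 'CSV',
        FIELDTERMINATOR = '0x0b',
        FIELDQUOTE = '0x0b'
     )
     WITH (doc NVARCHAR(MAX)) AS rows;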



You are designing a date dimension table in an Azure Synapse Analytics dedicated SQL pool. The date dimension table will be used by all the fact tables. Which distribution type should you recommend to minimize data movement? 
A. HASH
 B. REPLICATE
 C. ROUND_ROBIN

Correct Answer: B
A replicated table stores a full copy of the data on every Compute node, so joins between the fact tables and the date dimension require no data movement.
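A minimal sketch of a replicated date dimension in a dedicated SQL pool (the column list is illustrative):

CREATE TABLE dbo.DimDate
(
    DateKey   int          NOT NULL,
    FullDate  date         NOT NULL,
    MonthName nvarchar(20) NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,    -- a full copy is cached on every Compute node
    CLUSTERED INDEX (DateKey)
);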



























You are designing an anomaly detection solution for streaming data from an Azure IoT hub. The solution must meet the following requirements:
✑ Send the output to Azure Synapse.
✑ Identify spikes and dips in time series data.
✑ Minimize development and configuration effort.
Which should you include in the solution?

A. Azure SQL Database
B. Azure Databricks
C. Azure Stream Analytics

Correct Answer: C
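A hedged sketch of the built-in spike-and-dip function in a Stream Analytics query; the input and output names and the temperature column are assumptions for illustration:

WITH AnomalyDetectionStep AS
(
    SELECT
        EventEnqueuedUtcTime AS [time],
        CAST(temperature AS float) AS temp,
        AnomalyDetection_SpikeAndDip(CAST(temperature AS float), 95, 120, 'spikesanddips')
            OVER (LIMIT DURATION(second, 120)) AS SpikeAndDipScores
    FROM iothubinput
)
SELECT
    [time],
    temp,
    CAST(GetRecordPropertyValue(SpikeAndDipScores, 'Score') AS float) AS SpikeAndDipScore,
    CAST(GetRecordPropertyValue(SpikeAndDipScores, 'IsAnomaly') AS bigint) AS IsSpikeAndDipAnomaly
INTO synapseoutput
FROM AnomalyDetectionStep;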



You are creating a new notebook in Azure Databricks that will support R as the primary language but will also support Scala and SQL. Which switch should you use to switch between languages? 

A. \\[]
 B. % 
C. \\[
D. @
Correct Answer: B 

You can override the default language by specifying the language magic command %<language> at the beginning of a cell. The supported magic commands are: %python, %r, %scala, and %sql.


 Reference: https://docs.microsoft.com/en-us/azure/databricks/notebooks/notebooks-use










You plan to build a structured streaming solution in Azure Databricks. The solution will count new events in five-minute intervals and report only events that arrive during the interval. The output will be sent to a Delta Lake table. Which output mode should you use? 
A. complete
B. append
C. update

Correct Answer: A
Complete mode: You can use Structured Streaming to replace the entire table with every batch.
Incorrect Answers:
B: By default, streams run in append mode, which adds new records to the table.

Reference: https://docs.databricks.com/delta/delta-streaming.html












You have a SQL pool in Azure Synapse that contains a table named dbo.Customers. The table contains a column name Email. You need to prevent nonadministrative users from seeing the full email addresses in the Email column. The users must see values in a format of aXXX@XXXX.com instead. What should you do? 
A. From the Azure portal, set a mask on the Email column.
 B. From the Azure portal, set a sensitivity classification of Confidential for the Email column. 
C. From Microsoft SQL Server Management Studio, set an email mask on the Email column. 
D. From Microsoft SQL Server Management Studio, grant the SELECT permission to the users for all the columns in the dbo.Customers table except Email. 
Correct Answer: A
The Email masking method exposes the first letter and replaces the domain with XXX.com, using a constant string prefix in the form of an email address. Example: aXX@XXXX.com
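Setting the mask from the portal corresponds to applying the built-in email() masking function, which can also be expressed in T-SQL against the table from the question:

-- Users without the UNMASK permission see masked values in the Email column.
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');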




You have an Azure Databricks workspace named workspace1 in the Standard pricing tier. Workspace1 contains an all-purpose cluster named cluster1. You need to reduce the time it takes for cluster1 to start and scale up. The solution must minimize costs. What should you do first? 

A. Upgrade workspace1 to the Premium pricing tier.
B. Configure a global init script for workspace1.
C. Create a pool in workspace1.
D. Create a cluster policy in workspace1.

Correct Answer: C
You can use Databricks pools to speed up your data pipelines and scale clusters quickly. A Databricks pool is a managed cache of virtual machine instances that enables clusters to start and scale 4 times faster.

Reference: https://databricks.com/blog/2019/11/11/databricks-pools-speed-up-data-pipelines.html



Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1. You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1. You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1. You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.

Solution: In an Azure Synapse Analytics pipeline, you use a Get Metadata activity that retrieves the DateTime of the files.

Does this meet the goal?
A. Yes 
B. No 

Correct Answer: B
Instead use a serverless SQL pool to create an external table with the extra column.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/create-use-external-tables





Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1. You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1. You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1. You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.

Solution: You use an Azure Synapse Analytics serverless SQL pool to create an external table that has an additional DateTime column.

Does this meet the goal?

A. Yes
B. No

Correct Answer: A
In dedicated SQL pools you can only use Parquet native external tables. Native external tables are generally available in serverless SQL pools.


Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/create-use-external-tables
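One hedged way to render that in a serverless SQL pool is a CETAS statement that appends a load DateTime column; the external data source, file format, and object names below are assumptions for illustration:

CREATE EXTERNAL TABLE dbo.Table1_Staging
WITH
(
    LOCATION    = 'staged/',
    DATA_SOURCE = container1_datasource,
    FILE_FORMAT = parquet_fileformat
)
AS
SELECT
    src.*,
    GETUTCDATE() AS LoadDateTime    -- the additional DateTime column stored alongside the file data
FROM OPENROWSET(
        BULK 'ingest/*.parquet',
        DATA_SOURCE = 'container1_datasource',
        FORMAT = 'PARQUET'
     ) AS src;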




Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1. You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1. You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1. You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.

Solution: You use a dedicated SQL pool to create an external table that has an additional DateTime column.

Does this meet the goal?

A. Yes 
B. No

Correct Answer: B
Instead use a serverless SQL pool to create an external table with the extra column.
Note: In dedicated SQL pools you can only use Parquet native external tables. Native external tables are generally available in serverless SQL pools.

 Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/create-use-external-tables











You plan to deploy an app that includes an Azure SQL database and an Azure web app. The app has the following requirements:
✑ The web app must be hosted on an Azure virtual network.
✑ The Azure SQL database must be assigned a private IP address.
✑ The Azure SQL database must allow connections only from the virtual network.
You need to recommend a solution that meets the requirements. What should you include in the recommendation?

A. Azure Private Link
B. a network security group (NSG)
C. a database-level firewall
D. a server-level firewall

Correct Answer: A

Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/private-endpoint-overview




You are planning a solution that will use Azure SQL Database. Usage of the solution will peak from October 1 to January 1 each year. During peak usage, the database will require the following:
✑ 24 cores
✑ 500 GB of storage
✑ 124 GB of memory
✑ More than 50,000 IOPS
During periods of off-peak usage, the service tier of Azure SQL Database will be set to Standard. Which service tier should you use during peak usage?

A. Business Critical
B. Premium
C. Hyperscale

Correct Answer: A  


Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-vcore-single-databases#business-critical---provisioned-compute---
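For context, moving the database back up to Business Critical with 24 vCores for the peak season could look like this hedged T-SQL (the database name is a placeholder; the same change can be made in the Azure portal):

ALTER DATABASE [SalesDb]
MODIFY (EDITION = 'BusinessCritical', SERVICE_OBJECTIVE = 'BC_Gen5_24');   -- 24 vCores on Gen5 hardware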



 


 





















 








