DBS-C01 Online Practice Questions and Answers

Question 4

A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that users of the RDS for SQL Server DB instance be authenticated with on-premises Microsoft Active Directory credentials.

Which combination of steps should a database specialist take to meet this requirement? (Choose three.)

A. Extend the on-premises Active Directory to AWS by using AD Connector.

B. Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

C. Create a directory by using AWS Directory Service for Microsoft Active Directory.

D. Create an Active Directory domain controller on Amazon EC2.

E. Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

F. Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory.
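
For context, the resources described in options C and F can be provisioned programmatically. The boto3 sketch below is a minimal illustration of creating an AWS Managed Microsoft AD directory and a one-way forest trust; every name, password, and VPC identifier is a placeholder, not a value from the question.

```python
import boto3

ds = boto3.client("ds", region_name="us-east-1")

# Create an AWS Managed Microsoft AD directory (AWS Directory Service).
directory = ds.create_microsoft_ad(
    Name="corp.example.com",                  # placeholder directory DNS name
    Password="PLACEHOLDER-StrongPassword1!",  # placeholder admin password
    Edition="Standard",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
)

# Establish a one-way outgoing forest trust toward the on-premises forest.
ds.create_trust(
    DirectoryId=directory["DirectoryId"],
    RemoteDomainName="onprem.example.com",    # placeholder on-premises domain
    TrustPassword="PLACEHOLDER-TrustPassword1!",
    TrustDirection="One-Way: Outgoing",
    TrustType="Forest",
)
```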

Question 5

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

Which actions can a database specialist perform to resolve this issue? (Choose two.)

A. Restart the application tool used to execute queries.

B. Change to a database instance class with higher throughput.

C. Convert from Single-AZ to Multi-AZ.

D. Increase the I/O parameter in Amazon RDS Enhanced Monitoring.

E. Convert from General Purpose to Provisioned IOPS (PIOPS).
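
If the storage conversion in option E (or the instance-class change in option B) were chosen, it could be applied with a single boto3 call along these lines; the instance identifier, IOPS value, and instance class are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Convert storage from General Purpose to Provisioned IOPS and move to a
# larger instance class in one modification; values are illustrative only.
rds.modify_db_instance(
    DBInstanceIdentifier="dev-mysql-instance",
    StorageType="io1",
    Iops=3000,
    DBInstanceClass="db.m5.2xlarge",
    ApplyImmediately=True,
)
```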

Question 6

A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.

Which approach should the Database Specialist take?

A. Dump all the tables from the Oracle database into an Amazon S3 bucket using Oracle Data Pump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.

B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.

C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.

D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.
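
Where AWS DMS is involved (option C), the source and target databases are modeled as endpoints. A minimal boto3 sketch, with all hostnames and credentials as placeholders:

```python
import boto3

dms = boto3.client("dms")

# Source endpoint: the on-premises Oracle database.
dms.create_endpoint(
    EndpointIdentifier="oracle-source",
    EndpointType="source",
    EngineName="oracle",
    ServerName="onprem-oracle.example.com",
    Port=1521,
    Username="dms_user",
    Password="PLACEHOLDER",
    DatabaseName="ORCL",
)

# Target endpoint: the Aurora MySQL DB cluster ("aurora" is the DMS engine
# name for Aurora MySQL targets).
dms.create_endpoint(
    EndpointIdentifier="aurora-mysql-target",
    EndpointType="target",
    EngineName="aurora",
    ServerName="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    Username="admin",
    Password="PLACEHOLDER",
)
```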

Question 7

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.

What should a database specialist do to accommodate the changing requirements for DAX?

A. Increase the number of nodes in the existing DAX cluster.

B. Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.

C. Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.

D. Modify the node type in the existing DAX cluster.
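
Both scaling directions mentioned in the options have direct API support. A boto3 sketch; cluster names, node types, and the role ARN are placeholders:

```python
import boto3

dax = boto3.client("dax")

# Option C: a new cluster on a larger node type, to be swapped in by
# repointing the application's DAX endpoint.
dax.create_cluster(
    ClusterName="catalog-dax-large",
    NodeType="dax.r5.2xlarge",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::123456789012:role/DaxAccessRole",
)

# Option A: add nodes to the existing cluster in place.
dax.increase_replication_factor(
    ClusterName="catalog-dax",
    NewReplicationFactor=5,
)
```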

Question 8

A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has the publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.

Only certain on-premises corporate network IPs should be able to connect to the DB instance. Connectivity is allowed from the corporate network only.

Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.

B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.

C. Move the DB instance to a private subnet using AWS DMS.

D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.

E. Disable the publicly accessible setting.

F. Connect to the DB instance using private IPs and a VPN.
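
The security group and instance changes in options B and E map to standard API calls. A boto3 sketch; the group ID, CIDR ranges, and instance identifier are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

SG = "sg-0123456789abcdef0"

# Remove the open PostgreSQL rule, then allow only the corporate range.
ec2.revoke_security_group_ingress(
    GroupId=SG, IpProtocol="tcp", FromPort=5432, ToPort=5432, CidrIp="0.0.0.0/0"
)
ec2.authorize_security_group_ingress(
    GroupId=SG, IpProtocol="tcp", FromPort=5432, ToPort=5432, CidrIp="203.0.113.0/24"
)

# Turn off the public endpoint so only private connectivity remains.
rds.modify_db_instance(
    DBInstanceIdentifier="user-data-postgres",
    PubliclyAccessible=False,
    ApplyImmediately=True,
)
```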

Question 9

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
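
The CloudWatch Logs route in option B takes two calls: one to publish the engine logs and one to set retention. A boto3 sketch; the instance identifier, log types, and log group name are placeholders (available log types vary by engine):

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Publish the MySQL engine logs to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql",
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"]
    },
    ApplyImmediately=True,
)

# Expire events in each exported log group after 90 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/app-mysql/error",
    retentionInDays=90,
)
```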

Question 10

A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.

What is the FASTEST way to accomplish this?

A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.

B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.

C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.

D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.
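
For the replica approach in option D, an Aurora PostgreSQL cluster can replicate from the RDS instance and later be promoted at cutover. A boto3 sketch with placeholder identifiers and ARN:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL cluster that replicates from the RDS instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-target",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:source-postgres"
    ),
)

# At cutover, promote the replica cluster to a standalone Aurora cluster.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-target")
```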

Question 11

A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify:

The networks and routes affected if a particular component fails.

The networks that have redundant routes between them.

The networks that do not have redundant routes between them.

The fastest path between two networks.

Which database engine meets these requirements?

A. Amazon Aurora MySQL

B. Amazon Neptune

C. Amazon ElastiCache for Redis

D. Amazon DynamoDB
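
The requirements (failure impact, redundancy, path finding across billions of connected components) describe graph traversals. As a sketch of what such a query looks like in Gremlin, the traversal language Amazon Neptune supports, using the gremlinpython client; the endpoint, vertex IDs, and the "route" edge label are assumptions for illustration:

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Placeholder Neptune cluster endpoint.
conn = DriverRemoteConnection(
    "wss://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)
g = traversal().withRemote(conn)

# Find a path between two networks along "route" edges.
path = (
    g.V("network-a")
    .repeat(__.out("route").simplePath())
    .until(__.hasId("network-b"))
    .path()
    .limit(1)
    .next()
)
print(path)
conn.close()
```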

Question 12

A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover.

Which solution on AWS will meet these requirements with the LEAST operational overhead?

A. Deploy an Amazon RDS DB instance with a read replica.

B. Deploy an Amazon RDS Multi-AZ DB instance.

C. Deploy Amazon DynamoDB global tables.

D. Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured.
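
For reference, a Multi-AZ deployment such as the one in option B is a single create-time (or modify-time) flag. A boto3 sketch; every identifier and size is a placeholder:

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ provisions a synchronous standby in another AZ with automatic failover.
rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="PLACEHOLDER",
    MultiAZ=True,
)
```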

Question 13

An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream.

Which solution will meet these requirements?

A. Amazon Aurora MySQL Multi-AZ DB cluster

B. Amazon Keyspaces (for Apache Cassandra)

C. Amazon DynamoDB table with DynamoDB auto scaling

D. Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone
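
A near-real-time change stream on a DynamoDB table (option C) is enabled through the table's stream specification. A boto3 sketch with a placeholder table name and key schema:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# The stream exposes item-level changes in near real time; auto scaling of
# the provisioned WCUs/RCUs is configured separately via Application Auto Scaling.
dynamodb.create_table(
    TableName="bids",
    AttributeDefinitions=[{"AttributeName": "bid_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "bid_id", "KeyType": "HASH"}],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```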

Question 14

A company has an application that uses an Amazon DynamoDB table as its data store. During normal business days, the throughput requirements from the application are uniform and consist of 5 standard write calls per second to the DynamoDB table. Each write call has 2 KB of data.

For 1 hour each day, the company runs an additional automated job on the DynamoDB table that makes 20 write requests per second. No other application writes to the DynamoDB table. The DynamoDB table does not have to meet any additional capacity requirements.

How should a database specialist configure the DynamoDB table's capacity to meet these requirements MOST cost-effectively?

A. Use DynamoDB provisioned capacity with 5 WCUs and auto scaling.

B. Use DynamoDB provisioned capacity with 5 WCUs and a write-through cache that DynamoDB Accelerator (DAX) provides.

C. Use DynamoDB provisioned capacity with 10 WCUs and auto scaling.

D. Use DynamoDB provisioned capacity with 10 WCUs and no auto scaling.
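
The arithmetic behind these options: one WCU covers one standard write per second for an item of up to 1 KB, and larger items consume one WCU per full or partial KB. A quick worked calculation:

```python
import math

ITEM_KB = 2                                  # each write call carries 2 KB
wcus_per_write = math.ceil(ITEM_KB / 1.0)    # 2 WCUs per write

steady_state = 5 * wcus_per_write            # 5 writes/s  -> 10 WCUs
batch_job = 20 * wcus_per_write              # 20 writes/s -> 40 WCUs for 1 hour/day

print(steady_state, batch_job)               # 10 40
```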

Question 15

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.

The company recently moved two databases to Amazon RDS and needs a solution that will satisfy the same requirements. The data must also be available for use by other systems within the company.

Which solution will meet these requirements with minimal effort?

A. Create an Amazon CloudWatch Events rule for the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.

C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to process these logs and write the output to the tracking systems.
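
RDS event subscriptions (option C) publish to an SNS topic that downstream systems can subscribe to. A boto3 sketch; the subscription name, topic ARN, and category list are placeholders (shutdown-type events fall under the "availability" category in RDS):

```python
import boto3

rds = boto3.client("rds")

# Notify an SNS topic on lifecycle events; other systems subscribe to the topic.
rds.create_event_subscription(
    SubscriptionName="db-lifecycle-events",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-tracking",
    SourceType="db-instance",
    EventCategories=["creation", "deletion", "backup", "availability"],
    Enabled=True,
)
```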

Question 16

A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.

Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)

A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.

B. Use Oracle's Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.

C. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.

D. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an Amazon Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.

E. Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
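
The ongoing-replication behavior described in option C corresponds to the DMS migration type full-load-and-cdc. A boto3 sketch; all ARNs are placeholders, and the table mapping simply includes every table:

```python
import json

import boto3

dms = boto3.client("dms")

# Full load followed by continuous change data capture until cutover.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRCXXXX",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGTXXXX",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:REPXXXX",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```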

Question 17

A worldwide digital advertising corporation collects browser information in order to provide targeted visitors with contextually relevant pictures, websites, and links. A single page load may create many events, each of which must be stored separately. A single event may have a maximum size of 200 KB and an average size of 10 KB. Each page load requires a query of the user's browsing history in order to deliver suggestions for targeted advertising.

The advertising corporation anticipates more than 1 billion daily page views from users in the United States, Europe, Hong Kong, and India. The information structure differs according to the event. Additionally, browsing information must be written and read with very low latency to ensure that consumers have a positive viewing experience.

Which database solution satisfies these criteria?

A. Amazon DocumentDB

B. Amazon RDS Multi-AZ deployment

C. Amazon DynamoDB global table

D. Amazon Aurora Global Database
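
Multi-Region replication of a DynamoDB table (option C) is configured per replica Region. A boto3 sketch using the current (version 2019.11.21) global tables API; the table name and Region are placeholders, and each replica is added with its own call:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add one replica Region to an existing table; repeat per additional Region.
dynamodb.update_table(
    TableName="browsing-events",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```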

Question 18

A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization could not be determined from the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.

Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.

B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.

C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.

D. Use Amazon QuickSight to view the SQL statement being run.

E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.
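
Options B and E are both instance-level settings. A boto3 sketch; the instance identifier and monitoring role ARN are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Enable Performance Insights (database load, top SQL, waits) and Enhanced
# Monitoring (1-second OS and process metrics) on the instance.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-sqlserver",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,
    MonitoringInterval=1,
    MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",
    ApplyImmediately=True,
)
```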

Exam Code: DBS-C01
Exam Name: AWS Certified Database - Specialty (DBS-C01)
Last Update: Apr 09, 2024
Questions: 321
