Amazon DAS-C01 Dumps

Exam Code DAS-C01
Exam Name AWS Certified Data Analytics - Specialty
Update Date 25 Nov, 2025
Total Questions 157 Questions & Answers with Explanations
$45

Introduction to Dumpsora Amazon DAS-C01 Practice Tests

When it comes to achieving IT certifications, the journey begins with reliable study materials and effective preparation strategies. At Dumpsora, we provide comprehensive practice tests, detailed study guides, and expert-designed test prep resources tailored for the Amazon DAS-C01 exam. Preparing for certifications can often feel overwhelming, but having access to realistic full-length practice tests and structured learning content makes a significant difference.

Our mission at Dumpsora is simple: to make exam success achievable for everyone. Whether you’re preparing for your test day for the first time or retaking an exam to boost your score, our practice test resources are designed to mirror the real exam environment. This not only improves your confidence but also ensures you are ready for every type of question the exam may present.

Why DAS-C01 Practice Questions and Answers Matter for IT Certifications

Earning an IT certification is not just about theory; it’s about applying your knowledge under timed conditions. That’s why practice questions and answers play such a vital role in exam preparation. Dumpsora’s DAS-C01 practice tests are carefully structured to simulate the actual exam. You’ll encounter multiple-choice questions, scenario-based exercises, and real-world problems that reflect the format you’ll face on test day.

Here’s why Dumpsora’s practice questions and answers matter so much:
  • Real Exam Simulation: Our full-length practice test ensures you become familiar with the exam format. This eliminates surprises during the real exam.
  • Boosting Confidence: Attempting timed practice tests reduces anxiety, helping you walk into the exam center fully prepared.
  • Identifying Weak Areas: With detailed feedback, Dumpsora highlights the topics you need to revisit. This makes your test prep more focused.
  • Effective Learning: Unlike generic resources, our study guide works in tandem with practice questions and answers, ensuring holistic preparation.

For IT professionals, certifications are career-defining. That’s why Dumpsora ensures you not only study but also practice enough to confidently pass your certification on the first attempt.

Why Choose Dumpsora DAS-C01 Study Guide?

There are countless platforms offering IT exam preparation materials, but Dumpsora stands out because of its dedication to quality and learner success. Our study guide is not just another collection of notes—it’s a strategic roadmap that simplifies your learning journey.

Here’s what makes the Dumpsora DAS-C01 study guide different:
  • Organized Content: The guide breaks down complex concepts into easy-to-understand sections, making test prep manageable.
  • Vendor-Specific Focus: Each study guide is tailored to the specific Amazon DAS-C01 exam, ensuring relevance.
  • Combined with Practice Tests: The study guide works hand-in-hand with our practice tests, helping you revise and reinforce knowledge effectively.
  • Updated Material: We regularly update our content to match the latest exam patterns, so you’re always preparing with current information.

Unlike random notes or outdated resources, Dumpsora provides a structured learning path. With the study guide and practice test combination, you’re not just memorizing facts—you’re gaining the confidence to tackle the exam strategically.

Benefits of Using Dumpsora Practice Tests

Choosing Dumpsora’s practice tests for your exam preparation comes with a range of benefits that directly impact your success rate. We don’t just provide questions; we create a complete test prep ecosystem.

  1. Realistic Exam Environment
    Our full-length practice test mirrors the official exam format, ensuring you get accustomed to the pressure and timing. By the time test day arrives, you’ll feel like you’ve already taken the exam multiple times.
  2. Improved Time Management
    Attempting multiple practice tests helps you manage time effectively. You’ll know exactly how long to spend on each question and avoid running out of time.
  3. Measurable Progress
    With Dumpsora’s detailed reporting, you can track your improvements. Each practice test highlights your strengths and weaknesses, allowing you to refine your test prep.
  4. Cost-Effective Preparation
    Failing an IT certification exam can be expensive. By investing in Dumpsora’s practice tests and study guide, you significantly increase your chances of passing on the first try, saving both time and money. Plus, we even offer access to free practice tests, so you can experience our resources before committing.
  5. Confidence on Test Day
    The ultimate benefit is confidence. Walking into the exam center after completing several Dumpsora practice tests means you already know what to expect. This peace of mind is invaluable.

Final Thoughts

Preparing for an IT certification requires more than just reading textbooks. It’s about practice, strategy, and confidence. At Dumpsora, we provide a complete package—study guides, practice tests, full-length practice tests, and practice questions and answers—all designed to make your test prep effective and efficient.

If you’re aiming to pass the AWS Certified Data Analytics - Specialty exam on the very first attempt, Dumpsora is your trusted partner. With our practice tests and study guide, you’ll not only be ready for test day, but you’ll also set yourself up for long-term success in your IT career. And with the support of free online test prep, Dumpsora ensures that high-quality resources are accessible to every learner. Choose Dumpsora. Practice smart. Pass with confidence.

Reviews for Amazon DAS-C01 Exam Dumps
Harish Sethi

QuickSight dashboards and their integration with Redshift were explained clearly, which was tested in my exam.

Lakshmi Prasad

I liked that security and compliance topics were not ignored. Questions about encryption showed up in my test.

Kabir Malhotra

I agree, partitioning in S3 was covered well and very useful in the exam.

Manish Agarwal

Athena query optimization was explained with good detail, especially around partitioning. That came up on my test.

Sneha Iyer

QuickSight SPICE was touched on briefly, not in too much detail.

Sophia Taylor

Streaming vs batch processing concepts were tested, and I was ready thanks to the scenarios in the study material.

James Smith

I had about 4 questions related to Kinesis real-time analytics, so it’s worth preparing.

Arjun Nair

Visualization concepts were surprisingly well covered. I got a couple of QuickSight-related questions.

Jacob Carter

The career benefit of this certification is huge for data engineers and analysts. It definitely boosts credibility.

Rahul Venkatesh

Redshift performance tuning is a must-know, I had two direct questions.

Benjamin King

Does DAS-C01 include a lot of Glue Data Catalog questions, or just a few high-level ones?

Suresh Rao

Cost efficiency strategies for Redshift clusters were very close to the real test.

Ella Turner

Congrats! I also passed using the same material, very accurate to the real test.

Ishita Desai

Do we need to go very deep into EMR cluster setup or just understand basics for the exam?

Divya Krishnan

Athena federation was asked in a case study format.

Olivia Johnson

The focus on security in data pipelines helped me a lot, especially encryption and IAM role design.

Kiran Joshi

S3 partition pruning was covered really well. I had at least one exam question directly on that topic.

Varun Chawla

Visualization was lightly tested for me, only one QuickSight question.

Chloe Phillips

I felt more confident in designing real-time streaming pipelines after going through these practice sets.

Pooja Menon

Kinesis scaling scenarios came up in my exam too.

Samuel Baker

The exam material gave me a structured roadmap and kept me from wasting time on irrelevant services.

Mia Green

Glue Data Catalog integration with Athena and Redshift was explained step by step, which came up in my exam.

Matthew Evans

Partitioning strategies for data lakes are critical. The examples made it easy to answer exam questions correctly.

Henry Adams

The cost optimization part for large-scale analytics workloads was very well explained.

Siddharth Bansal

Security and IAM role design were definitely part of my test too.

George Mitchell

The study material made Kinesis concepts much clearer. Before, scaling streams confused me but now I can explain it easily.

Charlotte Young

Schema-on-read was in my test, glad it was covered in the material.

Rohit Reddy

How many questions in the exam are usually about real-time analytics with Kinesis and Firehose?

Hannah Reed

EMR was tested at a basic level in my case, mostly about node roles.

Daniel Wilson

I was nervous about Redshift performance tuning, but the material made it much easier to understand.

Alexander Wright

Yes, career value is definitely strong, especially for analytics-focused roles.

Vikram Chopra

I passed on my first attempt, and I credit the practice material for making me confident with complex services.

Anjali Gupta

Glue Data Catalog showed up in at least three of my questions.

Ethan Allen

How important is it to know about QuickSight SPICE engine details for the exam?

Neha Verma

The guide gave clear real-world examples for Kinesis Firehose transformations. I actually had a direct question on that.

Michael Brown

The cost efficiency notes saved me during a tough question.

Amit Sharma

Data lake best practices, like schema-on-read, were taught in a very practical way. That came up in my exam too.

Raj Mehta

Redshift Spectrum and how it interacts with external tables was one of the tricky parts, but the guide covered it very well.

Emma Clark

Athena federation queries were new to me, but the material covered them just in time.

William Lewis

I also faced Redshift Spectrum external tables in one tricky question.

Priya Patel

Glue ETL job orchestration was always tough for me, but the explanations simplified triggers and workflows nicely.

Ava Walker

Is this certification as valuable as Solutions Architect Professional in terms of career growth?

Isabella Hill

Error handling and retries in data pipelines were covered thoroughly. That helped me with tricky case study questions.

David Hall

The scenario-based practice sets were the most helpful part. They trained me for the case study type questions.

Grace Scott

I liked how the practice sets combined Redshift, Glue, and S3 into end-to-end pipelines.

Question # 1

A business intelligence (BI) engineer must create a dashboard to visualize how often certain keywords are used in relation to others in social media posts about a public figure. The BI engineer extracts the keywords from the posts and loads them into an Amazon Redshift table. The table displays the keywords and the count corresponding to each keyword. The BI engineer needs to display the top keywords with more emphasis on the most frequently used keywords. Which visual type in Amazon QuickSight meets these requirements?

A. Bar charts
B. Word clouds
C. Circle packing
D. Heat maps

Question # 2

A company uses an Amazon Redshift provisioned cluster for data analysis. The data is not encrypted at rest. A data analytics specialist must implement a solution to encrypt the data at rest. Which solution will meet this requirement with the LEAST operational overhead?

A. Use the ALTER TABLE command with the ENCODE option to update existing columns of the Redshift tables to use LZO encoding.
B. Export data from the existing Redshift cluster to Amazon S3 by using the UNLOAD command with the ENCRYPTED option. Create a new Redshift cluster with encryption configured. Load data into the new cluster by using the COPY command.
C. Create a manual snapshot of the existing Redshift cluster. Restore the snapshot into a new Redshift cluster with encryption configured.
D. Modify the existing Redshift cluster to use AWS Key Management Service (AWS KMS) encryption. Wait for the cluster to finish resizing.
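
Option D above describes modifying the cluster in place to use AWS KMS encryption. As a reference sketch only (the cluster identifier and KMS key ARN below are hypothetical), that change is a single boto3 ModifyCluster call, and Redshift re-encrypts the data in the background:

    import boto3

    redshift = boto3.client("redshift")

    # Hypothetical identifiers -- replace with your own cluster and key.
    response = redshift.modify_cluster(
        ClusterIdentifier="analytics-cluster",
        Encrypted=True,
        KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    )

    # The cluster re-encrypts existing data in the background; track progress here.
    print(response["Cluster"]["ClusterStatus"])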

Question # 3

A company's data science team is designing a shared dataset repository on a Windows server. The data repository will store a large amount of training data that the data science team commonly uses in its machine learning models. The data scientists create a random number of new datasets each day. The company needs a solution that provides persistent, scalable file storage and high levels of throughput and IOPS. The solution also must be highly available and must integrate with Active Directory for access control. Which solution will meet these requirements with the LEAST development effort?

A. Store datasets as files in an Amazon EMR cluster. Set the Active Directory domain for authentication.
B. Store datasets as files in Amazon FSx for Windows File Server. Set the Active Directory domain for authentication.
C. Store datasets as tables in a multi-node Amazon Redshift cluster. Set the Active Directory domain for authentication.
D. Store datasets as global tables in Amazon DynamoDB. Build an application to integrate authentication with the Active Directory domain.
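
Option B refers to Amazon FSx for Windows File Server joined to Active Directory. A minimal provisioning sketch, assuming hypothetical subnet, security group, and AWS Managed Microsoft AD IDs, might look like this:

    import boto3

    fsx = boto3.client("fsx")

    # Hypothetical subnet, security group, and directory IDs for illustration.
    response = fsx.create_file_system(
        FileSystemType="WINDOWS",
        StorageCapacity=2048,                      # GiB of SSD storage
        StorageType="SSD",
        SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        WindowsConfiguration={
            "ActiveDirectoryId": "d-1234567890",   # joins the AD domain for access control
            "DeploymentType": "MULTI_AZ_1",        # highly available across two AZs
            "PreferredSubnetId": "subnet-aaaa1111",
            "ThroughputCapacity": 512,             # MB/s
        },
    )
    print(response["FileSystem"]["FileSystemId"])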

Question # 4

A company is creating a data lake by using AWS Lake Formation. The data that will be stored in the data lake contains sensitive customer information and must be encrypted at rest using an AWS Key Management Service (AWS KMS) customer managed key to meet regulatory requirements. How can the company store the data in the data lake to meet these requirements?

A. Store the data in an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Register the Amazon EBS volume with Lake Formation.
B. Store the data in an Amazon S3 bucket by using server-side encryption with AWS KMS (SSE-KMS). Register the S3 location with Lake Formation.
C. Encrypt the data on the client side and store the encrypted data in an Amazon S3 bucket. Register the S3 location with Lake Formation.
D. Store the data in an Amazon S3 Glacier Flexible Retrieval vault bucket. Register the S3 Glacier Flexible Retrieval vault with Lake Formation.
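
Option B combines S3 default encryption with a customer managed KMS key and Lake Formation registration. A rough sketch of those two calls, with a hypothetical bucket name and key ARN, is shown below:

    import boto3

    s3 = boto3.client("s3")
    lakeformation = boto3.client("lakeformation")

    bucket = "example-data-lake-bucket"            # hypothetical bucket name
    kms_key_arn = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

    # Default SSE-KMS encryption with a customer managed key.
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": kms_key_arn,
                    }
                }
            ]
        },
    )

    # Register the encrypted S3 location with Lake Formation.
    lakeformation.register_resource(
        ResourceArn=f"arn:aws:s3:::{bucket}",
        UseServiceLinkedRole=True,
    )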

Question # 5

A financial company uses Amazon Athena to query data from an Amazon S3 data lake. Files are stored in the S3 data lake in Apache ORC format. Data analysts recently introduced nested fields in the data lake ORC files, and noticed that queries are taking longer to run in Athena. A data analyst discovered that more data than what is required is being scanned for the queries. What is the MOST operationally efficient solution to improve query performance?

A. Flatten nested data and create separate files for each nested dataset.
B. Use the Athena query engine V2 and push the query filter to the source ORC file.
C. Use Apache Parquet format instead of ORC format.
D. Recreate the data partition strategy and further narrow down the data filter criteria.

Question # 6

A company collects data from parking garages. Analysts have requested the ability to run reports in near real time about the number of vehicles in each garage. The company wants to build an ingestion pipeline that loads the data into an Amazon Redshift cluster. The solution must alert operations personnel when the number of vehicles in a particular garage exceeds a specific threshold. The alerting query will use garage threshold values as a static reference. The threshold values are stored in Amazon S3. What is the MOST operationally efficient solution that meets these requirements?

A. Use an Amazon Kinesis Data Firehose delivery stream to collect the data and to deliver the data to Amazon Redshift. Create an Amazon Kinesis Data Analytics application that uses the same delivery stream as an input source. Create a reference data source in Kinesis Data Analytics to temporarily store the threshold values from Amazon S3 and to compare the number of vehicles in a particular garage to the corresponding threshold value. Configure an AWS Lambda function to publish an Amazon Simple Notification Service (Amazon SNS) notification if the number of vehicles exceeds the threshold.
B. Use an Amazon Kinesis data stream to collect the data. Use an Amazon Kinesis Data Firehose delivery stream to deliver the data to Amazon Redshift. Create another Kinesis data stream to temporarily store the threshold values from Amazon S3. Send the delivery stream and the second data stream to Amazon Kinesis Data Analytics to compare the number of vehicles in a particular garage to the corresponding threshold value. Configure an AWS Lambda function to publish an Amazon Simple Notification Service (Amazon SNS) notification if the number of vehicles exceeds the threshold.
C. Use an Amazon Kinesis Data Firehose delivery stream to collect the data and to deliver the data to Amazon Redshift. Automatically initiate an AWS Lambda function that queries the data in Amazon Redshift. Configure the Lambda function to compare the number of vehicles in a particular garage to the corresponding threshold value from Amazon S3. Configure the Lambda function to also publish an Amazon Simple Notification Service (Amazon SNS) notification if the number of vehicles exceeds the threshold.
D. Use an Amazon Kinesis Data Firehose delivery stream to collect the data and to deliver the data to Amazon Redshift. Create an Amazon Kinesis Data Analytics application that uses the same delivery stream as an input source. Use Kinesis Data Analytics to compare the number of vehicles in a particular garage to the corresponding threshold value that is stored in a table as an in-application stream. Configure an AWS Lambda function as an output for the application to publish an Amazon Simple Queue Service (Amazon SQS) notification if the number of vehicles exceeds the threshold.

Question # 7

A company is designing a data warehouse to support business intelligence reporting. Users will access the executive dashboard heavily each Monday and Friday morning for 1 hour. These read-only queries will run on the active Amazon Redshift cluster, which runs on dc2.8xlarge compute nodes 24 hours a day, 7 days a week. There are three queues set up in workload management: Dashboard, ETL, and System. The Amazon Redshift cluster needs to process the queries without wait time. What is the MOST cost-effective way to ensure that the cluster processes these queries?

A. Perform a classic resize to place the cluster in read-only mode while adding an additional node to the cluster.
B. Enable automatic workload management.
C. Perform an elastic resize to add an additional node to the cluster.
D. Enable concurrency scaling for the Dashboard workload queue.
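
Option D refers to concurrency scaling, which is enabled per workload management queue through the wlm_json_configuration cluster parameter. The sketch below assumes manual WLM, a hypothetical parameter group name, and illustrative queue settings:

    import json
    import boto3

    redshift = boto3.client("redshift")

    # Illustrative manual WLM queues; the last queue acts as the default queue.
    wlm_config = [
        {"query_group": ["dashboard"], "query_concurrency": 5,
         "concurrency_scaling": "auto"},            # Dashboard queue bursts on demand
        {"query_group": ["etl"], "query_concurrency": 5},   # ETL queue
        {"query_concurrency": 5},                            # default/System queue
    ]

    redshift.modify_cluster_parameter_group(
        ParameterGroupName="analytics-wlm",          # hypothetical parameter group
        Parameters=[
            {
                "ParameterName": "wlm_json_configuration",
                "ParameterValue": json.dumps(wlm_config),
            }
        ],
    )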

Question # 8

A company analyzes historical data and needs to query data that is stored in Amazon S3. New data is generated daily as .csv files that are stored in Amazon S3. The company's data analysts are using Amazon Athena to perform SQL queries against a recent subset of the overall data. The amount of data that is ingested into Amazon S3 has increased to 5 PB over time. The query latency also has increased. The company needs to segment the data to reduce the amount of data that is scanned. Which solutions will improve query performance? (Select TWO.)

A. Configure Athena to use S3 Select to load only the files of the data subset.
B. Create the data subset in Apache Parquet format each day by using the Athena CREATE TABLE AS SELECT (CTAS) statement. Query the Parquet data.
C. Run a daily AWS Glue ETL job to convert the data files to Apache Parquet format and to partition the converted files. Create a periodic AWS Glue crawler to automatically crawl the partitioned data each day.
D. Create an S3 gateway endpoint. Configure VPC routing to access Amazon S3 through the gateway endpoint.
E. Use MySQL Workbench on an Amazon EC2 instance. Connect to Athena by using a JDBC connector. Run the query from MySQL Workbench instead of Athena directly.
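
Option B refers to Athena's CREATE TABLE AS SELECT (CTAS). A minimal sketch of a daily CTAS that writes a partitioned Parquet subset, using hypothetical database, table, column, and bucket names, could look like this:

    import boto3

    athena = boto3.client("athena")

    # Hypothetical tables, columns, and S3 locations for illustration only.
    ctas_query = """
    CREATE TABLE analytics_db.daily_subset
    WITH (
        format = 'PARQUET',
        external_location = 's3://example-curated-bucket/daily_subset/',
        partitioned_by = ARRAY['event_date']
    ) AS
    SELECT customer_id, order_total, event_date
    FROM analytics_db.raw_csv_events
    WHERE event_date >= '2025-11-18'   -- placeholder cutoff for the recent subset
    """

    athena.start_query_execution(
        QueryString=ctas_query,
        QueryExecutionContext={"Database": "analytics_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )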

Question # 9

A company wants to use a data lake that is hosted on Amazon S3 to provide analytics services for historical data. The data lake consists of 800 tables but is expected to grow to thousands of tables. More than 50 departments use the tables, and each department has hundreds of users. Different departments need access to specific tables and columns. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an IAM role for each department. Use AWS Lake Formation based access control to grant each IAM role access to specific tables and columns. Use Amazon Athena to analyze the data.
B. Create an Amazon Redshift cluster for each department. Use AWS Glue to ingest into the Redshift cluster only the tables and columns that are relevant to that department. Create Redshift database users. Grant the users access to the relevant department's Redshift cluster. Use Amazon Redshift to analyze the data.
C. Create an IAM role for each department. Use AWS Lake Formation tag-based access control to grant each IAM role access to only the relevant resources. Create LF-tags that are attached to tables and columns. Use Amazon Athena to analyze the data.
D. Create an Amazon EMR cluster for each department. Configure an IAM service role for each EMR cluster to access relevant S3 files. For each department's users, create an IAM role that provides access to the relevant EMR cluster. Use Amazon EMR to analyze the data.
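
Options A and C rely on Lake Formation permissions; the tag-based variant in option C uses LF-tags. A rough sketch with a hypothetical tag key, table, and department role follows:

    import boto3

    lakeformation = boto3.client("lakeformation")

    # Hypothetical tag, table, and IAM role names for one department.
    lakeformation.create_lf_tag(TagKey="department", TagValues=["sales", "finance"])

    lakeformation.add_lf_tags_to_resource(
        Resource={"Table": {"DatabaseName": "datalake_db", "Name": "sales_orders"}},
        LFTags=[{"TagKey": "department", "TagValues": ["sales"]}],
    )

    # Grant the sales role SELECT on everything tagged department=sales.
    lakeformation.grant_permissions(
        Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/SalesAnalysts"},
        Resource={
            "LFTagPolicy": {
                "ResourceType": "TABLE",
                "Expression": [{"TagKey": "department", "TagValues": ["sales"]}],
            }
        },
        Permissions=["SELECT"],
    )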

Question # 10

A data analyst is designing an Amazon QuickSight dashboard using centralized sales data that resides in Amazon Redshift. The dashboard must be restricted so that a salesperson in Sydney, Australia, can see only the Australia view and that a salesperson in New York can see only United States (US) data. What should the data analyst do to ensure the appropriate data security is in place?

A. Place the data sources for Australia and the US into separate SPICE capacity pools.
B. Set up an Amazon Redshift VPC security group for Australia and the US.
C. Deploy QuickSight Enterprise edition to implement row-level security (RLS) to the sales table.
D. Deploy QuickSight Enterprise edition and set up different VPC security groups for Australia and the US.

Question # 11

A gaming company is building a serverless data lake. The company is ingesting streaming data into Amazon Kinesis Data Streams and is writing the data to Amazon S3 through Amazon Kinesis Data Firehose. The company is using 10 MB as the S3 buffer size and is using 90 seconds as the buffer interval. The company runs an AWS Glue ETL job to merge and transform the data to a different format before writing the data back to Amazon S3. Recently, the company has experienced substantial growth in its data volume. The AWS Glue ETL jobs are frequently showing an OutOfMemoryError error. Which solutions will resolve this issue without incurring additional costs? (Select TWO.)

A. Place the small files into one S3 folder. Define one single table for the small S3 files in AWS Glue Data Catalog. Rerun the AWS Glue ETL jobs against this AWS Glue table.
B. Create an AWS Lambda function to merge small S3 files and invoke them periodically. Run the AWS Glue ETL jobs after successful completion of the Lambda function.
C. Run the S3DistCp utility in Amazon EMR to merge a large number of small S3 files before running the AWS Glue ETL jobs.
D. Use the groupFiles setting in the AWS Glue ETL job to merge small S3 files and rerun AWS Glue ETL jobs.
E. Update the Kinesis Data Firehose S3 buffer size to 128 MB. Update the buffer interval to 900 seconds.
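
Option D refers to AWS Glue file grouping. Inside a Glue ETL job (where the awsglue libraries are available), grouping is set through connection options; the S3 path and group size below are hypothetical:

    # A minimal AWS Glue ETL sketch; it only runs inside a Glue job environment.
    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # groupFiles/groupSize coalesce many small S3 objects into larger read groups,
    # which reduces memory pressure and helps avoid OutOfMemoryError.
    frame = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={
            "paths": ["s3://example-raw-bucket/streaming-data/"],
            "recurse": True,
            "groupFiles": "inPartition",
            "groupSize": "134217728",   # target roughly 128 MB per group
        },
        format="json",
    )

Option E, by contrast, simply raises the Firehose buffering hints so fewer, larger objects land in Amazon S3 in the first place.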

Question # 12

A retail company has 15 stores across 6 cities in the United States. Once a month, the sales team requests a visualization in Amazon QuickSight that provides the ability to easily identify revenue trends across cities and stores. The visualization also helps identify outliers that need to be examined with further analysis. Which visual type in QuickSight meets the sales team's requirements?

A. Geospatial chart
B. Line chart
C. Heat map
D. Tree map

Question # 13

A company uses Amazon EC2 instances to receive files from external vendors throughout each day. At the end of each day, the EC2 instances combine the files into a single file, perform gzip compression, and upload the single file to an Amazon S3 bucket. The total size of all the files is approximately 100 GB each day. When the files are uploaded to Amazon S3, an AWS Batch job runs a COPY command to load the files into an Amazon Redshift cluster. Which solution will MOST accelerate the COPY process?

A. Upload the individual files to Amazon S3. Run the COPY command as soon as the files become available.
B. Split the files so that the number of files is equal to a multiple of the number of slices in the Redshift cluster. Compress and upload the files to Amazon S3. Run the COPY command on the files.
C. Split the files so that each file uses 50% of the free storage on each compute node in the Redshift cluster. Compress and upload the files to Amazon S3. Run the COPY command on the files.
D. Apply sharding by breaking up the files so that the DISTKEY columns with the same values go to the same file. Compress and upload the sharded files to Amazon S3. Run the COPY command on the files.
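
Option B splits the upload into gzip-compressed parts that share a common S3 prefix so COPY can load them in parallel across slices. A sketch of such a COPY, issued here through the Redshift Data API with hypothetical names, looks like this:

    import boto3

    redshift_data = boto3.client("redshift-data")

    # COPY reads every object under the prefix in parallel, one or more per slice.
    copy_sql = """
    COPY vendor_files
    FROM 's3://example-ingest-bucket/2025-11-25/part_'
    IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
    GZIP
    FORMAT AS CSV;
    """

    redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",   # hypothetical cluster and database
        Database="analytics",
        DbUser="etl_user",
        Sql=copy_sql,
    )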

Question # 14

A bank is building an Amazon S3 data lake. The bank wants a single data repository for customer data needs, such as personalized recommendations. The bank needs to use Amazon Kinesis Data Firehose to ingest customers' personal information, bank accounts, and transactions in near real time from a transactional relational database. All personally identifiable information (PII) that is stored in the S3 bucket must be masked. The bank has enabled versioning for the S3 bucket. Which solution will meet these requirements?

A. Invoke an AWS Lambda function from Kinesis Data Firehose to mask the PII before Kinesis Data Firehose delivers the data to the S3 bucket.
B. Use Amazon Macie to scan the S3 bucket. Configure Macie to discover PII. Invoke an AWS Lambda function from S3 events to mask the PII.
C. Configure server-side encryption (SSE) for the S3 bucket. Invoke an AWS Lambda function from S3 events to mask the PII.
D. Create an AWS Lambda function to read the objects, mask the PII, and store the objects back with the same key. Invoke the Lambda function from S3 events.
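
Option A uses a Kinesis Data Firehose data-transformation Lambda function. The sketch below shows the general shape of such a handler; the field names being masked are hypothetical:

    import base64
    import json

    def lambda_handler(event, context):
        """Firehose transformation handler that masks PII fields before
        Firehose delivers the records to Amazon S3."""
        output = []
        for record in event["records"]:
            payload = json.loads(base64.b64decode(record["data"]))

            # Hypothetical field names -- mask whatever PII your records carry.
            for field in ("customer_name", "account_number", "email"):
                if field in payload:
                    payload[field] = "****"

            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(json.dumps(payload).encode()).decode(),
            })
        return {"records": output}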

Question # 15

A company developed a new voting results reporting website that uses Amazon Kinesis Data Firehose to deliver full logs from AWS WAF to an Amazon S3 bucket. The company is now seeking a solution to perform this infrequent data analysis with data visualization capabilities in a way that requires minimal development effort. Which solution MOST cost-effectively meets these requirements?

A. Use an AWS Glue crawler to create and update a table in the AWS Glue Data Catalog from the logs. Use Amazon Athena to perform ad-hoc analyses. Develop data visualizations by using Amazon QuickSight.
B. Configure Kinesis Data Firehose to deliver the logs to an Amazon OpenSearch Service cluster. Use OpenSearch Service REST APIs to analyze the data. Visualize the data by building an OpenSearch Service dashboard.
C. Create an AWS Lambda function to convert the logs to CSV format. Add the Lambda function to the Kinesis Data Firehose transformation configuration. Use Amazon Redshift to perform a one-time analysis of the logs by using SQL queries. Develop data visualizations by using Amazon QuickSight.
D. Create an Amazon EMR cluster and use Amazon S3 as the data source. Create an Apache Spark job to perform a one-time analysis of the logs. Develop data visualizations by using Amazon QuickSight.
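
Option A starts with an AWS Glue crawler over the delivered WAF logs. A minimal sketch, with a hypothetical crawler name, IAM role, database, and S3 path, might be:

    import boto3

    glue = boto3.client("glue")

    # Hypothetical names; the role needs read access to the log bucket.
    glue.create_crawler(
        Name="waf-logs-crawler",
        Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
        DatabaseName="waf_logs_db",
        Targets={"S3Targets": [{"Path": "s3://example-waf-logs-bucket/firehose/"}]},
        Schedule="cron(0 2 * * ? *)",   # refresh the table definition nightly
    )

    glue.start_crawler(Name="waf-logs-crawler")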

Question # 16

A large ecommerce company uses Amazon DynamoDB with provisioned read capacity and auto-scaled write capacity to store its product catalog. The company uses Apache HiveQL statements on an Amazon EMR cluster to query the DynamoDB table. After the company announced a sale on all of its products, wait times for each query have increased. The data analyst has determined that the longer wait times are being caused by throttling when querying the table. Which solution will solve this issue?

A. Increase the size of the EMR nodes that are provisioned.
B. Increase the number of EMR nodes that are in the cluster.
C. Increase the DynamoDB table's provisioned write throughput.
D. Increase the DynamoDB table's provisioned read throughput.
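
Option D refers to raising the table's provisioned read throughput. As a sketch only (the table name and capacity values are hypothetical; UpdateTable expects both read and write units):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Hypothetical table name and capacity values.
    dynamodb.update_table(
        TableName="product-catalog",
        ProvisionedThroughput={
            "ReadCapacityUnits": 2000,    # raised to absorb the query load
            "WriteCapacityUnits": 500,
        },
    )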

Question # 17

A social media company is using business intelligence tools to analyze data for forecasting. The company is using Apache Kafka to ingest data. The company wants to build dynamic dashboards that include machine learning (ML) insights to forecast key business trends. The dashboards must show recent batched data that is not more than 75 minutes old. Various teams at the company want to view the dashboards by using Amazon QuickSight with ML insights. Which solution will meet these requirements?

A. Replace Kafka with Amazon Managed Streaming for Apache Kafka (Amazon MSK). Use AWS Data Exchange to store the data in Amazon S3. Use SPICE in QuickSight Enterprise edition to refresh the data from Amazon S3 each hour. Use QuickSight to create a dynamic dashboard that includes forecasting and ML insights.
B. Replace Kafka with an Amazon Kinesis data stream. Use AWS Data Exchange to store the data in Amazon S3. Use SPICE in QuickSight Standard edition to refresh the data from Amazon S3 each hour. Use QuickSight to create a dynamic dashboard that includes forecasting and ML insights.
C. Configure the Kafka-Kinesis-Connector to publish the data to an Amazon Kinesis Data Firehose delivery stream. Configure the delivery stream to store the data in Amazon S3 with a max buffer size of 60 seconds. Use SPICE in QuickSight Enterprise edition to refresh the data from Amazon S3 each hour. Use QuickSight to create a dynamic dashboard that includes forecasting and ML insights.
D. Configure the Kafka-Kinesis-Connector to publish the data to an Amazon Kinesis Data Firehose delivery stream. Configure the delivery stream to store the data in Amazon S3 with a max buffer size of 60 seconds. Refresh the data in QuickSight Standard edition SPICE from Amazon S3 by using a scheduled AWS Lambda function. Configure the Lambda function to run every 75 minutes and to invoke the QuickSight API to create a dynamic dashboard that includes forecasting and ML insights.

Question # 18

A company recently created a test AWS account to use for a development environment. The company also created a production AWS account in another AWS Region. As part of its security testing, the company wants to send log data from Amazon CloudWatch Logs in its production account to an Amazon Kinesis data stream in its test account. Which solution will allow the company to accomplish this goal?

A. Create a subscription filter in the production account's CloudWatch Logs to target the Kinesis data stream in the test account as its destination. In the test account, create an IAM role that grants access to the Kinesis data stream and the CloudWatch Logs resources in the production account.
B. In the test account, create an IAM role that grants access to the Kinesis data stream and the CloudWatch Logs resources in the production account. Create a destination data stream in Kinesis Data Streams in the test account with an IAM role and a trust policy that allow CloudWatch Logs in the production account to write to the test account.
C. In the test account, create an IAM role that grants access to the Kinesis data stream and the CloudWatch Logs resources in the production account. Create a destination data stream in Kinesis Data Streams in the test account with an IAM role and a trust policy that allow CloudWatch Logs in the production account to write to the test account.
D. Create a destination data stream in Kinesis Data Streams in the test account with an IAM role and a trust policy that allow CloudWatch Logs in the production account to write to the test account. Create a subscription filter in the production account's CloudWatch Logs to target the Kinesis data stream in the test account as its destination.
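
Options A and D both involve a CloudWatch Logs destination and a subscription filter across accounts. The sketch below shows the two halves of that wiring with hypothetical account IDs, ARNs, and names; each half would run with credentials for its own account:

    import json
    import boto3

    # In the test (destination) account: wrap the Kinesis stream in a
    # CloudWatch Logs destination and allow the production account to use it.
    logs_test = boto3.client("logs")
    destination = logs_test.put_destination(
        destinationName="prod-log-destination",
        targetArn="arn:aws:kinesis:us-west-2:222233334444:stream/security-logs",
        roleArn="arn:aws:iam::222233334444:role/CWLtoKinesisRole",
    )

    logs_test.put_destination_policy(
        destinationName="prod-log-destination",
        accessPolicy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": "111122223333"},   # production account ID
                "Action": "logs:PutSubscriptionFilter",
                "Resource": destination["destination"]["arn"],
            }],
        }),
    )

    # In the production (sender) account: point a subscription filter at it.
    logs_prod = boto3.client("logs")   # assumes production-account credentials
    logs_prod.put_subscription_filter(
        logGroupName="/aws/security/app-logs",
        filterName="to-test-account",
        filterPattern="",              # empty pattern forwards every event
        destinationArn=destination["destination"]["arn"],
    )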

Question # 19

A banking company wants to collect large volumes of transactional data using Amazon Kinesis Data Streams for real-time analytics. The company uses PutRecord to send data to Amazon Kinesis, and has observed network outages during certain times of the day. The company wants to obtain exactly once semantics for the entire processing pipeline. What should the company do to obtain these characteristics?

A. Design the application so it can remove duplicates during processing by embedding a unique ID in each record.
B. Rely on the processing semantics of Amazon Kinesis Data Analytics to avoid duplicate processing of events.
C. Design the data producer so events are not ingested into Kinesis Data Streams multiple times.
D. Rely on the exactly once processing semantics of Apache Flink and Apache Spark Streaming included in Amazon EMR.
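
Option A describes embedding a unique ID in each record and removing duplicates during processing. A rough sketch of that pattern follows; the stream name and fields are hypothetical, and the in-memory set stands in for a durable store such as DynamoDB:

    import json
    import uuid
    import boto3

    kinesis = boto3.client("kinesis")

    def send_transaction(stream_name, transaction):
        """Producer side: embed a unique ID so retried PutRecord calls can be
        detected downstream even if the same payload lands twice."""
        transaction["event_id"] = str(uuid.uuid4())
        kinesis.put_record(
            StreamName=stream_name,
            Data=json.dumps(transaction).encode(),
            PartitionKey=transaction["account_id"],
        )

    # Consumer side: a durable store would replace this in-memory set.
    _seen_event_ids = set()

    def process_once(record_payload):
        event = json.loads(record_payload)
        if event["event_id"] in _seen_event_ids:
            return                      # duplicate delivery -- skip it
        _seen_event_ids.add(event["event_id"])
        # ... actual processing goes here ...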

Question # 20

A company uses Amazon Kinesis Data Streams to ingest and process customer behavior information from application users each day. A data analytics specialist notices that its data stream is throttling. The specialist has turned on enhanced monitoring for the Kinesis data stream and has verified that the data stream did not exceed the data limits. The specialist discovers that there are hot shards. Which solution will resolve this issue?

A. Use a random partition key to ingest the records.
B. Increase the number of shards. Split the size of the log records.
C. Limit the number of records that are sent each second by the producer to match the capacity of the stream.
D. Decrease the size of the records that are sent from the producer to match the capacity of the stream.
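
Option A refers to using a random partition key so records spread evenly across shards instead of concentrating on a hot shard. A minimal producer sketch with a hypothetical stream name:

    import json
    import uuid
    import boto3

    kinesis = boto3.client("kinesis")

    def put_behavior_event(event):
        """Use a random partition key so records distribute evenly across
        shards rather than piling onto a single hot shard."""
        kinesis.put_record(
            StreamName="customer-behavior",     # hypothetical stream name
            Data=json.dumps(event).encode(),
            PartitionKey=str(uuid.uuid4()),
        )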