Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24 hours a day. Your business analysts have experience only with using an SQL interface. How
should you store the data to optimize it for ease of analysis?
Explanation:
Correct Answer - A
Option A is correct. BigQuery is the only one of these Google products that supports an SQL interface and can handle petabyte-scale data.
Question 2 (Correct)
Domain: Other
If you have object versioning enabled on a multi-regional bucket, what will the following lifecycle config file do?
A. Archive objects older than 30 days (the second rule doesn’t do anything)
B. Delete objects older than 30 days (the second rule doesn’t do anything)
C. Archive objects older than 30 days and move objects to Coldline Storage after 365 days
D. Delete all the versions that are not live and 30 days old. Move the remaining current versions to Coldline after 365 days. (correct)
Explanation:
Correct Answer is D
With the JSON formatted, the configuration is easy to read even without deep knowledge of the Storage lifecycle syntax (a reconstruction of the configuration is shown after the note below):
The first rule takes the Delete action if the object is 30 days old and isLive is false (i.e. it is a non-current version).
The second rule takes the SetStorageClass action to COLDLINE if the object is 365 days old and its storage class matches MULTI_REGIONAL.
The Correct Answer is D: Delete all the versions that are not current and 30 days old. Move the remaining current versions to Coldline after 365 days.
Note:
a. isLive: false => the object must be a non-current version
Put another way, the first rule has two conditions:
a. age of 30 days
b. isLive: false
If both conditions are met, the object gets the action, i.e. Delete.
Here isLive: false means the object must be a non-current version, which implies versioning is enabled for that bucket.
The second rule also has two conditions:
a. age of 365 days
b. matchesStorageClass: MULTI_REGIONAL
If an object has been in Multi-Regional storage for over 365 days, the action moves it to the Coldline storage class.
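The original lifecycle configuration file is not reproduced above; a reconstruction of the config the explanation walks through might look like the following (field names follow the gsutil lifecycle JSON format):
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30, "isLive": false}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 365, "matchesStorageClass": ["MULTI_REGIONAL"]}
    }
  ]
}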
https://cloud.google.com/solutions/data-lifecycle-cloud-platform
Question 3 (Correct)
Domain: Other
The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and
normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk. What should they
change to get better performance from this system?
Explanation:
Correct Answer E
All other answers are either not applicable or not supported by the question scenario.
Comparing the effect of memory and disk size on IO performance shows that answer E is a clear, straightforward winner over answer C as well as the others: increasing the disk size to 500 GB has a far greater effect on IO performance than either the original configuration or a higher-memory configuration.
Taking read IOPS as an example, with all instances configured with 8 vCPUs:
80 GB disk / 30 GB memory = 2,400 IOPS; 500 GB disk / 30 GB memory = 15,000 IOPS; 80 GB disk / 52 GB memory = 2,400 IOPS
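Persistent disk performance scales with disk size, so the change itself is a one-line operation. A minimal sketch, assuming a hypothetical disk name and zone (the filesystem still has to be grown afterwards, e.g. with resize2fs):
# Resize the SSD persistent disk attached to the database VM to 500 GB
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a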
Question 4 (Correct)
Domain: Other
TerramEarth needs to migrate legacy monolithic applications into containerized RESTful microservices.
The development team is experimenting with the use of packaged procedures with containers in a completely serverless environment, using Cloud Run.
Before migrating the existing code into production it was decided to perform a lift and shift of the monolithic application and to develop the new features that are required with serverless
microservices.
So, they want to carry out a gradual migration, activating the new microservice functionalities while maintaining the monolithic application for all the other activities.
The problem now is how to integrate the legacy monolithic application with the new microservices to have a consistent interface and simple management.
Explanation:
The first solution (A+D) uses HTTP(S) Load Balancing and NEGs.
Network endpoint groups (NEGs) let you use serverless endpoints as backends for external HTTP(S) Load Balancing: the serverless NEG is attached to a backend service, and the load balancer's URL map and target proxy route requests to it. In this way, you can integrate seamlessly with the legacy application.
An alternative solution is API Management, which creates a facade and integrates different applications. GCP has 3 API Management solutions: Cloud Endpoints, Apigee, and API Gateway. API
Gateway is only for serverless back ends.
B is wrong because developing a proxy inside the monolithic application means continually updating the old app, with possible service interruptions and useless toil.
E is wrong because the App Engine flexible environment manages containers but cannot integrate the legacy monolithic application with the new functions.
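As an illustration of the A+D approach, a serverless NEG pointing at a Cloud Run service can be attached to a global backend service behind the HTTP(S) load balancer; the names below are hypothetical:
# Create a serverless NEG that points at the Cloud Run microservice
gcloud compute network-endpoint-groups create orders-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=orders-service

# Attach it to a global backend service referenced by the load balancer's URL map
gcloud compute backend-services create orders-backend --global
gcloud compute backend-services add-backend orders-backend --global \
    --network-endpoint-group=orders-neg \
    --network-endpoint-group-region=us-central1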
https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts
https://cloud.google.com/endpoints
Question 5 (Correct)
Domain: Other
You can SSH into an instance from another instance in the same VPC by its internal IP address, but not its external IP address. What is the possible reason for this scenario?
A. The outgoing instance does not have correct permission granted to its service account.
B. The internal IP address is disabled.
C. The SSH firewall rule is restricted only to the internal VPC. (correct)
D. The receiving instance has an ephemeral address instead of a reserved address.
Explanation:
Correct Answer - C
Instances can have both internal and external IP addresses. When connecting to another instance by its external address, traffic leaves your internal network for the external Internet and comes
back in to reach the instance at its external address. If SSH traffic is restricted to the local VPC, the firewall will reject this attempt because it arrives from an external source.
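A firewall rule with this behavior might look like the following sketch, which allows SSH only from the default network's internal range (rule name and range are illustrative):
# Allow SSH only from internal addresses of the default auto-mode network
gcloud compute firewall-rules create allow-ssh-internal \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:22 \
    --source-ranges=10.128.0.0/9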
Reference:
https://cloud.google.com/vpc/docs/firewalls#firewall_rules_in
Question 6 (Incorrect)
Domain: Other
You are helping the QA team to roll out a new load-testing tool to test the scalability of your primary cloud services that run on Google Compute Engine with Cloud Bigtable. Which three
requirements should they include? Choose 3 answers.
A. Ensure all third-party systems your services used are capable of handling high load.
B. Instrument the load-testing tool and the target services with detailed logging and metrics collection. (correct)
C. Create a separate Google Cloud project to use for the load-testing environment. (correct)
D. Instrument the production services to record every transaction for replay by the load-testing tool. (incorrect)
E. Ensure that the load tests validate the performance of Cloud Bigtable. (correct)
F. Schedule the load-testing tool to regularly run against the production environment.
Explanation:
Feedback
A - Ensure all third-party systems your services use are capable of handling high load.
This is out of scope – it is beyond your control. You may test them, but you cannot control them. If you suspect that third-party dependencies are causing the overall performance issue, you can use
integration tests to identify and isolate them. This should not be your focus.
B (Correct answer) - Instrument the load-testing tool and the target services with detailed logging and metrics collection.
It is a normal requirement and practice in load testing to collect test results with detailed, measurable metrics and historical logs; otherwise the load test would be meaningless.
C (Correct answer) - Create a separate Google Cloud project to use for the load-testing environment.
D - Instrument the production services to record every transaction for replay by the load- testing tool.
This would be far too much: not only may detailed instrumentation impact production performance, the instrumentation itself may also distort the test results. Remember, Bigtable is a NoSQL database
for multi-terabyte or even petabyte workloads that need high-throughput, low-latency reads and writes.
E (Correct answer) - Ensure that the load tests validate the performance of Cloud Bigtable.
At first thought, because Bigtable is a managed service that scales seamlessly, it may seem there is no need to load test it. But as you probably already know, many factors affect Bigtable performance,
the most common being a poorly designed table structure.
With a poorly performing Bigtable, the cluster keeps scaling, adding more and more nodes as load increases and costing more and more. No matter how well the cluster is managed, this is
exactly the situation you need to prevent. In fact, validating the performance of Cloud Bigtable is one of the most important goals of the load tests.
F - Schedule the load-testing tool to regularly run against the production environment.
You should not run load tests regularly against the production environment; in fact, some suggest that load testing in production should be avoided entirely.
Your company has reserved a monthly budget for your project. You want to be informed automatically of your project spend so that you can take action when you approach the limit. What should
you do?
Explanation:
Correct answer B
Feedback
A is not correct because this will just give you the spend but will not alert you when you approach the limit.
B is correct because a budget alert will warn you when you reach the limits set.
C is not correct because those budgets are only on App Engine, not other GCP resources. Furthermore, this makes subsequent requests fail, rather than alerting you in time so you can mitigate
appropriately.
D is not correct because if you exceed the budget, you will still be billed for it. Furthermore, there is no alerting when you hit that limit by GCP.
Reference
Question 8 (Correct)
Domain: Other
MountKirk Games uses Kubernetes and Google Kubernetes Engine. For management, it is important to use an open, cloud-native platform without vendor lock-in.
They also need to use advanced APIs of GCP services and want to do so securely, using standard methodologies and following Google-recommended practices, but above all efficiently and with maximum
security.
A. API keys
B. Service Accounts
C. Workload identity (correct)
D. Workload identity federation
Explanation:
Correct Answer: C
The preferred way to access services in a secured and authorized way is with Kubernetes service accounts, which are not the same as GCP service accounts.
With Workload Identity, you can configure a Kubernetes service account so that workloads will automatically authenticate as the corresponding Google service account when accessing GCP APIs.
Moreover, Workload Identity is the recommended way for applications in GKE to securely access GCP APIs because it lets you manage identities and authorization in a standard, secure and easy
way.
A is wrong because API keys offer minimal security and no authorization, just identification.
B is wrong because GCP Service Accounts are GCP proprietary. Kubernetes is open and works with Kubernetes service accounts.
D is wrong because Workload identity federation is useful when you have an external identity provider such as Amazon Web Services (AWS), Azure Active Directory (AD), or an OIDC-
compatible provider.
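A minimal sketch of the Workload Identity setup, assuming hypothetical cluster, namespace, and service-account names:
# Enable Workload Identity on an existing cluster
gcloud container clusters update game-cluster \
    --region=us-central1 \
    --workload-pool=my-project.svc.id.goog

# Let the Kubernetes service account impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding backend-gsa@my-project.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:my-project.svc.id.goog[game-ns/backend-ksa]"

# Annotate the Kubernetes service account with the Google service account
kubectl annotate serviceaccount backend-ksa \
    --namespace=game-ns \
    iam.gke.io/gcp-service-account=backend-gsa@my-project.iam.gserviceaccount.com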
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
https://cloud.google.com/docs/authentication
Question 9 (Correct)
Domain: Other
You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spending. What should you do?
A. Write a lifecycle management rule in XML and push it to the bucket with gsutil.
B. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days.
C. Schedule a cron script using gsutil ls -1 gs://backups/** to find and remove items older than 90 days and schedule it with cron.
D. Write a lifecycle management rule in JSON and push it to the bucket with gsutil. (correct)
Explanation:
Correct Answer - D
Option A – Write a lifecycle management rule in XML and push it to the bucket with gsutil: an XML lifecycle configuration can only be set for an existing bucket with a PUT API request (NOT with the
"gsutil lifecycle" command), and the XML document containing the lifecycle configuration must be included in the request body, so gsutil cannot push it. https://cloud.google.com/storage/docs/xml-api/put-bucket-lifecycle#request_body_elements
B and C can be eliminated. They do a similar thing in slightly different ways: write a script that lists objects and gets their timestamps,
deletes any object older than 90 days, and then schedule a cron job for the recurring process.
However, gsutil ls -l/-lr does not list noncurrent (versioned) objects; that requires gsutil ls -a. With this approach, versioned archives would not be deleted.
D (Correct answer) – Write a lifecycle management rule in JSON and push it to the bucket with gsutil.
· Create a .json file with the lifecycle configuration rules you would like to apply (see examples below).
The following lifecycle configuration JSON document specifies that all objects in this bucket that are more than 90 days old will be deleted automatically:
{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}
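Assuming the rules above are saved as lifecycle_config.json, they can then be applied to the bucket with gsutil (the bucket name gs://backups comes from the answer options):
gsutil lifecycle set lifecycle_config.json gs://backups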
Question 10 (Correct)
Domain: Other
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes. What should you do? Select one.
A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
C. Update the existing Kubernetes Engine cluster with the following command: gcloud container clusters update CLUSTER_NAME --enable-autoscaling --min-nodes=1 --max-nodes=10 (correct)
D. Create a new Kubernetes Engine cluster with the following command: gcloud container clusters create CLUSTER_NAME --enable-autoscaling --min-nodes=1 --max-nodes=10 and
redeploy your application
Explanation:
A - Add additional nodes to your Container Engine cluster using the following command: this resizes the cluster to a fixed size; it does not enable autoscaling.
B - Add a tag to the instances in the cluster with the following command:
First, this is the command for adding tags to an instance; second, simply adding a tag will not automatically enable autoscaling.
C (Correct answer) - Update the existing Container Engine cluster with the following command:
This is the right command, please see “Enabling autoscaling for an existing node pool”
D - Create a new Container Engine cluster with the following command, and redeploy your application
This command enables autoscaling when you create a new cluster; it does not apply to an already running cluster. Please see "Creating a cluster with autoscaling"
Reference Resource
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
Question 11 (Correct)
Domain: Other
Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and
resources, but they cannot access staging or production resources. The staging environment needs access to some services from production.
What should you do to isolate development environments from staging and production?
A. Create a project for development and test and another for staging and production.
B. Create a network for development and test and another for staging and production.
C. Create one subnetwork for development and another for staging and production.
D. Create one project for development and test, a second for staging and a third for production. (correct)
Explanation:
Correct Answer D
Explanation
D (Correct answer) – among the available answers, D is the closest solution to meeting the isolation and inter-access requirements.
In this example, you create one project for developers and testers, another project for staging, and a third for production. The staging and production environments can access the
resources they need through service accounts that are cross-granted only for the required access.
B – This is incomplete and not the best solution. Network isolation separates resource communication; projects are the unit of IAM resource access control.
If the question meant putting resources in different networks but in the same project, that is not enough to keep developers from accessing staging/production unless an access policy is set on each specific
resource, which is not only against best practice but also hard to manage, especially since Mountkirk Games is not a small shop.
Answer A does enable the isolation, but sharing staging and production in the same project risks cross-access of resources through human error.
On the other hand, if the question meant that developers and testers are in the same group called Development (based on "What should you do to isolate development environments from staging and
production?"), D could be the answer since it isolates development from staging and production, though it does not address inter-project access.
Overall, judging from Mountkirk Games' application, environments, and company size, they will most likely have separate development and testing teams that share access to some resources,
such as test data and computing resources. So answer option D is closest to the requirements.
Question 12 (Correct)
Domain: Other
The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis
The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss. Which process should you implement?
A. Append metadata to file body. Compress individual files. Name files with a random prefix pattern. Save files to one bucket. (correct)
B. Batch every 10,000 events with a single manifest file for metadata. Compress event files and manifest file into a single archive file. Name files using serverName-EventSequence. Create a
new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.
C. Compress individual files. Name files with serverName-EventSequence. Save files to one bucket. Set custom metadata headers for each object after saving.
D. Append metadata to file body. Compress individual files. Name files with serverName-Timestamp. Create a new bucket if bucket is older than 1 hour and save individual files to the new
bucket. Otherwise, save files to existing bucket
Explanation:
Correct Answer A
Feedback
Avoid using sequential filenames such as timestamp-based filenames if you are uploading many files in parallel. Because files with sequential names are stored consecutively, they are likely to hit
the same backend server, meaning that throughput will be constrained. In order to achieve optimal throughput, you can add the hash of the sequence number as part of the filename to make it non-
sequential. https://cloud.google.com/storage/docs/best-practices
Answer A (Correct) – since it uses “Name files with a random prefix pattern.”
Answers B, C, and D are incorrect since they use either "Name files with serverName-EventSequence" or "Name files with serverName-Timestamp", which causes the files to be unevenly distributed
across the backend. For example, a specific server may generate many more events than the others, or the system may generate many more events in certain time periods than in others.
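A random prefix can be derived from the original (sequential) name itself, for example by hashing it; the following shell sketch is illustrative (object and bucket names are hypothetical):
# Prepend a short hash of the sequential name so uploads spread across backend servers
OBJECT="server42-0001234.gz"
PREFIX=$(echo -n "$OBJECT" | md5sum | cut -c1-6)
gsutil cp "$OBJECT" "gs://event-archive-bucket/${PREFIX}-${OBJECT}"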
Question 13 (Incorrect)
Domain: Other
Your company places a high value on being responsive and meeting customer needs quickly. Their primary business objectives are release speed and agility. You want to reduce the chance of
security errors being accidentally introduced. Which two actions can you take? (Select TWO)
Explanation:
A (Correct answer) – it is generally considered good practice to use source code security analyzers integrated with your CI/CD pipeline.
D (Correct Answer) - Run a vulnerability security scanner as part of your continuous-integration/continuous-delivery (CI/CD) pipeline – it is generally considered good practice to run security
scanning of the application and infrastructure as part of the CI/CD pipeline.
B - Ensure you have stubs to unit test all interfaces between components – this is just one specific approach to unit testing your code, not a way to detect security errors.
C and E – These processes are not required for an agile practice and would slow down rather than speed up releases. They also do not specifically add value for detecting security errors.
Question 14 (Correct)
Domain: Other
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?
A. Configure Operations Suite (formerly Stackdriver) Monitoring for all Projects, and export to BigQuery.
B. Configure Operations Suite (formerly Stackdriver) Monitoring for all Projects with the default retention policies.
C. Configure Operations Suite (formerly Stackdriver) Monitoring for all Projects, and export to Google Cloud Storage. (correct)
D. Grant the security team access to the logs in each Project.
Explanation:
Correct Answer C
Explanation
B and D can be quickly ruled out because neither is a solution for the requirement "retained for 5 years".
Between A and C, the difference is where to store the data: BigQuery or Cloud Storage. Since the main concern is the extended retention period, C (Correct Answer) is the better choice, and "retained for 5 years for
future analysis" further qualifies it, for example by using the Archive storage class.
As for BigQuery, while it is also low-cost storage, its main purpose is analysis. Also, logs stored in Cloud Storage are easy to load into BigQuery, or to query directly in place, if and whenever
needed.
Additional Resource
Operations Suite (formerly Stackdriver) Quotas and Limits for Monitoring https://cloud.google.com/monitoring/quotas
Question 15 (Correct)
Domain: Other
You've created a Kubernetes Engine cluster named "mycluster", which has a node pool named 'primary-node-pool'. You've realized that you need to increase the total number of nodes in the pool from 10 to 20 to meet
capacity demands. What is the command to change the number of nodes in your pool?
Explanation:
Correct Answer B
Feedback:
B (Correct Answer). The command to resize an existing GKE node pool is:
gcloud container clusters resize NAME (--num-nodes=NUM_NODES | --size=NUM_NODES) [--async] [--node-pool=NODE_POOL] [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
Option D “gcloud container clusters update”. This updates cluster settings for an existing container cluster. You can use this command to specify --max-nodes --min-nodes for autoscaling purpose.
Also “--num-nodes” is a wrong flag option for this command. https://cloud.google.com/sdk/gcloud/reference/container/clusters/update
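Applied to this scenario, the resize command might look like the following (the zone flag is illustrative; the command prompts for confirmation):
gcloud container clusters resize mycluster \
    --node-pool=primary-node-pool \
    --num-nodes=20 \
    --zone=us-central1-a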
Reference
gcloud container clusters resize - resizes an existing cluster for running containers https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
Question 16 (Correct)
Domain: Other
Based on MountKirk Games' technical requirements, what GCP services/infrastructure will they use to host their game backend platform?
Explanation:
Correct Answer: B
Since the case study clearly mentions that "They plan to deploy the game’s backend on Google Kubernetes Engine so they can scale rapidly", hence Google Kubernetes Engine can be used.
Case Study
GKE Documentation
Question 17 (Correct)
Domain: Other
Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly.
* Services are deployed redundantly across multiple regions in the US and Europe.
A. Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager
B. Google Cloud Storage, Google App Engine, Google Network Load Balancer
C. Google Container Registry, Google Kubernetes Engine, Google HTTP(s) Load Balancer (correct)
D. Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine
Explanation:
Correct Answer: C
Google Container Registry, Google Kubernetes Engine, Google HTTP(s) Load Balancer.
As per the requirements, Google Container Registry and Google Kubernetes Engine meet the requirements below:
“Their architecture includes many small services that they want to be able to update and roll back quickly”;
* Services are deployed redundantly across multiple regions in the US and Europe.
All other answers provide an incomplete or incorrect solution and don't meet the requirements.
Which of the following are characteristics of GCP VPC subnets? Choose 2 answers.
A. Each subnet can span over multiple availability zones within a region to provide a high availability environment. (correct)
B. Each subnet maps to a single Availability Zone.
C. CIDR block mask of /25 is the smallest range supported.
D. By default, all subnets can route between each other, whether they are private or public. (correct)
Explanation:
A (Correct) - Each subnet can span over multiple Availability Zones to provide a high-availability environment.
Each VPC network consists of one or more useful IP range partitions called subnetworks or subnets. Each subnet is associated with a region. Networks can contain one or more subnets in any given
region. Subnets are regional resources.
D (Correct Answer) - By default, all subnets can route between each other, whether they are private or public.
Because subnets are regional resources, instances can have their network interfaces associated with any subnet in the same region that contains their zones.
Resources within a VPC network can communicate with one another using internal (private) IPv4 addresses, subject to applicable network firewall rules.
The default network includes a “default-allow-internal” rule, which permits instance-to-instance communication within the network.
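Assuming the project still has the auto-created default network, the rule mentioned above can be inspected directly:
# Show the pre-populated rule that allows instance-to-instance traffic
gcloud compute firewall-rules describe default-allow-internal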
Reference:
https://cloud.google.com/vpc/docs/vpc
Question 19 (Correct)
Domain: Other
Which of TerramEarth's legacy enterprise processes in their existing data centers would experience significant change as a result of increased Google Cloud Platform adoption?
A. Opex (operational expenditures) and capex (capital expenditures) allocation, LAN changes, capacity planning.
B. Capacity planning, TCO calculations, Opex and Capex allocation. (correct)
C. Capacity planning, utilization measurement, data center expansion.
D. Data Center expansion, TCO calculations, utilization measurement.
Explanation:
Correct Answer B
Feedback
A – Opex and capex allocation is part of the answer, but GCP adoption would not cause significant LAN changes.
B (Correct Answer) - Capacity planning, TCO calculations, Opex and Capex allocation - these are all within the scope of concern.
From the case study it can be concluded that management (the CXOs) are all concerned with rapid provisioning of resources (infrastructure) for growth as well as with cost management, such as cost optimization of
infrastructure, trading up-front capital expenditures (capex) for ongoing operating expenditures (opex), and total cost of ownership (TCO).
C - Capacity planning, utilization measurement, data center expansion – their data centers would be shrinking rather than expanding with increasing Google Cloud Platform adoption.
D - Data Center expansion, TCO calculations, utilization measurement – "Data Center expansion" is the wrong choice; "utilization measurement" is not necessarily a significant change caused by GCP
adoption. Also, this answer is not as complete as answer B.
Additional Resource
Please read the TerramEarth case study carefully to draw your own conclusions applicable to this question and its answers.
Question 20 (Correct)
Domain: Other
You have a mission-critical database running on an instance on Google Compute Engine. You need to automate a database backup once per day to another disk. The database must remain fully
operational and functional and can have no downtime. How can you best perform an automated backup of the database with minimal downtime and minimal costs?
A. Use a cron job to schedule your application to backup the database to another persistent disk. (correct)
B. Use a cron job to schedule a disk snapshot once per day.
C. Write the database to two different disk locations simultaneously, then schedule a snapshot of the secondary disk, which will allow the primary disk to continue running.
D. Use the automated snapshot service on Compute Engine to schedule a snapshot.
Explanation:
Correct answer A
To both minimize costs (no extra disks) and minimize downtime (the database cannot be frozen), backing up just the database to another persistent disk using a cron job is the preferred answer. It is also
possible to back up the database to a Cloud Storage bucket instead of a disk, which would be cheaper for the same amount of storage.
B and D both involve some database downtime due to the snapshot.
Answer C would be hard to implement and would double resource usage. You would also lose data consistency if you do not freeze the primary database when you take the snapshot of the secondary database.
Overall, it is not worth the effort for this task when a better solution such as answer A is available.
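A hypothetical crontab entry for answer A might look like the following, assuming a second persistent disk is mounted at /mnt/backup and credentials come from ~/.my.cnf; --single-transaction keeps the InnoDB database fully available during the dump:
# Dump all databases at 02:00 every day to the backup disk (% must be escaped in crontab)
0 2 * * * mysqldump --single-transaction --all-databases | gzip > /mnt/backup/db-$(date +\%F).sql.gz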
Question 21 (Correct)
Domain: Other
Once a month Terram Earth’s vehicles are serviced and the data is downloaded from the maintenance port. The data analysts would want to query the large amount of data collected from these
vehicles and analyze the overall condition of the vehicles. Terram Earth’s management is looking at a solution that is cost-effective and would scale for future requirements. Please select the right
choice based on the requirement.
A. Load the data from Cloud Storage to Bigquery and Run queries based on date using an appropriate filter on DATE for the data stored in Bigquery based on the date partitioned table. (correct)
B. Store the data in Bigtable and run queries on it.
C. Load the data from Cloud Storage to Bigquery and run queries on Bigquery.
D. Run queries against the data stored in Cloud Spanner.
Explanation:
Correct Answer: A
Option A is correct. Running queries with a date filter against date-partitioned tables is an efficient and cost-optimized solution.
Option B is incorrect. While Bigtable can provide low latency for a high volume of reads and writes, that isn't a requirement here.
Option C is incorrect. One of the requirements is for the solution to be cost-effective; loading the data from Cloud Storage into BigQuery and running queries over unpartitioned tables scans more data than
necessary, so it is not a cost- or performance-optimized solution.
Option D is incorrect. Cloud Spanner is a transactional database, the requirement suggests a data warehouse service.
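With a date-partitioned table, a filter on the partitioning column prunes the scan to the relevant partitions; a hypothetical example (project, dataset, table, and column names are illustrative):
bq query --use_legacy_sql=false '
SELECT vehicle_id, AVG(engine_temp) AS avg_engine_temp
FROM `my-project.telemetry.vehicle_service_data`
WHERE service_date BETWEEN "2023-01-01" AND "2023-01-31"
GROUP BY vehicle_id'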
Reference :
Question 22 (Correct)
Domain: Other
Your company’s architecture is shown in the diagram. You want to automatically and simultaneously deploy new code to each Google Container Engine cluster. Which method should you use?
Explanation:
Correct Answer: D
Option A is incorrect. Since a managed, native solution is available among the options, it is preferred to pick that option.
Option B is incorrect. Federated mode allows for deployment in a federated way but does not do anything beyond that: you still need a tool like Jenkins to provide the "automated" part of
the question, and with Jenkins you can accomplish the goal without needing federation to be enabled.
Option C is incorrect. This may work in very simple examples, but as complexity grows this will become unmanageable.
Option D is correct. You can automate the deployment of your application to GKE by creating a trigger in Cloud Build. You can configure triggers to build and deploy images whenever you push
changes to your code.
https://cloud.google.com/build/docs/deploying-builds/deploy-gke
Question 23 (Correct)
Domain: Other
Your company wants to reduce cost on infrequently accessed data by moving it to the cloud. The data will still be accessed approximately once a month to refresh historical charts. In addition, data
older than 5 years is no longer needed. How should you store and manage the data?
A. In Google Cloud Storage and stored in a Multi-Regional bucket. Set an Object Lifecycle Management policy to delete data older than 5 years.
B. In Google Cloud Storage and stored in a Multi-Regional bucket. Set an Object Lifecycle Management policy to change the storage class to Coldline for data older than 5 years.
C. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle Management policy to delete data older than 5 years. (correct)
D. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle Management policy to change the storage class to Coldline for data older than 5 years.
Explanation:
Correct Answer C
Feedback
C (Correct Answer) - The access pattern fits Nearline storage class requirements and Nearline is a more cost-effective storage approach than Multi-Regional. The object lifecycle management policy
to delete data is correct versus changing the storage class to Coldline.
A and B – For the requirement "accessed approximately once a month", A and B can be quickly eliminated because they use the Multi-Regional storage class instead of the Nearline storage class.
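For answer C, the deletion policy on the Nearline bucket could be expressed as a lifecycle rule such as the following sketch (1825 days is roughly 5 years):
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 1825}
    }
  ]
}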
Question 24 (Correct)
Domain: Other
You are transferring a very large number of small files to Google Cloud Storage from an on-premises location. You need to speed up the transfer of your files. Assuming a fast network connection,
what two actions can you do to help speed up the process?
Explanation:
Feedback
B - Use the -r option for large transfers: the -R and -r options are synonymous and cause directories, buckets, and bucket subdirectories to be copied recursively. This alone does not speed up the transfer of many small files.
C - Copy the files in bigger pieces at a time: not applicable to the question requirements.
If you have a large number of files to transfer, you might want to use the gsutil -m option to perform a parallel (multi-threaded/multi-processing) copy.
A (Correct answer) - Compress and combine files before transferring. Compressing and combining smaller files into fewer larger files is also a best practice for speeding up transfer speeds because
it saves network bandwidth and space in Google Cloud Storage.
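Putting the two actions together, a hypothetical transfer might combine and compress the files and then copy them in parallel (bucket and paths are illustrative):
# Combine and compress many small files into a few archives, then upload in parallel
tar czf logs-batch-001.tar.gz ./small-files/batch-001/
gsutil -m cp logs-batch-001.tar.gz gs://example-transfer-bucket/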
Reference
Question 25 (Correct)
Domain: Other
A financial company has recently moved from on-premises to Google Cloud Platform. They have started to use BigQuery for data analysis; while BigQuery's performance has been good, they
are concerned about controlling the cost of BigQuery usage. Select the relevant BigQuery best practices for controlling costs from the options given below. (Select 3)
Explanation:
Using SELECT * is the most expensive way to query data. When you use SELECT *, BigQuery does a full scan of every column in the table. Queries are billed according to the number of bytes
read. To estimate costs before running a query, you can use the --dry_run flag in the CLI. If possible, partition your BigQuery tables by date. Partitioning your tables allows you to query relevant
subsets of data, which improves performance and reduces costs.
Option C is an incorrect choice because applying a LIMIT clause to a query does not affect the amount of data that is read. It merely limits the result set that is output. You are billed for reading all
bytes in the entire table as indicated by the query.
Option E is an incorrect choice because keeping large result sets in BigQuery storage has a cost. If you don't need permanent access to the results, use the default table expiration to automatically
delete the data for you.
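A quick illustration of the dry-run estimate from the CLI; the query below is hypothetical, and the command reports how many bytes would be processed without actually running the query or incurring cost:
bq query --dry_run --use_legacy_sql=false \
  'SELECT account_id, amount FROM `my-project.finance.transactions` WHERE tx_date = "2023-06-01"'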
Reference(s) :
https://cloud.google.com/bigquery/docs/best-practices-costs
Question 26 (Correct)
Domain: Other
The security team has disabled external SSH access into production virtual machines in GCP. The operations team needs to remotely manage the VMs and other resources. What can they do?
A. Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.
B. Grant the operations team access to use Google Cloud Shell.
C. Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.
D. Configure a VPN connection to GCP to allow SSH access to the cloud VMs. (correct)
Explanation:
Correct Answer D
Option D - Configure a VPN connection to GCP to allow SSH access to the cloud VMs.
The question says that the "blocking" happens on GCP, specifically in production environments. That means there are firewall rules preventing access from public IPs on port 22. Therefore, using
a VPN and configuring a firewall rule that allows TCP connections from RFC 1918 addresses on port 22 would work best. In this case, answer D is better.
Option B - Cloud Shell connects over external IP addresses, so it will not have access if external port 22 is blocked.
Options A and C are possible but would require more setup than is worthwhile for the need.
Question 27 (Correct)
Domain: Other
EHR Healthcare wants to create a single, globally accessible, high-performance SQL transactional database that provides EHRs to all customers with minimal latency and allows their management.
A. Cloud Spanner (correct)
B. Cloud SQL with MySQL and global Read Replicas
C. Cloud SQL with SQL Server and global Read Replicas
D. Firestore replicated in multiple regions
Explanation:
Correct answer: A
Cloud Spanner is a fully managed, globally distributed, ACID-compliant relational database with unlimited read-write scale, strong consistency, and up to 99.999% availability. It handles replicas,
sharding, and transaction processing.
B is wrong because the DB must be globally available in read-write mode. In addition, Cloud SQL may be too small a solution for such a growing business.
C is also wrong because SQL Server does not support global read replicas.
D is wrong because Firestore is not a SQL transactional database.
https://cloud.google.com/spanner/docs/replication
Question 28 (Correct)
Domain: Other
Helicopter Racing League (HRL) wants to collect and process the information on stored video content and user behavior (number of accesses, connection times, site interactions, social
engagement). All the information collected must then be processed in live dashboards and stored in such a way that it can be subsequently analyzed and be a source of insights and forecasts.
Which of the following techniques are suitable for these solutions (pick 4 events - ingestion - processing and storage ) in the easiest and fastest way?
Explanation:
You need to get data as soon as it is produced, and the best and simplest method for this is Pub/Sub, which can be triggered by Cloud Storage events and by any other GCP processing in a decoupled way.
Dataflow is the tool for both streaming and batch processing of data.
The aim of Video Intelligence APIs is to get many types of metadata from Video sources.
BigQuery is the main analytics tool for getting data insight for future ML processing.
A is wrong because Cloud Run is event-driven but requires you to write code, and it is not designed for application decoupling in a general way.
D is wrong because Bigtable is a powerful NoSQL database, so its aim is to store data, not to process it.
F is wrong because AutoML Video is an ML classification tool, and the Video Intelligence API can detect a broader range of metadata.
G is wrong because Cloud SQL is a regional SQL DB and not a serverless Analytics like BigQuery
https://cloud.google.com/architecture/building-a-streaming-video-analytics-pipeline
https://cloud.google.com/video-intelligence/docs/features
Question 29 (Correct)
Domain: Other
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must
Explanation:
The best approach is elimination: start from any requirement; for example, you may start by eliminating on a requirement that is not supported by the components that appear repeatedly (e.g., GCE and GKE)
in the options.
If we start from "Be based on open-source technology for cloud portability", we know that one of the unique features of Container Engine (now Kubernetes Engine, GKE for short) is being "open-source
and cloud portable". Now we have the following options left:
At this point, if you have the experience or knowledge, you are probably able to make the right decision. If not, then following the same approach, we can choose either the load-balancing or the CI/CD requirement.
For example, if we choose CI/CD, then the only answer is: Answer D. Google Kubernetes Engine, Jenkins, and Helm.
At first glance it appears answer D does not meet "all of his requirements", since it seems to miss "Route network traffic to specific services based on URL", an obvious feature of Cloud Load
Balancing.
Looking further, we know that, unlike Compute Engine, Kubernetes Engine offers integrated support for two types of cloud load balancing for a publicly accessible application. One of them,
HTTP(S) load balancing, is designed to terminate HTTP(S) requests and can make better context-aware load-balancing decisions.
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
For your information: Helm is a package manager for Kubernetes templates. It allows for defining the Kubernetes templates required to run an application and then replace the application options
dynamically. It bundles all the templates in `tgz` packages called charts. https://helm.sh/
Note:
The first requirement in the question is "Open source technology for cloud portability". Google Kubernetes Engine (GKE) is the preferred choice for this requirement.
Requirement 3 in the question is continuous delivery. Hence the correct choice will be Google Container Engine, Jenkins, and Helm.
One more requirement here, "Route network traffic to specific services based on URL", is the requirement that makes you think of selecting Cloud Load Balancing.
Kubernetes Engine offers integrated support for two types of cloud load balancing for a publicly accessible application.
Reference link:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
Question 30 (Correct)
Domain: Other
Your company has decided to move to Google Cloud Platform. You are asked to explore the GCP environment. You have created a new project called "test-project-ch3". The technical team would
like to know which services are enabled when you create a project in GCP. Please select the right command to list the enabled services, and also select the services that are enabled when you create a
project. Select any two.
A. ” gcloud services list “ is the command, the services that are enabled when you create a project are BigQuery API, Google Cloud APIs, Operations Suite Logging API, Operations Suite
Monitoring API, Datastore API, Service Management API, Service Usage API, Cloud SQL API, Cloud Storage JSON API & Cloud Storage API. (correct)
B. "gcloud services list --enabled “ is the command, the services that are enabled when you create a project are BigQuery API, Google Cloud APIs, Operations Suite Logging API, Operations
Suite Monitoring API, Datastore API, Service Management API, Service Usage API, Cloud SQL API, Cloud Storage JSON API & Cloud Storage API. (correct)
C. ”gcloud services list --available “ is the command, the services that are enabled when you create a project are BigQuery API, Compute Engine API, Operations Suite (formerly Stackdriver)
Logging API, Operations Suite (formerly Stackdriver) Monitoring API, Datastore API, Service Management API, Service Usage API, Cloud SQL API, Cloud Storage JSON API & Cloud
Storage API.
D. ”gcloud services list --upservices“ is the command, the services that are enabled when you create a project are BigQuery API, Compute Engine API, Operations Suite (formerly
Stackdriver) Logging API, Operations Suite (formerly Stackdriver) Monitoring API, Datastore API, Service Management API, Service Usage API, Cloud SQL API ,Cloud Storage JSON API
& Cloud Storage API.
Explanation:
To list the services the current project has enabled for consumption, run: gcloud services list --enabled (--enabled is the default when no flag is passed). The services that are enabled when
you create a project are BigQuery API, Google Cloud APIs, Operations Suite (formerly Stackdriver) Logging API, Operations Suite (formerly Stackdriver) Monitoring API, Datastore API, Service
Management API, Service Usage API, Cloud SQL API, Cloud Storage JSON API & Cloud Storage API.
Option C is incorrect because "gcloud services list --available" lists the services the current project can enable for consumption, not the ones already enabled. Also, the Compute Engine API isn't enabled when you create a
project; it gets enabled once you open Compute Engine in the console.
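For comparison, the two list modes can be run side by side (the --project flag is optional if the project is already the active configuration):
# Services already enabled on the project (--enabled is the default behavior)
gcloud services list --enabled --project=test-project-ch3
# Every service that could be enabled on the project
gcloud services list --available --project=test-project-ch3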
References:
https://cloud.google.com/sdk/gcloud/reference/services/list
Question 31 (Correct)
Domain: Other
Your customer is moving an existing corporate application from an on-premises data center to Google Cloud Platform . The business owner requires minimal user disruption. There are strict security
team requirements for storing passwords. What authentication strategy should they use?
Explanation:
Correct Answer D
Feedback
D is Correct answer - Federate authentication via SAML 2.0 to the existing Identity Provider. This meets both “minimal user disruption” and “strict security team requirements for storing
passwords”
Users' passwords are stored on-premises and authentication happens on-premises, so there is no user disruption; on successful authentication, an access token is issued to access the application or GCP services.
Option A - Use G Suite Password Sync to replicate passwords into Google - This is a violation against “strict security team requirements for storing passwords”
https://support.google.com/a/answer/2611859?hl=en
Option B - Ask users to set their Google password to match their corporate password – this violates both "minimal user disruption" and "strict security team requirements for storing passwords".
Option C - Provision users in Google using the Google Cloud Directory Sync tool. With Google Cloud Directory Sync, only SHA-1 and MD5 unsalted password hashes get synced from the source, which
may break the strict password requirements. Your credential details would then be stored in two places.
Question 32 (Correct)
Domain: Other
When creating firewall rules, what forms of segmentation can narrow which resources the rule is applied to? (Choose all that apply)
Explanation:
You can restrict network access on the firewall by network tags and network ranges/subnets.
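For example, both forms of segmentation appear as flags when creating a rule with gcloud; the rule below is a hypothetical sketch that applies only to instances tagged web-frontend and only to traffic from an internal range:
gcloud compute firewall-rules create allow-internal-https \
    --network=default \
    --allow=tcp:443 \
    --source-ranges=10.0.0.0/8 \
    --target-tags=web-frontend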
Question 33 (Correct)
Domain: Other
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform. Improvement to the QA and Test processes accomplished an
80% reduction. Which additional two approaches can you take to further reduce the rollbacks? (Choose two)
Explanation:
A (Correct Answer) - The blue-green model allows for extensive testing of the application in the green environment before sending traffic to it. Typically, the two environments are identical
otherwise which gives the highest level of testing assurance.
B (Correct Answer) - Microservices allows for smaller, more incremental rollouts of updates (each microservice can be updated individually) which will reduce the likelihood of an error in each
rollout.
C is incorrect - This would remove a well-proven step from the general release strategy; a canary release platform is not a replacement for QA, it should be additive.
D is incorrect - This doesn't really help the rollout strategy; there is no inherent property of a relational database that makes it more subject to failed releases than any other type of data storage.
E is incorrect - This doesn't really help either, since NoSQL databases do not offer anything over relational databases that would help with release quality.
Question 34 (Correct)
Domain: Other
Helicopter Racing League (HRL) wants to migrate their existing cloud service to the GCP platform with solutions that allow them to use and analyze video of the races both in real-time and
recorded for broadcasting, on-demand archive, forecasts, and deeper insights.
During a race filming, how can you manage both live playbacks of the video and live annotations so that they are immediately accessible to users without coding (pick 2)?
Explanation:
Correct Answers: B and D
D is correct because HTTP Live Streaming is a technology from Apple for sending live and on‐demand audio and video to a broad range of devices.
It supports both live broadcasts and prerecorded content, from storage and CDN.
B is correct because Video Intelligence API Streaming API is capable of analyzing and getting important metadata from live media, using the AIStreamer ingestion library.
A is wrong because HTTP protocol alone cannot manage live streaming video.
C is wrong because Dataflow manages streaming data pipelines but cannot derive metadata from binary data, unless you use customized code.
E is wrong because Pub/Sub could ingest metadata but cannot analyze videos or extract labels and other information from them.
https://cloud.google.com/video-intelligence/docs/streaming/live-streaming-overview
https://cloud.google.com/blog/products/data-analytics/streaming-video-using-cloud-data-platform
https://developer.apple.com/streaming/
Question 35 (Correct)
Domain: Other
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also
retaining that data as a long-term disaster recovery backup. Which two steps should they take? Choose 2 answers
The order should be: upload the log files into Google Cloud Storage first, and then load the logs into Google BigQuery.
E (Correct answer) - Upload log files into Google Cloud Storage. Cloud Storage is the best solution for a long-term disaster recovery backup. It also meets the low-risk requirement of preventing potential accidental data loss and modification.
A (Correct answer) - Load logs into Google BigQuery. BigQuery is the most suitable solution for analytics against large amounts of data; you can also run SQL queries directly against data in Cloud
Storage.
B - Import logs into Google Operations Suite (formerly Stackdriver): Operations Suite is not a suitable solution for a long-term disaster recovery backup.
C - Insert logs into Google Cloud Bigtable: Bigtable is not a suitable solution for a long-term disaster recovery backup.
D - Load logs into Google Cloud SQL: Cloud SQL is a relational database designed for transactional (OLTP) CRUD processing, suitable for datasets of less than 10 TB.
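A hypothetical two-step sketch of answers E and A, with illustrative bucket, dataset, and table names:
# Step 1: copy the archived logs to Cloud Storage in parallel
gsutil -m cp -r /archive/logs gs://example-log-archive/

# Step 2: load them into BigQuery for analysis (schema auto-detected here)
bq load --source_format=CSV --autodetect log_analytics.log_events "gs://example-log-archive/logs/*.csv"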
Note:
-------------
BigQuery is Google's cloud-based data warehousing solution. It targets big-picture analysis and can query huge volumes of data in a short time. Because the data is stored in a columnar format,
it is much faster at scanning large amounts of data than Bigtable.
BigQuery scales to petabytes and is a great enterprise data warehouse for analytics. BigQuery is serverless.
Bigtable is designed with a NoSQL architecture but still uses a row-based data format. With read/write latency under 10 milliseconds, it is good for applications with frequent data ingestion. It can
scale to hundreds of petabytes and handle millions of operations per second.
-----------------------
Question 36 (Correct)
Domain: Other
EHR Healthcare needs to set up a general DR policy for all its relational databases, distributed across all its on-premises data centers.
The activity is preparatory to the migration to the Cloud, which includes managed services.
EHR Healthcare wants to migrate data into managed services in the future without major impact on applications, which will all need to be containerized.
Later it will adopt a global DB solution.
The DR will be the first step towards migration
The requirements are RPO and RTO in less than 1 hour.
Which of the following solutions do you think are the best (Select TWO)?
A. Create daily snapshots of the Database and transfer them to Cloud Storage
B. Create MySQL external replica promotion migration into Cloud SQL (correct)
C. Create a SQL Server external replica promotion migration into Cloud SQL
D. Save Backup folders of the SQL Server Databases to Cloud Storage with gsutil rsync with hourly update procs to Cloud SQL (correct)
E. Save Backup folders of the MySQL Databases to Cloud Storage with gsutil rsync with daily update procs to Cloud SQL
Explanation:
For a MySQL database it is possible to create a Cloud SQL read replica of the local DB. The Cloud SQL read replica is synchronized asynchronously and may be promoted to the primary DB: an easy and
elegant solution.
With SQL Server this is not possible; Cloud SQL read replicas for SQL Server are not supported, so the traditional way (incremental backups and transaction logs) has to be followed.
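For answer D, the backup folders could be mirrored to Cloud Storage on an hourly schedule; a hypothetical sketch (bucket name and path are illustrative):
# Mirror the local SQL Server backup folder to Cloud Storage every hour (e.g. from cron)
gsutil -m rsync -r /var/backups/sqlserver gs://ehr-dr-backups/sqlserver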
https://cloud.google.com/architecture/dr-scenarios-for-data
https://cloud.google.com/architecture/disaster-recovery-for-microsoft-sql-server
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets
https://cloud.google.com/architecture/migrating-mysql-to-cloudsql-concept#external_replica_promotion_migration
Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the
future. What should you do? Select one.
Explanation:
Correct Answer B
Feedback
A – Deploy changes to a small subset of users before rolling out to production. This is the practice in canary deployment. Bugs slipping into production may be caused by discrepancies between the
test/staging and production environments or the test data. With a canary deployment or canary test, you have the ability to test code with live data at any time, so you increase the chance of discovering a
bug earlier and reduce the risk of bringing the bug into production, with minimal impact and downtime thanks to quick rollback. But a canary deployment will not be able to catch the performance bugs in
this environment.
B (Correct Answer) – Increase the load on your test and staging environments. Increasing the load in your test and staging environments will help discover bugs related to
performance issues.
C and D – Deploy smaller or fewer changes to production. Although these are generally good agile practices for cloud-native microservices, they do not adjust your test and
deployment procedures to discover the bugs before production. The bug can still slip into production no matter how small or how often you test the changes, if you keep the same environment, the same test
data, and the same procedures.
Question 38 (Incorrect)
Domain: Other
Helicopter Racing League (HRL) wants to expand its use of managed AI and ML services to facilitate race predictions. Currently, race predictions are performed using TensorFlow running on VMs
in the current public cloud provider.
Which GCP Services could host HRL TensorFlow models in a fully managed way (pick 2)?
Explanation:
Google Cloud has several services for training and hosting TensorFlow models in the cloud in a managed way:
AI Platform (now Vertex AI) can train, host, and use your ML model to make predictions at scale. You may choose the machine configuration for the instances, which are fully managed by GCP. Vertex AI is an
integrated suite of products that comprises the AI Platform functions together with pre-trained models, AutoML, and custom tooling.
BigQuery ML can make predictions with imported TensorFlow models, in addition to many other kinds of models.
A is wrong because AutoML Vision Edge helps deploy models that run on local devices such as smartphones and IoT devices; in this case, cloud services are required.
B is wrong because DialogFlow is for Natural Language Dialogs, not Tensorflow applied to videos
D is wrong because Kubeflow is an open source library and tools for machine learning (ML) workflow deployments on Kubernetes
F is wrong because TensorFlow Enterprise provides a scalable and managed TensorFlow development environment, not a hosting service.
https://cloud.google.com/ai-platform/docs/technical-overview
https://cloud.google.com/vertex-ai
https://cloud.google.com/bigquery-ml/docs/making-predictions-with-imported-tensorflow-models
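As a hedged sketch of the two managed options (model, bucket and dataset names are placeholders), hosting a SavedModel on AI Platform / Vertex AI and importing the same model into BigQuery ML might look like this:
# AI Platform online prediction (classic gcloud surface)
gcloud ai-platform models create hrl_predictor --regions=us-central1
gcloud ai-platform versions create v1 --model=hrl_predictor --origin=gs://example-bucket/saved_model/ --runtime-version=2.8 --framework=tensorflow --python-version=3.7
# BigQuery ML: import the SavedModel and predict with SQL
bq query --use_legacy_sql=false "CREATE OR REPLACE MODEL example_dataset.hrl_model OPTIONS (MODEL_TYPE='TENSORFLOW', MODEL_PATH='gs://example-bucket/saved_model/*')"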
Question 39Correct
Domain: Other
What is the best practice for separating responsibilities and access for production and development environments?
A. Separate project for each environment, each team only has access to their project.right
B. Separate project for each environment, both teams have access to both projects.
C. Both environments use the same project, but different VPC's.
D. Both environments use the same project, just note which resources are in use by which group.
Explanation:
Correct Answer A
Explanation
A (Correct answer) - Separate project for each environment, each team only has access to their project.
For least privilege and separation of duties, the best practice is to put each environment in its own project; the development and production teams each get their own group of users, and each team is granted access only to its own project.
· You should not keep development and production together regardless of how you organize the resources inside a shared project. Use a separate project for each environment, each associated with a different group of users: projects, not resource-level bookkeeping, are the boundary for isolating user access to resources.
· A shared VPC lets each team manage their own application resources while the applications communicate with each other securely over RFC 1918 address space, but VPCs isolate network resources, not users or service accounts.
Answer B separates the environments into projects but gives both teams access to both projects, so it does not provide separation of duties.
Answer C keeps both environments in the same project and tries to isolate access through network separation, which does not restrict user access.
Answer D keeps both environments in the same project and merely notes which group uses which resources, which enforces nothing.
You may grant roles to groups of users by setting policies at the organization level, the project level, or (in some cases) the resource level (for example, existing Cloud Storage and BigQuery ACL systems, as well as Pub/Sub topics).
The best practice is to set policies at the organization level and at the project level rather than at the resource level, because newly added resources then automatically inherit the policies of their parent. For example, as new virtual machines are added to the project through autoscaling, they automatically inherit the policy on the project.
https://cloud.google.com/iam/docs/resource-hierarchy-access-control#best_practices
Additional Resources:
To recap: IAM lets you control who (users) has what access (roles) to which resources by setting IAM policies. IAM policies grant specific role(s) to a user giving the user certain permissions.
https://cloud.google.com/resource-manager/docs/access-control-org
https://cloud.google.com/iam/docs/resource-hierarchy-access-control#background
Question 40Correct
Domain: Other
Your developer currently maintains a J2EE application. Which two considerations should he take into account when moving his application to the cloud to meet demand and minimize overhead? (Choose two)
Explanation:
Explanation
J2EE is Java, which can run on App Engine. He can also configure his application to run on a managed instance group for scaling, as long as he configures a data storage backend for the group as
well.
Question 41Correct
Domain: Other
You work in a small company where everyone should be able to view all resources of a specific project. You want to grant them access following Google’s recommended practices. What should you
do?
A. Create a script that uses "gcloud projects add-iam-policy-binding" for all users’ email addresses and the Project Viewer role.
B. Create a script that uses "gcloud iam roles create" for all users’ email addresses and the Project Viewer role.
C. Create a new Google Group and add all users to the group. Use "gcloud projects add-iam-policy-binding" with the Project Viewer role and Group email address.right
D. Create a new Google Group and add all members to the group. Use "gcloud iam roles create" with the Project Viewer role and Group email address.
Explanation:
Correct answer C
Feedback
A is not correct because binding the role to every user's email address individually does not follow Google's recommendation to manage access through groups, and it is harder to maintain as people join or leave.
B is not correct because this command is to create roles, not to assign them.
C is correct because adding all users to a Google Group and binding the Project Viewer role to that group follows the recommended practice; access is then managed in one place. (A hedged example of the binding command follows below.)
D is not correct because this command is to create roles, not to assign them.
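For illustration (the project ID and group address are placeholders), the binding in answer C could be applied with:
gcloud projects add-iam-policy-binding example-project --member="group:all-staff@example.com" --role="roles/viewer"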
Reference
Question 42Correct
Domain: Other
One of your primary business objectives is being able to trust the data stored in your application. You want to log all changes to the application data. How can you design your logging system to
verify the authenticity of your logs?
A. Create a JSON dump of each log entry and store it in Google Cloud Storage.
B. Write the log concurrently in the cloud and on premises.
C. Digitally sign each timestamp and log entry and store the signature.right
D. Use an SQL database and limit who can modify the log table.
Explanation:
Correct Answer C
Feedback
C (Correct answer) - Digitally sign each timestamp and log entry and store the signature.
Answers A, B, and D do not add anything that would let you verify the authenticity of your logs. Besides, logs are best exported to Cloud Storage, BigQuery, and Pub/Sub; a SQL database is neither a good export target nor a good store for log data.
Simplified Explanation
To verify that your logs have not been tampered with or forged, you can hash each timestamp and log entry to produce a digest and then digitally sign the digest with a private key to generate a signature. Anybody with your public key can verify that signature, confirm it was made with your private key, and tell whether the timestamp or log entry was modified. (A minimal openssl sketch follows after the reference link below.)
You can put the signature files into a folder separate from the log files. This separation enables you to enforce granular security policies.
Ref URL: https://cloud.google.com/logging/docs/reference/tools/gcloud-logging
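A minimal sketch of this idea with openssl (file and key names are hypothetical):
# hash and sign the day's log file with a private key
openssl dgst -sha256 -sign signer-private.pem -out app.log.sig app.log
# anyone holding the public key can later verify the log was not altered
openssl dgst -sha256 -verify signer-public.pem -signature app.log.sig app.log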
Question 43Correct
Domain: Other
What is the command for creating a storage bucket that has once per month access and is named 'archive_bucket'?
Explanation:
Correct answer C
The mb command makes (creates) the bucket. Nearline buckets are for data accessed about once per month. Coldline buckets are for data accessed at most once per 90 days and incur additional retrieval charges for more frequent access. (An example command is shown after the synopsis notes below.)
Further Explanation
Synopsis
If you don't specify a -c option, the bucket is created with the default storage class Standard Storage, which is equivalent to Multi-Regional Storage or Regional Storage, depending on whether the
bucket was created in a multi-regional location or regional location, respectively.
If you don't specify a -l option, the bucket is created in the default location (US). -l option can be any multi-regional or regional location.
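Assuming Nearline is the intended class (the region here is just an example, and the bucket name must be globally unique), the command would look like:
gsutil mb -c nearline -l us-east1 gs://archive_bucket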
Reference
Question 44Correct
Domain: Other
Explanation:
Correct Answer: C
System Pods have to run on non-preemptible VMs; otherwise they could be disrupted when a preemptible node is reclaimed.
So you have to avoid having only node pools of preemptible GPU VMs. In this case, the taint nvidia.com/gpu=present:NoSchedule should be removed.
It is fine to have at least one node pool of non-preemptible VMs alongside the preemptible GPU VMs.
Option A is wrong because it is fine to have a node pool with preemptible VMs when you also have at least one node pool of non-preemptible VMs.
Option B is wrong because it is fine to use preemptible GPU VMs.
Option D is wrong because System Pods would be at risk when preemptible nodes are removed.
(A hedged gcloud sketch of such a node-pool layout follows after the link below.)
https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms#gpu_preemptible_node_taints
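For illustration only (cluster name, pool names and GPU type are placeholders), such a layout could be created with:
# non-preemptible pool that can host system Pods
gcloud container node-pools create default-pool --cluster=example-cluster --num-nodes=3
# preemptible GPU pool for the batch/ML workload
gcloud container node-pools create gpu-preemptible-pool --cluster=example-cluster --preemptible --accelerator=type=nvidia-tesla-t4,count=1 --num-nodes=2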
Question 45Correct
Domain: Other
To set up a virtual private network between your office network and Google Cloud Platform and have the routes automatically updated when the network topology changes, what is the minimal
number of each type of component you need to implement?
Explanation:
Correct answer B
Feedback
The minimal number of each type of component you need to implement dynamic routing is:
1 Cloud VPN gateway (shown as VPN in the GCP network on the left of the diagram), 1 peer gateway (shown as a VPN gateway with BGP in the peer network on the right), and 1 Cloud Router. (A hedged gcloud sketch of the Cloud Router/BGP piece follows below.)
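For illustration only (names, region, ASNs and the link-local peer address are placeholders, and the VPN gateway/tunnel creation steps are omitted), the Cloud Router and its BGP peer could be set up with:
gcloud compute routers create example-router --network=example-vpc --region=us-central1 --asn=65001
gcloud compute routers add-bgp-peer example-router --peer-name=on-prem-peer --interface=if-tunnel-1 --peer-ip-address=169.254.1.2 --peer-asn=65002 --region=us-central1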
Question 46Correct
Domain: Other
You need to deploy an update to an application in Google App Engine. The update is risky, but it can only be tested in a live environment. What is the best way to introduce the update to minimize
risk?
A. Deploy a new version of the application but use traffic splitting to only direct a small number of users to the new version.right
B. Deploy the application temporarily and be prepared to pull it back if needed.
C. Warn users that a new app version may have issues and provide a way to contact you if there are problems.
D. Create a new project with the new app version, then redirect users to the new version.
Explanation:
Correct Answer A
Explanation
A (Correct Answer) - Deploying a new version without assigning it as the default version does not create downtime for the application. Traffic splitting lets you direct a small amount of traffic to the new version, and the change can be reverted quickly without application downtime. (A hedged command sketch follows at the end of this explanation.)
B - Deploy the application temporarily and be prepared to pull it back if needed. Deploying the new version as the default moves all traffic to it at once; this could impact all users and disable the service for as long as the new version is live.
C - Warn users that a new app version may have issues and provide a way to contact you if there are problems. This is not a recommended practice.
D - Create a new project with the new app version, then redirect users to the new version.
Deploying a second project requires data synchronization and having an external traffic splitting solution to direct traffic to the new application. While this is possible, with Google App Engine,
these manual steps are not required.
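A hedged sketch of the traffic-splitting flow for answer A (version names and percentages are illustrative):
# deploy the risky version without routing traffic to it
gcloud app deploy --version=v2 --no-promote
# send a small slice of traffic to the new version
gcloud app services set-traffic default --splits=v1=0.95,v2=0.05
# roll back instantly if problems appear
gcloud app services set-traffic default --splits=v1=1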
Question 47Correct
Domain: Other
How are subnetworks (VPC Networks) different than the legacy networks?
Explanation:
Correct Answer - B
Legacy networking
Legacy networks have a single RFC 1918 range, which you specify when you create the network. The network is global in scope and spans all cloud regions.
In a legacy network, instance IP addresses are not grouped by region or zone. One IP address can appear in one region, and the following IP address can be in a different region. Any given range of
IPs can be spread across all regions, and the IP addresses of instances created within a region are not necessarily contiguous.
Each VPC network consists of one or more useful IP range partitions called subnetworks or subnets. Each subnet is associated with a region. Networks can contain one or more subnets in any given
region. Subnets are regional resources.
Each subnet must have a primary address range, which is a valid RFC 1918 CIDR block.
Subnets in the same network must use unique IP ranges. Subnets in different networks, even in the same project, can re-use the same IP address ranges.
For example, subnet3 could be defined as 10.2.0.0/16 in the us-east1 region; one VM instance in the us-east1-a zone and a second instance in the us-east1-b zone would each receive an IP address from that range. (A hedged gcloud sketch of this example follows after the note below.)
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks. It is still possible to create legacy networks through the gcloud command-line tool
and the REST API. It is not possible to create legacy networks using the Google Cloud Platform Console.
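A hedged sketch of the subnet example above (the network name is a placeholder):
gcloud compute networks create example-vpc --subnet-mode=custom
gcloud compute networks subnets create subnet3 --network=example-vpc --region=us-east1 --range=10.2.0.0/16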
Reference resources
Google Cloud Platform (GCP) legacy networking vs. VPC subnet https://cloud.google.com/vpc/docs/legacy
Question 48Incorrect
Domain: Other
EHR Healthcare manages identities with Microsoft Active Directory, which is also integrated into many applications.
When deploying the migrated workloads, they need the privileges granted by Google Cloud IAM together with the identities and permissions from the on-premises environment, in a seamless way. They must create a simple, workable solution that minimizes the transformation effort.
A. Configure Cloud Identity to use Active Directory as LDAP and authoritative source through federation with Google Cloud Directory Sync and Active Directory Federation Services (AD
FS)right
B. Configure Cloud Identity to use Active Directory as LDAP and authoritative source through federation with Google Cloud with Azure Active Directorywrong
C. Use Cloud Identity and replicate changes to Active Directory with SSOright
D. Use Cloud Identity and replicate changes to an LDAP Server compatible with Azure Active Directory
Explanation:
EHR Healthcare uses Active Directory on-premises, not Active Directory in Azure Cloud (Azure Active Directory).
A and C are correct because federating Cloud Identity with the existing Active Directory (provisioning identities with Google Cloud Directory Sync and signing users in through AD FS) keeps Active Directory as the authoritative source while the same identities receive Google Cloud IAM permissions.
B and D are wrong because EHR Healthcare doesn't use Azure Active Directory.
https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction
Question 49Correct
Domain: Other
You have a few media files over 5GB each that you need to migrate to Google Cloud Storage. The files are in your on-premises data center. What migration method can you use to help speed up the
transfer process?
Explanation:
Correct Answer: B
Option A is incorrect - Use multi-threaded uploads using the -m option. The -m option performs parallel (multi-threaded/multi-processing) copies, which helps when you have a large number of files to transfer; here there are only a few very large files.
Option B (Correct answer) - Parallel composite uploads break larger files into pieces for faster uploads.
gsutil can automatically use object composition to perform uploads in parallel for large local files being uploaded to Google Cloud Storage. If enabled, a large file is split into component pieces that are uploaded in parallel and then composed in the cloud (with the temporary components deleted afterwards). A hedged example of enabling this follows below.
Option C is incorrect - Use the Cloud Transfer Service to transfer. Storage Transfer Service targets transfers from AWS S3, Google Cloud Storage, on-premises sources and HTTP/HTTPS locations; it is not the tool for speeding up a handful of large files uploaded with gsutil.
Option D is incorrect - Start a recursive upload: the -R and -r options are synonymous and cause directories, buckets, and bucket subdirectories to be copied recursively; recursion does not speed up individual large files.
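A hedged example of option B (the threshold, file name and bucket name are illustrative; the threshold can also be set permanently in the .boto configuration file):
gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" cp large-video.mp4 gs://example-media-bucket/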
Reference:
Question 50Correct
Domain: Other
Your company is developing a next-generation pet collar that collects biometric information to assist potentially millions of families with promoting healthy lifestyles for their pets. Each collar will push 30 KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health-trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met.
Provide the ability for real-time analytics of the inbound biometric data
Ensure processing of the biometric data is highly durable, elastic and parallel
The results of the analytic processing should be persisted for data mining
Which architecture outlined below will meet the initial requirements for the platform?
A. Utilize Cloud Storage to collect the inbound sensor data, analyze data with Dataproc and save the results to BigQuery.
B. Utilize Cloud Pub/Sub to collect the inbound sensor data, analyze the data with DataFlow and save the results to BigQuery.right
C. Utilize Cloud Pub/Sub to collect the inbound sensor data, analyze the data with DataFlow and save the results to Cloud SQL.
D. Utilize Cloud Pub/Sub to collect the inbound sensor data, analyze the data with DataFlow and save the results to BigTable.
Explanation:
Correct Answer B
Feedback
Cloud Pub/Sub is a simple, reliable, scalable foundation for stream analytics and event-driven computing systems. As part of Google Cloud’s stream analytics solution, the service ingests event
streams and delivers them to Cloud Dataflow for processing and BigQuery for analysis as a data warehousing solution. Relying on the Cloud Pub/Sub service for delivery of event data frees you to
focus on transforming your business and data systems with applications such as:
· Fast reporting, targeting and optimization in advertising and media
· Processing device data for healthcare, manufacturing, oil and gas, and logistics
Other solutions may work in one way or another, but only the combination of these three components integrates well for data ingestion, collection, real-time analysis and data mining in a highly durable, elastic and parallel manner.
A – Cloud Storage is not suitable for this kind of real-time streaming data collection. Dataproc is GCP's managed Hadoop/Spark offering and can do ETL and analysis, but Dataflow provides a simple unified programming model for ETL and analysis in both real time and batch and integrates well with Pub/Sub.
C – Cloud SQL is mainly for OLTP (transactional, CRUD) workloads, not OLAP (online analytical processing, data warehousing). It does not have the scalability, elasticity and parallelism to absorb this amount of data in real time. BigQuery, in contrast, integrates well with Dataflow and can absorb both streaming and batch data from it.
D – Bigtable is a possible data sink for Dataflow and has the capacity to absorb this volume of real-time data, but it lacks the data-mining features of BigQuery.
Further Explanation
Pub/Sub acts as a kind of 'shock absorber', allowing asynchronous messaging between large numbers of devices. Cloud Dataflow acts as your data processing pipeline for ETL functions on both streaming and batch data. BigQuery is a data warehouse able to run analyses on petabytes of data using SQL queries.
Below is a reference architecture Google recommends for a similar real-time streaming data collection and analysis scenario: https://cloud.google.com/solutions/mobile/mobile-gaming-analysis-telemetry
Data transformation with Cloud Dataflow - Dataflow acts as your data processing pipeline for ETL functions on both streaming and batch data. (A hedged sketch of wiring the pipeline together follows below.)
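As a hedged sketch (project, topic, job and table names are placeholders), the ingestion side of answer B could be wired together with the Google-provided Pub/Sub-to-BigQuery Dataflow template:
gcloud pubsub topics create collar-telemetry
gcloud pubsub subscriptions create collar-telemetry-sub --topic=collar-telemetry
gcloud dataflow jobs run collar-stream --region=us-central1 --gcs-location=gs://dataflow-templates/latest/PubSub_Subscription_to_BigQuery --parameters=inputSubscription=projects/example-project/subscriptions/collar-telemetry-sub,outputTableSpec=example-project:pets.biometrics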