Professional Cloud Architect
A: 1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage. 2. Develop a Dataflow pipeline to compute the cost of queries split by users.
B: 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery. 2. Perform a BigQuery query on the generated table to extract the information you need.
C: 1. Activate billing export into BigQuery. 2. Perform a BigQuery query on the billing table to extract the information you need.
D: 1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch a query. 2. Open the Billing page of the project. 3. Select Reports. 4. Select BigQuery as the product and filter by the user you want to check.
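A minimal sketch of option B's final step, aggregating billed bytes per user with the google-cloud-bigquery client. The project ID, dataset, and field paths are assumptions based on the legacy data-access audit-log export schema; verify them against the table your sink actually creates:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Sum billed bytes per user from the exported data-access audit logs.
# Table name and field paths assume the legacy audit-log export schema.
sql = """
SELECT
  protopayload_auditlog.authenticationInfo.principalEmail AS user_email,
  SUM(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent
        .job.jobStatistics.totalBilledBytes) / POW(2, 40) AS billed_tib
FROM `my-project.audit_logs.cloudaudit_googleapis_com_data_access_*`
GROUP BY user_email
ORDER BY billed_tib DESC
"""
for row in client.query(sql).result():
    print(row.user_email, row.billed_tib)
```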
A: Enable Private Google Access on sub-b
B: Configure Cloud NAT and select sub-b in the NAT mapping section
C: Configure a bastion host instance in sub-a to connect to instances in sub-b
D: Enable Identity-Aware Proxy for TCP forwarding for instances in sub-b
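For reference, option A's subnet setting can also be applied through the Compute Engine API; a minimal sketch using google-api-python-client, with a hypothetical project ID and region:

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Turn on Private Google Access for sub-b so instances without external
# IPs can still reach Google APIs and services (option A).
compute.subnetworks().setPrivateIpGoogleAccess(
    project="my-project",   # hypothetical project ID
    region="us-central1",   # hypothetical region
    subnetwork="sub-b",
    body={"privateIpGoogleAccess": True},
).execute()
```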
A: Add the node group name as a network tag when creating Compute Engine instances in order to host each workload on the correct node group.
B: Add the node name as a network tag when creating Compute Engine instances in order to host each workload on the correct node.
C: Use node affinity labels based on the node group name when creating Compute Engine instances in order to host each workload on the correct node group.
D: Use node affinity labels based on the node name when creating Compute Engine instances in order to host each workload on the correct node.
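A sketch of the node affinity approach in options C and D, pinning a new VM to a sole-tenant node group via the Compute Engine API; the project, zone, instance, and node group names are hypothetical:

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

config = {
    "name": "workload-vm-1",  # hypothetical instance name
    "machineType": "zones/us-central1-a/machineTypes/n1-standard-4",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
    # Node affinity label: schedule the VM onto the named sole-tenant
    # node group (option C); the .../node-name key pins to a single node.
    "scheduling": {
        "nodeAffinities": [{
            "key": "compute.googleapis.com/node-group-name",
            "operator": "IN",
            "values": ["my-node-group"],  # hypothetical node group name
        }]
    },
}

compute.instances().insert(
    project="my-project", zone="us-central1-a", body=config
).execute()
```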
A: 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
B: 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level to create a lock.
C: 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2. Set a time_partitioning_expiration of 30 days.
D: 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
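Step 3 of option A (the time_partitioning_expiration setting) can be applied with the google-cloud-bigquery client once the sink's table exists; the project, dataset, and table names below are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Table created by the Cloud Logging sink (hypothetical dataset/table).
table = client.get_table("my-project.instance_logs.syslog")

# Expire each daily partition 30 days after it is created, so logs are
# retained for exactly one month (step 3 of option A).
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    expiration_ms=30 * 24 * 60 * 60 * 1000,
)
client.update_table(table, ["time_partitioning"])
```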