Associate Cloud Engineer
A: Develop SQL queries by using Gemini for Google Cloud.
B: Enable Log Analytics for the log bucket and create a linked dataset in BigQuery.
C: Create a schema for the storage bucket and run SQL queries for the data in the bucket.
D: Export logs to a storage bucket and create an external view in BigQuery.
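If the approach in option B is taken (enabling Log Analytics on the log bucket and creating a linked dataset in BigQuery), the logs become queryable with standard SQL through the `_AllLogs` view that Log Analytics exposes. A minimal sketch, assuming a hypothetical project and linked-dataset name; the query builder is plain string construction, and actually running it requires the `google-cloud-bigquery` client library and credentials:

```python
def build_log_query(project: str, linked_dataset: str, severity: str = "ERROR") -> str:
    """Build a SQL query against the _AllLogs view of a Log Analytics
    linked dataset (project and dataset names here are hypothetical)."""
    return (
        f"SELECT timestamp, severity, json_payload "
        f"FROM `{project}.{linked_dataset}._AllLogs` "
        f"WHERE severity = '{severity}' "
        f"ORDER BY timestamp DESC LIMIT 100"
    )


def run_log_query(project: str, linked_dataset: str):
    """Submit the query via the BigQuery client (needs credentials)."""
    from google.cloud import bigquery  # deferred: requires google-cloud-bigquery

    client = bigquery.Client(project=project)
    return client.query(build_log_query(project, linked_dataset)).result()
```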
A: Use the Google Cloud Pricing Calculator to determine the cost of every Google Cloud resource you expect to use. Use similar size instances for the web server, and use your current on-premises machines as a comparison for Cloud SQL.
B: Implement a similar architecture on Google Cloud, and run a reasonable load test on a smaller scale. Check the billing information, and calculate the estimated costs based on the real load your system usually handles.
C: Use the Google Cloud Pricing Calculator and select the Cloud Operations template to define your web application with as much detail as possible.
D: Create a Google spreadsheet with multiple Google Cloud resource combinations. On a separate sheet, import the current Google Cloud prices and use these prices for the calculations within formulas.
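The load-test approach in option B above ultimately comes down to extrapolating observed billing to production scale. A deliberately simplistic sketch of that arithmetic, assuming cost scales roughly linearly with load (which ignores fixed costs and sustained-use or committed-use discounts):

```python
def estimate_monthly_cost(observed_cost: float,
                          test_load: float,
                          expected_load: float) -> float:
    """Linearly extrapolate a scaled-down load test's billed cost to
    the expected production load. A rough first-order estimate only."""
    return observed_cost * (expected_load / test_load)


# e.g. a test at 10% of production load that cost $50 suggests ~$500/month
print(estimate_monthly_cost(50.0, test_load=100, expected_load=1000))
```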
A: Configure the Horizontal Pod Autoscaler for availability, and configure the cluster autoscaler for suggestions.
B: Configure the Horizontal Pod Autoscaler for availability, and configure the Vertical Pod Autoscaler recommendations for suggestions.
C: Configure the Vertical Pod Autoscaler recommendations for availability, and configure the cluster autoscaler for suggestions.
D: Configure the Vertical Pod Autoscaler recommendations for availability, and configure the Horizontal Pod Autoscaler for suggestions.
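The options above all hinge on the distinction between autoscalers that act automatically and Vertical Pod Autoscaler running in recommendation-only mode, where it suggests resource requests without evicting pods. A sketch of a recommendation-mode VPA manifest, with hypothetical workload names:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa        # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical Deployment
  updatePolicy:
    updateMode: "Off"     # recommendation-only: VPA surfaces suggested
                          # CPU/memory requests but never restarts pods
```

With `updateMode: "Off"`, recommendations appear in the VPA object's status and in the GKE console, while scaling for availability is left to whichever autoscaler the question's correct option pairs it with.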
A: Run a Kubernetes job to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.
B: Create an App Engine standard environment triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.
C: Run a Python script by using a Linux cron job in Compute Engine to scan the bucket regularly for incoming files, and call the Speech-to-Text API for each unprocessed file.
D: Create a Cloud Function triggered by Cloud Storage bucket events to submit the file URI to the Google Speech-to-Text API.
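The event-driven approach in option D above can be sketched as a Cloud Function (2nd gen, CloudEvent-triggered) that builds the uploaded object's `gs://` URI and hands it to the Speech-to-Text API. The entry-point and event-field names follow the Cloud Storage trigger payload; the heavy client import is deferred so the URI helper stays dependency-free:

```python
def gcs_uri(bucket: str, name: str) -> str:
    """Build the gs:// URI for an object uploaded to Cloud Storage."""
    return f"gs://{bucket}/{name}"


def transcribe_gcs_event(cloud_event):
    """Cloud Function entry point for a Cloud Storage finalize event.
    Requires the google-cloud-speech library and credentials to run."""
    from google.cloud import speech  # deferred: requires google-cloud-speech

    data = cloud_event.data
    uri = gcs_uri(data["bucket"], data["name"])

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(language_code="en-US")
    audio = speech.RecognitionAudio(uri=uri)
    # Long-running recognition suits files referenced by Cloud Storage URI.
    return client.long_running_recognize(config=config, audio=audio)
```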