Professional Machine Learning Engineer
A: Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with synchronous recognition.
B: Use the original audio sampling rate, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
C: Upsample the audio recordings to 16 kHz, and transcribe the audio by using the Speech-to-Text API with synchronous recognition.
D: Upsample the audio recordings to 16 kHz, and transcribe the audio by using the Speech-to-Text API with asynchronous recognition.
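These options hinge on two Speech-to-Text behaviors: Google recommends sending audio at its native sample rate rather than resampling it, and asynchronous (long-running) recognition is required for audio longer than about one minute. A minimal sketch of the REST request body for asynchronous recognition, assuming 8 kHz phone-style recordings at a hypothetical Cloud Storage URI:

```python
import json

# Request body for POST https://speech.googleapis.com/v1/speech:longrunningrecognize
# The gs:// URI and the 8000 Hz rate are illustrative assumptions.
request_body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 8000,  # keep the original rate; do not upsample
        "languageCode": "en-US",
    },
    "audio": {
        # Asynchronous recognition reads the audio from Cloud Storage.
        "uri": "gs://example-bucket/calls/recording-001.wav",
    },
}

print(json.dumps(request_body, indent=2))
```

Synchronous recognition (`speech:recognize`) accepts the same `config` shape but is limited to short clips, which is why long recordings push the choice toward the asynchronous method.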
A: Import the TensorFlow model by using the CREATE MODEL statement in BigQuery ML. Apply the historical data to the TensorFlow model.
B: Export the historical data to Cloud Storage in Avro format. Configure a Vertex AI batch prediction job to generate predictions for the exported data.
C: Export the historical data to Cloud Storage in CSV format. Configure a Vertex AI batch prediction job to generate predictions for the exported data.
D: Configure and deploy a Vertex AI endpoint. Use the endpoint to get predictions from the historical data in BigQuery.
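Option A refers to BigQuery ML's support for importing a saved TensorFlow model with CREATE MODEL and scoring data in place with ML.PREDICT, so the historical data never has to be exported. A minimal sketch of the two statements, with hypothetical dataset, table, and Cloud Storage path names:

```python
# Dataset, model, table, and gs:// path names below are illustrative assumptions.
create_model_sql = """
CREATE OR REPLACE MODEL `mydataset.imported_model`
OPTIONS (MODEL_TYPE = 'TENSORFLOW',
         MODEL_PATH = 'gs://example-bucket/saved_model/*')
"""

predict_sql = """
SELECT *
FROM ML.PREDICT(MODEL `mydataset.imported_model`,
                TABLE `mydataset.historical_data`)
"""

print(create_model_sql)
print(predict_sql)
```

The batch-prediction options (B and C) instead require exporting the table to Cloud Storage first, and option D pays per-request serving overhead for what is a one-shot bulk scoring task.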
A: Create a Vertex AI pipeline that runs different model training jobs in parallel.
B: Train an AutoML image classification model.
C: Create a custom training job that uses the Vertex AI Vizier SDK for parameter optimization.
D: Create a Vertex AI hyperparameter tuning job.
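Options C and D differ in who manages the search: Vizier driven from your own code inside a custom job, versus a managed Vertex AI hyperparameter tuning job that launches and coordinates parallel trials for you. A sketch of the `studySpec` portion of a tuning-job request body (REST: `projects.locations.hyperparameterTuningJobs.create`); the metric ID, parameter names, and value ranges are illustrative assumptions:

```python
# Illustrative studySpec for a Vertex AI hyperparameter tuning job.
# Metric ID, parameter IDs, and ranges are assumptions, not from the source.
study_spec = {
    "metrics": [{"metricId": "accuracy", "goal": "MAXIMIZE"}],
    "parameters": [
        {
            "parameterId": "learning_rate",
            "doubleValueSpec": {"minValue": 1e-4, "maxValue": 1e-1},
            "scaleType": "UNIT_LOG_SCALE",  # search learning rate on a log scale
        },
        {
            "parameterId": "batch_size",
            "discreteValueSpec": {"values": [16, 32, 64]},
        },
    ],
}

job_body = {
    "displayName": "hp-tuning-sketch",
    "studySpec": study_spec,
    "maxTrialCount": 20,      # total trials the service runs
    "parallelTrialCount": 5,  # trials executed concurrently
    # "trialJobSpec" would carry the worker pool / container for each trial.
}

print(job_body["displayName"])
```

`parallelTrialCount` is what delivers the parallelism that option A tries to build by hand with separate pipeline training jobs.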
A: 1. Create an instance of the CustomTrainingJob class with the Vertex AI SDK to train your model. 2. Using the Notebooks API, create a scheduled execution to run the training code weekly.
B: 1. Create an instance of the CustomJob class with the Vertex AI SDK to train your model. 2. Use the Metadata API to register your model as a model artifact. 3. Using the Notebooks API, create a scheduled execution to run the training code weekly.
C: 1. Create a managed pipeline in Vertex AI Pipelines to train your model by using a Vertex AI CustomTrainingJobOp component. 2. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry. 3. Use Cloud Scheduler and Cloud Functions to run the Vertex AI pipeline weekly.
D: 1. Create a managed pipeline in Vertex AI Pipelines to train your model by using a Vertex AI HyperparameterTuningJobRunOp component. 2. Use the ModelUploadOp component to upload your model to Vertex AI Model Registry. 3. Use Cloud Scheduler and Cloud Functions to run the Vertex AI pipeline weekly.
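Options C and D both end the same way: Cloud Scheduler fires on a weekly cron and triggers the pipeline run (typically through a Cloud Function that submits the Vertex AI pipeline job). A sketch of the Cloud Scheduler job body (REST: `projects.locations.jobs.create`); the project, function URL, and cron expression are illustrative assumptions:

```python
# Cloud Scheduler job body; names, URL, and schedule are illustrative assumptions.
scheduler_job = {
    "name": "projects/example-project/locations/us-central1/jobs/weekly-training",
    "schedule": "0 9 * * 1",  # cron: every Monday at 09:00
    "timeZone": "Etc/UTC",
    "httpTarget": {
        # Hypothetical Cloud Function that submits the Vertex AI pipeline run.
        "uri": "https://us-central1-example-project.cloudfunctions.net/run-pipeline",
        "httpMethod": "POST",
    },
}

print(scheduler_job["schedule"])
```

The scheduler only handles the "weekly" requirement; whether the pipeline's training step is a plain `CustomTrainingJobOp` or a `HyperparameterTuningJobRunOp` is the actual difference between C and D.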