Vertex AI is a machine learning (ML) platform that lets you train and deploy ML models and AI applications, and customize large language models (LLMs) for use in your AI-powered applications. Vertex AI combines data engineering, data science, and ML engineering workflows, enabling your teams to collaborate using a common toolset and scale your applications using the benefits of Google Cloud.
Vertex AI provides several options for model training and deployment:
- AutoML lets you train tabular, image, text, or video data without writing code or preparing data splits.
- Custom training gives you complete control over the training process, including using your preferred ML framework, writing your own training code, and choosing hyperparameter tuning options.
- Model Garden lets you discover, test, customize, and deploy Vertex AI and select open-source (OSS) models and assets.
- Generative AI gives you access to Google's large generative AI models for multiple modalities (text, code, images, speech). You can tune Google's LLMs to meet your needs, and then deploy them for use in your AI-powered applications.
After you deploy your models, use Vertex AI's end-to-end MLOps tools to automate and scale projects throughout the ML lifecycle. These MLOps tools run on fully managed infrastructure that you can customize based on your performance and budget needs.
You can use the Vertex AI SDK for Python to run the entire machine learning workflow in Vertex AI Workbench, a Jupyter notebook-based development environment. You can collaborate with a team to develop your model in Colab Enterprise, a version of Colaboratory that is integrated with Vertex AI. Other available interfaces include the Google Cloud console, the gcloud command-line tool, client libraries, and Terraform (limited support).
Vertex AI and the machine learning (ML) workflow
This section provides an overview of the machine learning workflow and how you can use Vertex AI to build and deploy your models.
Data preparation: After extracting and cleaning your dataset, perform exploratory data analysis (EDA) to understand the data schema and characteristics that are expected by the ML model. Apply data transformations and feature engineering, and split the data into training, validation, and test sets.
- Explore and visualize data using Vertex AI Workbench notebooks. Vertex AI Workbench integrates with Cloud Storage and BigQuery to help you access and process your data faster.
- For large datasets, use Dataproc Serverless Spark from a Vertex AI Workbench notebook to run Spark workloads without having to manage your own Dataproc clusters.
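For custom training, the final data-preparation step above — splitting data into training, validation, and test sets — is usually a few lines of ordinary code. A minimal stdlib sketch; the function name and split fractions are illustrative, not a Vertex AI API:

```python
import random

def split_dataset(rows, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle rows deterministically and split them into train/validation/test."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = rows[:n_train]
    val = rows[n_train:n_train + n_val]
    test = rows[n_train + n_val:]  # remainder goes to the test set
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 80 10 10
```

AutoML performs this split for you; with custom training you control it, which is useful when you need stratified or time-based splits instead of a random shuffle.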
Model training: Choose a training method to train a model and tune it for performance.
- To train a model without writing code, see the AutoML overview. AutoML supports tabular, image, text, and video data.
- To write your own training code and train custom models using your preferred ML framework, see the Custom training overview.
- Optimize hyperparameters for custom-trained models using custom tuning jobs.
- Vertex AI Vizier tunes hyperparameters for you in complex ML models.
- Use Vertex AI Experiments to train your model using different ML techniques and compare the results.
- Register your trained models in the Vertex AI Model Registry for versioning and hand-off to production. Vertex AI Model Registry integrates with validation and deployment features such as model evaluation and endpoints.
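Custom tuning jobs and Vertex AI Vizier automate the loop of suggesting trial parameters, measuring a metric, and keeping the best result, typically with strategies more sophisticated than the plain random search sketched below. The objective function here is a stand-in for a real validation metric, and all names are illustrative:

```python
import math
import random

def objective(learning_rate, batch_size):
    """Stand-in for a validation metric; a real trial would train and evaluate a model."""
    return -((math.log10(learning_rate) + 2) ** 2) - 0.001 * abs(batch_size - 64)

def random_search(n_trials=20, seed=0):
    """Run n_trials random trials and keep the best-scoring parameters."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** rng.uniform(-5, -1),  # log-uniform sample
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        score = objective(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search()
print(best)
```

A managed tuning service adds what this sketch lacks: parallel trials, early stopping of unpromising trials, and search algorithms that learn from completed trials.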
Model evaluation and iteration: Evaluate your trained model, make adjustments to your data based on evaluation metrics, and iterate on your model.
- Use model evaluation metrics, such as precision and recall, to evaluate and compare the performance of your models. Create evaluations through Vertex AI Model Registry, or include evaluations in your Vertex AI Pipelines workflow.
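Vertex AI computes evaluation metrics for you, but the definitions are simple enough to sketch: precision is the fraction of positive predictions that are correct, and recall is the fraction of actual positives that are found. A minimal stdlib illustration:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute (precision, recall) for one positive class from paired labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

Comparing these metrics across model versions in the Model Registry is what drives the iterate step: a precision drop suggests more false positives, a recall drop more missed positives.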
Model serving: Deploy your model to production and get predictions.
- Deploy your custom-trained model using prebuilt or custom containers to get real-time online predictions (sometimes called HTTP prediction).
- Get asynchronous batch predictions, which don't require deployment to endpoints.
- The optimized TensorFlow runtime lets you serve TensorFlow models at a lower cost and with lower latency than open-source-based prebuilt TensorFlow Serving containers.
- For online serving cases with tabular models, use Vertex AI Feature Store to serve features from a central repository and monitor feature health.
- Vertex Explainable AI helps you understand how each feature contributes to model predictions (feature attribution) and find mislabeled data in the training dataset (example-based explanations).
- Deploy and get online predictions for models trained with BigQuery ML.
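Online prediction requests to a deployed endpoint are generally JSON bodies with an `instances` list (one entry per prediction) and optional `parameters`. The feature names, values, and parameter below are hypothetical, chosen only to show the shape of such a payload:

```python
import json

# Hypothetical tabular features; a real payload must match the deployed
# model's expected input schema.
request_body = {
    "instances": [
        {"age": 39, "income": 52000.0},   # one instance per prediction
        {"age": 51, "income": 87500.0},
    ],
    "parameters": {"confidence_threshold": 0.5},  # illustrative parameter
}
payload = json.dumps(request_body)
print(json.loads(payload)["instances"][0]["age"])  # 39
```

Batch prediction uses the same per-instance data, but read from files in Cloud Storage or a BigQuery table rather than sent in a synchronous HTTP request, which is why no endpoint deployment is needed.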
Model monitoring: Monitor the performance of your deployed model. Use incoming prediction data to retrain your model for improved performance.
- Vertex AI Model Monitoring monitors models for training-serving skew and prediction drift, and sends you alerts when the incoming prediction data skews too far from the training baseline.
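A skew check of this kind can be illustrated by comparing a feature's value distribution in the training baseline against the incoming serving data. One simple distance for a categorical feature is the L-infinity distance between value frequencies, sketched below; the feature values and alert threshold are illustrative, not Model Monitoring defaults:

```python
from collections import Counter

def linf_distance(baseline, serving):
    """Largest absolute difference in category frequency between two samples."""
    def freq(values):
        counts = Counter(values)
        total = len(values)
        return {k: v / total for k, v in counts.items()}
    p, q = freq(baseline), freq(serving)
    return max(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

baseline = ["a"] * 70 + ["b"] * 30   # training distribution: 70% a, 30% b
serving = ["a"] * 40 + ["b"] * 60    # serving distribution: 40% a, 60% b
drift = linf_distance(baseline, serving)
print(round(drift, 2))  # 0.3

ALERT_THRESHOLD = 0.2  # illustrative threshold
print(drift > ALERT_THRESHOLD)  # True — this shift would trigger an alert
```

A monitoring service automates this comparison on a schedule over live traffic and raises an alert when the configured threshold is crossed, prompting the retraining step described above.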
What's next
- Learn about Vertex AI's MLOps features.
- Learn about interfaces that you can use to interact with Vertex AI.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-05-23 UTC.