{"id":4427,"date":"2025-08-09T12:42:14","date_gmt":"2025-08-09T12:42:14","guid":{"rendered":"https:\/\/uplatz.com\/blog\/?p=4427"},"modified":"2025-08-09T12:42:14","modified_gmt":"2025-08-09T12:42:14","slug":"google-vertex-ai-pocket-book","status":"publish","type":"post","link":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/","title":{"rendered":"Google Vertex AI Pocket Book"},"content":{"rendered":"<p><!-- Vertex AI Pocket Book \u2014 Uplatz (50 Cards, Wide Layout, Readable Code, Scoped Styles) --><\/p>\n<div style=\"margin:16px 0;\">\n<style>\n    .wp-vertexai-pb { font-family: Arial, sans-serif; max-width: 1320px; margin:0 auto; }\n    .wp-vertexai-pb .heading{\n      background: linear-gradient(135deg, #e3f2fd, #e0f7fa); \/* light blue -> light teal *\/\n      color:#0f172a; padding:22px 24px; border-radius:14px;\n      text-align:center; margin-bottom:18px; box-shadow:0 8px 20px rgba(0,0,0,.08);\n      border:1px solid #cbd5e1;\n    }\n    .wp-vertexai-pb .heading h2{ margin:0; font-size:2.1rem; letter-spacing:.2px; }\n    .wp-vertexai-pb .heading p{ margin:6px 0 0; font-size:1.02rem; opacity:.9; }\n    \/* Wide, dense grid *\/\n    .wp-vertexai-pb .grid{\n      display:grid; gap:14px;\n      grid-template-columns: repeat(auto-fill, minmax(400px, 1fr));\n    }\n    @media (min-width:1200px){\n      .wp-vertexai-pb .grid{ grid-template-columns: repeat(3, 1fr); }\n    }\n    .wp-vertexai-pb .section-title{\n      grid-column:1\/-1; background:#f8fafc; border-left:8px solid #1a73e8; \/* Google blue *\/\n      padding:12px 16px; border-radius:10px; font-weight:700; color:#0f172a; font-size:1.08rem;\n      box-shadow:0 2px 8px rgba(0,0,0,.05); border:1px solid #e2e8f0;\n    }\n    .wp-vertexai-pb .card{\n      background:#ffffff; border-left:6px solid #1a73e8;\n      padding:18px; border-radius:12px;\n      box-shadow:0 6px 14px rgba(0,0,0,.06);\n      transition:transform .12s ease, box-shadow .12s ease;\n      border:1px solid 
#e5e7eb;\n    }\n    .wp-vertexai-pb .card:hover{ transform: translateY(-3px); box-shadow:0 10px 22px rgba(0,0,0,.08); }\n    .wp-vertexai-pb .card h3{ margin:0 0 10px; font-size:1.12rem; color:#0f172a; }\n    .wp-vertexai-pb .card p{ margin:0; font-size:.96rem; color:#334155; line-height:1.62; }\n    \/* Color helpers *\/\n    .bg-blue { border-left-color:#1a73e8 !important; background:#eef6ff !important; }\n    .bg-green{ border-left-color:#10b981 !important; background:#f0fdf4 !important; }\n    .bg-amber{ border-left-color:#f59e0b !important; background:#fffbeb !important; }\n    .bg-violet{ border-left-color:#8b5cf6 !important; background:#f5f3ff !important; }\n    .bg-rose{ border-left-color:#ef4444 !important; background:#fff1f2 !important; }\n    .bg-cyan{ border-left-color:#06b6d4 !important; background:#ecfeff !important; }\n    .bg-lime{ border-left-color:#16a34a !important; background:#f0fdf4 !important; }\n    .bg-orange{ border-left-color:#f97316 !important; background:#fff7ed !important; }\n    .bg-indigo{ border-left-color:#6366f1 !important; background:#eef2ff !important; }\n    .bg-emerald{ border-left-color:#22c55e !important; background:#ecfdf5 !important; }\n    .bg-slate{ border-left-color:#334155 !important; background:#f8fafc !important; }\n    \/* Code & utils *\/\n    .tight ul{ margin:0; padding-left:18px; }\n    .tight li{ margin:4px 0; }\n    .mono{ font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace; }\n    .wp-vertexai-pb code{ background:#f1f5f9; padding:0 4px; border-radius:4px; border:1px solid #e2e8f0; }\n    .wp-vertexai-pb pre{\n      background:#f5f5f5; color:#111827; border:1px solid #e5e7eb;\n      padding:12px; border-radius:8px; overflow:auto; font-size:.92rem; line-height:1.55;\n    }\n    .q{font-weight:700;}\n    .qa p{ margin:8px 0; }\n  <\/style>\n<div class=\"wp-vertexai-pb\">\n<div class=\"heading\">\n<h2>Vertex AI Pocket Book \u2014 Uplatz<\/h2>\n<p>50 in-depth cards \u2022 
Wide layout \u2022 Readable examples \u2022 Interview Q&amp;A included<\/p>\n<\/p><\/div>\n<div class=\"grid\">\n      <!-- ===================== SECTION 1: OVERVIEW & BUILDING BLOCKS (1\u201310) ===================== --><\/p>\n<div class=\"section-title\">Section 1 \u2014 Overview &#038; Building Blocks<\/div>\n<div class=\"card bg-blue\">\n<h3>1) What is Vertex AI?<\/h3>\n<p>Vertex AI is Google Cloud\u2019s end-to-end ML\/GenAI platform: data prep, training, tuning, deployment, vector search, pipelines, monitoring, and access to Google foundation models (e.g., Gemini) via a unified API. It integrates with BigQuery, Cloud Storage, Dataflow, Pub\/Sub, and GKE. You can bring your own models, fine-tune foundation models, or use AutoML for tabular, vision, and text tasks. Security integrates with IAM, VPC-SC, CMEK, and audit logs.<\/p>\n<pre><code class=\"mono\">pip install google-cloud-aiplatform\r\ngcloud auth application-default login<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-green\">\n<h3>2) Workbench &#038; Notebooks<\/h3>\n<p>Vertex AI Workbench gives managed Jupyter-based notebooks with GCP integration (BigQuery, GCS), idle-shutdown, and one-click GPU\/TPU switching. Attach service accounts with least privilege and place in private subnets if required. Use scheduled notebooks for ETL\/feature jobs when Pipelines isn\u2019t necessary.<\/p>\n<pre><code class=\"mono\">from google.cloud import bigquery\r\nbq = bigquery.Client()\r\nbq.query(\"SELECT COUNT(*) FROM `project.dataset.table`\").result()<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>3) Model Garden &#038; Gemini<\/h3>\n<p>Model Garden exposes foundation models (Gemini family), third-party, and Google-hosted OSS models under a consistent API. You can prompt, tune, and deploy with safety filters and usage controls. 
Choose Gemini variants for cost\/latency\/quality tradeoffs, and enable caching or streaming outputs as needed.<\/p>\n<pre><code class=\"mono\">from google.cloud import aiplatform\r\nfrom vertexai.preview.generative_models import GenerativeModel\r\naiplatform.init(project=\"YOUR_PROJECT\", location=\"us-central1\")\r\nmodel = GenerativeModel(\"gemini-1.5-pro\")\r\nprint(model.generate_content(\"Summarize this doc: ...\").text)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>4) Endpoints &#038; Deployments<\/h3>\n<p>Deploy your custom models (SavedModel, PyTorch, XGBoost, scikit-learn) to endpoints with autoscaling and traffic splitting. Use A\/B testing, canary rollouts, and model monitoring. Configure minimum\/maximum replicas, health checks, and request\/response logging; attach GPUs for deep learning inference.<\/p>\n<pre><code class=\"mono\">from google.cloud import aiplatform as aip\r\nendpoint = aip.Endpoint.create(display_name=\"churn-endpoint\")\r\nmodel = aip.Model.upload(display_name=\"churn-xgb\", artifact_uri=\"gs:\/\/bucket\/model\/\")\r\nmodel.deploy(endpoint=endpoint, machine_type=\"n1-standard-4\")<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>5) Datasets &#038; BigQuery<\/h3>\n<p>Datasets can live in BigQuery tables or GCS. For tabular ML, BigQuery ML or AutoML Tables style flows work well; for GenAI RAG, store documents in GCS\/BigQuery and index in Vertex AI Vector Search. Keep data residency, encryption, and lineage documented (Data Catalog).<\/p>\n<pre><code class=\"mono\"># Load from BigQuery in Python\r\nimport pandas_gbq\r\ndf = pandas_gbq.read_gbq(\"SELECT * FROM `proj.ds.customers` LIMIT 1000\")<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>6) AutoML: Vision, Text, Tabular<\/h3>\n<p>AutoML trains models from labeled data without heavy ML engineering. Provide labeled datasets and Vertex chooses architectures\/hypers. 
Evaluate with built-in metrics; export confusion matrices and feature importance for tabular. Use batch prediction or deploy to endpoints.<\/p>\n<pre><code class=\"mono\"># SDK sketch (dataset creation omitted; display names are placeholders)\r\njob = aiplatform.AutoMLImageTrainingJob(display_name=\"vision-clf\", prediction_type=\"classification\")\r\nmodel = job.run(dataset=ds, model_display_name=\"vision-clf-model\")<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>7) Custom Training with GPUs\/TPUs<\/h3>\n<p>Submit custom jobs using containers or prebuilt frameworks. Scale on multiple accelerators; log metrics to Cloud Logging\/Monitoring. Package code with requirements and stage on GCS. Use Vertex AI Training or GKE for large-scale distributed training with TF\/XLA or PyTorch DDP.<\/p>\n<pre><code class=\"mono\">aip.CustomJob(\r\n  display_name=\"trainer\",\r\n  worker_pool_specs=[{\r\n    \"machine_spec\":{\"machine_type\":\"a2-highgpu-1g\",\"accelerator_type\":\"NVIDIA_TESLA_A100\",\"accelerator_count\":1},\r\n    \"replica_count\":1,\r\n    \"container_spec\":{\"image_uri\":\"gcr.io\/your\/trainer:latest\", \"args\":[\"--epochs\",\"5\"]}\r\n  }]\r\n).run()<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>8) Pipelines (KFP)<\/h3>\n<p>Vertex AI Pipelines (Kubeflow Pipelines on GCP) orchestrates reproducible ML workflows with lineage and caching. Define components, compile to a pipeline spec, and schedule. Artifacts, metadata, and metrics are tracked for compliance\/audit.<\/p>\n<pre><code class=\"mono\">from kfp import dsl\r\n@dsl.component\r\ndef add(a:int,b:int) -&gt; int: return a+b\r\n@dsl.pipeline\r\ndef p(): add(2,3)\r\n# Compile & upload via aiplatform.PipelineJob<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>9) Feature Store &#038; Feast<\/h3>\n<p>Use Vertex AI Feature Store (v2 integrates with BigQuery) to manage offline\/online features with consistency and low-latency serving. Define feature specs, ingest from BQ\/Batch jobs, and serve to models via online stores. 
For OSS patterns, Feast can pair with Vertex components.<\/p>\n<pre><code class=\"mono\"># Concept: define features then ingest from BigQuery scheduled queries<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>10) Q&#038;A \u2014 \u201cAutoML vs Custom Training?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> Use AutoML for quick, strong baselines and when you lack deep ML expertise. Choose Custom Training if you need full control over architectures, libraries, distributed strategies, or specialized loss functions. Many teams start with AutoML, then move to Custom for the last-mile gains.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 2: GENERATIVE AI (11\u201320) ===================== --><\/p>\n<div class=\"section-title\">Section 2 \u2014 Generative AI on Vertex (Gemini, Tuning, Safety, RAG)<\/div>\n<div class=\"card bg-blue\">\n<h3>11) Gemini Text &#038; Multimodal<\/h3>\n<p>Gemini models (text, code, multimodal) power summarization, classification, content generation, tool use, and more. Use streaming for low-latency chat UIs and function calling for tool-augmented agents. Configure safety settings (harassment, hate, etc.) per use case.<\/p>\n<pre><code class=\"mono\">from vertexai.preview.generative_models import GenerativeModel\r\nmodel = GenerativeModel(\"gemini-1.5-flash\")\r\nresp = model.generate_content([\"Explain in 3 bullet points:\", \"Vertex AI Pipelines\"])\r\nprint(resp.text)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-green\">\n<h3>12) Tuning (Adapters\/LoRA)<\/h3>\n<p>Parameter-efficient tuning lets you specialize foundation models with small domain datasets. Provide prompt-completion pairs or instruction datasets; evaluate with held-out metrics and human review. 
Adapters are applied at inference time, preserving base weights.<\/p>\n<pre><code class=\"mono\"># Sketch (paths are placeholders): supervised tuning from JSONL on GCS\r\nfrom vertexai.language_models import TextGenerationModel\r\nbase = TextGenerationModel.from_pretrained(\"text-bison@002\")\r\nbase.tune_model(training_data=\"gs:\/\/bucket\/tuning.jsonl\", tuning_job_location=\"europe-west4\")<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>13) Embeddings &#038; Similarity<\/h3>\n<p>Use embeddings to transform text\/images into vectors for semantic search, clustering, and retrieval augmentation. Store vectors in Vertex AI Vector Search (managed ANN) or BigQuery Vector for analytics integration. Choose dimensionality and distance metric appropriately.<\/p>\n<pre><code class=\"mono\">from vertexai.language_models import TextEmbeddingModel\r\nemb = TextEmbeddingModel.from_pretrained(\"textembedding-gecko@001\")\r\nvec = emb.get_embeddings([\"Retrieval Augmented Generation\"])[0].values<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>14) Vector Search<\/h3>\n<p>Vertex AI Vector Search offers low-latency approximate nearest neighbor search with filtering. Create indexes, upsert vectors, and query at runtime. Use for RAG, recommendations, and deduplication. Keep metadata (doc_id, source) for attribution and traceability.<\/p>\n<pre><code class=\"mono\"># Sketch (resource names and the embeddings path are placeholders)\r\nindex = aip.MatchingEngineIndex.create_tree_ah_index(\r\n  display_name=\"docs-index\", contents_delta_uri=\"gs:\/\/bucket\/embeddings\/\",\r\n  dimensions=768, approximate_neighbors_count=10)\r\n# Deploy to a MatchingEngineIndexEndpoint, then:\r\n# endpoint.find_neighbors(deployed_index_id=\"docs\", queries=[vec], num_neighbors=5)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>15) RAG with Vertex<\/h3>\n<p>Implement RAG by chunking documents from GCS\/BigQuery, generating embeddings, storing in Vector Search, and retrieving top-k contexts to ground prompts. Add citation links, metadata filters, and freshness signals. Cache retrievals to reduce cost\/latency.<\/p>\n<pre><code class=\"mono\"># Sketch (embed\/search are app-level helpers, not SDK calls)\r\nhits = search(embed(question), k=5)\r\ncontext = \" \".join(h.text for h in hits)\r\nanswer = model.generate_content(\"Context: \" + context + \" Question: \" + question).text<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>16) Tool Use &#038; Function Calling<\/h3>\n<p>Define tools (functions with JSON schemas). The model can propose a tool call; your app executes it and returns results to the model for final completion. 
Useful for database lookup, web calls, and transactional flows with human-in-the-loop safeguards.<\/p>\n<pre><code class=\"mono\"># Concept: tools=[{\"name\":\"getWeather\",\"schema\":{...}}] passed to generate_content()<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>17) Safety, Data Governance &#038; Grounding<\/h3>\n<p>Use safety filters, blocklists, PII redaction, and prompt hardening. Disable data logging if required, and configure CMEK\/VPC-SC. For factual tasks, ground outputs via RAG and include citations. Add evals and human review queues for sensitive domains.<\/p>\n<pre><code class=\"mono\"># Configure safety: parameters in generate_content(), server-side policies via console<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>18) Prompt Engineering<\/h3>\n<p>Structure prompts with role, instructions, constraints, examples, and format expectations. Use delimiters for context, ask for JSON output with a schema, and chain prompts for complex tasks. Add system prompts for style\/voice and few-shot examples to steer behavior.<\/p>\n<pre><code class=\"mono\">prompt = \"\"\"You are a concise assistant.\r\nConstraints: bullet list, 3 items.\r\nTopic: Vertex AI safety controls.\"\"\"<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>19) GenAI Evaluation<\/h3>\n<p>Evaluate with automatic metrics (BLEU\/ROUGE for text, embedding similarity) and human ratings (helpfulness, harmlessness, honesty). Use golden sets and adversarial tests. Track drift, hallucination rates, and safety policy violations over time with dashboards.<\/p>\n<pre><code class=\"mono\"># Store evals in BigQuery for analysis & dashboards<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>20) Q&#038;A \u2014 \u201cWhen to choose Gemini Flash vs Pro?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> Use Flash for low-latency, high-throughput tasks (UI autocomplete, quick summaries). 
Choose Pro for higher reasoning quality, complex instructions, or longer context. Benchmark your tasks\u2014often a hybrid (Flash for previews, Pro for finalize) balances UX and cost.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 3: MLOPS & PIPELINES (21\u201330) ===================== --><\/p>\n<div class=\"section-title\">Section 3 \u2014 MLOps: CI\/CD, Monitoring, Lineage, Cost<\/div>\n<div class=\"card bg-blue\">\n<h3>21) CI\/CD for Models<\/h3>\n<p>Adopt Git-based workflows with Cloud Build\/GitHub Actions to build, test, and deploy models\/pipelines. Store artifacts in Artifact Registry. Use environments (dev\/stage\/prod) with approvals and canaries. Version datasets, code, and model weights.<\/p>\n<pre><code class=\"mono\"># Cloud Build step (concept)\r\ngcloud ai models upload --display-name my-model --artifact-uri gs:\/\/bucket\/model<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-green\">\n<h3>22) Model Registry<\/h3>\n<p>Register models with metadata (version, metrics, lineage). Promote through stages, attach evaluation reports, and tie to endpoints. Enforce checks (schema, bias, safety) before production. Keep changelogs and rollback plans.<\/p>\n<pre><code class=\"mono\"># Track versions and link to PipelineJob runs & datasets<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>23) Data &#038; Model Lineage<\/h3>\n<p>Vertex ML Metadata records artifacts, executions, and contexts. This enables audits and reproducibility (\u201cthis model came from dataset X via pipeline Y\u201d). Integrate with Data Catalog for dataset governance.<\/p>\n<pre><code class=\"mono\"># Access lineage via Vertex console or Metadata APIs<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>24) Model Monitoring<\/h3>\n<p>Monitor prediction drift, data skew, and performance. Configure alerting via Cloud Monitoring. Capture request\/response samples, compute feature statistics, and trigger re-training when drift exceeds thresholds. 
For GenAI, log prompt\/response pairs for safety and quality reviews.<\/p>\n<pre><code class=\"mono\"># Concept: enable logging\/monitoring on Endpoint; export to BigQuery<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>25) Batch vs Online Prediction<\/h3>\n<p>Batch prediction is cost-efficient for large offline scoring jobs (e.g., nightly segments). Online endpoints serve latency-sensitive requests. Often both coexist: batch for bulk updates, online for personalization at request time. Keep model code identical across modes.<\/p>\n<pre><code class=\"mono\"># Batch predict\r\naip.BatchPredictionJob.create(job_display_name=\"score\", model_name=model.resource_name, instances_format=\"jsonl\", gcs_source=...)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>26) Cost Controls<\/h3>\n<p>Right-size machines\/accelerators, enable autoscaling, and set quotas\/budgets. Cache embeddings, reuse vector indexes, and stream responses. For pipelines, enable caching and shut down idle resources. Track per-project costs in BigQuery billing exports.<\/p>\n<pre><code class=\"mono\"># Budgets & alerts in Cloud Billing; labels for cost allocation<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>27) Testing ML Systems<\/h3>\n<p>Unit test data transformations; integration test pipelines; canary test endpoints. Maintain golden datasets and backtesting harnesses. For GenAI, build red-team suites and toxicity\/factuality tests. Automate in CI.<\/p>\n<pre><code class=\"mono\"># pytest + sample JSONL prompts; assert structure & policy scores<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>28) Governance &#038; Responsible AI<\/h3>\n<p>Document model cards, data sheets, and intended use. Apply bias checks, consent\/logging controls, PII handling, and safety filters. Provide user controls for opt-out and human escalation. 
Record decisions for audits.<\/p>\n<pre><code class=\"mono\"># Store governance artifacts in GCS with signed URLs for reviews<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>29) Hybrid &#038; Private Networking<\/h3>\n<p>Access on-prem\/private data via Private Service Connect\/VPC-SC. Place endpoints in regions near users\/data. Use CMEK for encryption and restrict egress with Cloud NAT + firewall rules. For strict environments, add approval gates.<\/p>\n<pre><code class=\"mono\"># VPC-SC perimeter with restricted services & projects<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>30) Q&#038;A \u2014 \u201cHow do I trigger retraining safely?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> Monitor drift and performance thresholds; when breached, kick off a PipelineJob that validates data schema, runs training, evaluates against golden sets, compares against champion, and only promotes if it beats guardrails. Use canary deployments with shadow traffic before full cutover.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 4: INTEGRATIONS & PATTERNS (31\u201340) ===================== --><\/p>\n<div class=\"section-title\">Section 4 \u2014 Integrations, Data Patterns, Examples<\/div>\n<div class=\"card bg-blue\">\n<h3>31) BigQuery ML vs Vertex AI<\/h3>\n<p>BQML trains models directly in SQL (linear, boosted trees, deep nets, ARIMA, XGBoost). Great for analysts and fast iteration. Vertex AI is better for custom training, GenAI, feature stores, vector search, and full MLOps. Combine them: train in BQML, serve via Vertex endpoints if needed.<\/p>\n<pre><code class=\"mono\">CREATE OR REPLACE MODEL ds.churn OPTIONS(MODEL_TYPE='LOGISTIC_REG') AS SELECT ...;<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-green\">\n<h3>32) Dataflow &#038; ETL<\/h3>\n<p>Use Dataflow (Apache Beam) for scalable ETL\/feature pipelines. Stream from Pub\/Sub to BigQuery and Feature Store; embed featurization and windowing. 
Keep schemas versioned and test with synthetic data.<\/p>\n<pre><code class=\"mono\"># Python Beam skeleton reading Pub\/Sub and writing to BQ<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>33) Pub\/Sub for Real-time Scoring<\/h3>\n<p>Publish events to Pub\/Sub; trigger Cloud Functions\/Run that call Vertex endpoints. Add retries with dead-letter topics, idempotency, and timeout budgets. Log request IDs for traceability.<\/p>\n<pre><code class=\"mono\"># Cloud Run handler calls endpoint.predict(payload)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>34) Cloud Run GenAI APIs<\/h3>\n<p>Wrap Gemini calls in a Cloud Run microservice with auth, rate limits, and caching. Stream SSE to the frontend for typing-effect UX. Keep system prompts in Config; rotate keys and apply quotas per tenant.<\/p>\n<pre><code class=\"mono\"># Flask\/FastAPI + vertexai SDK + streaming response<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>35) Document AI + RAG<\/h3>\n<p>Extract text\/structure with Document AI, chunk and index in Vector Search, then build a Gemini-powered Q&#038;A over your PDFs. Keep provenance links and confidence scores; redact PII if needed.<\/p>\n<pre><code class=\"mono\"># Pipeline: GCS -&gt; DocAI -&gt; chunks -&gt; embeddings -&gt; index<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>36) Images &#038; Vision APIs<\/h3>\n<p>Use Vertex Vision models for classification\/detection or tune for your labels. For generative images, call the appropriate model endpoints (when available in your region). Store prompts\/outputs for audit and safety review.<\/p>\n<pre><code class=\"mono\"># Upload labeled images to GCS, start AutoML Vision training<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>37) Code Assist &#038; Agents<\/h3>\n<p>Build internal code assistants with Gemini Code models, function calling, and repository retrieval. 
Add guardrails (no secrets), explain diffs, and propose patches. For support bots, combine conversational state with RAG and action tools (ticket systems).<\/p>\n<pre><code class=\"mono\"># Tool-enabled chat pipeline with user\/session context<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>38) Evaluations at Scale<\/h3>\n<p>Run batch evals in Pipelines against curated prompts. Capture metrics (accuracy, toxicity, refusal rates), store in BigQuery, and visualize in Looker Studio. Automate regression checks on every model or prompt change.<\/p>\n<pre><code class=\"mono\"># Pipeline step writes eval JSONL to BigQuery for dashboards<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>39) Multi-Region &#038; DR<\/h3>\n<p>Choose regions close to data\/users; replicate artifacts and indexes. Use separate projects for isolation, per-env service accounts, and org policies. Test failovers and rate-limit fallbacks in client apps.<\/p>\n<pre><code class=\"mono\"># Artifact Registry mirrors; dual endpoints with traffic split<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>40) Q&#038;A \u2014 \u201cHow to secure GenAI endpoints?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> Enforce IAM, private networking (PSC), request auth (ID tokens), quotas, and per-tenant limits. Apply safety filters, prompt wrapping, output validation (JSON schemas), and PII redaction. Log prompts\/outputs with data retention controls and build abuse monitoring.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 5: CHEATS, PITFALLS & INTERVIEW Q&A (41\u201350) ===================== --><\/p>\n<div class=\"section-title\">Section 5 \u2014 Cheats, Pitfalls, Interview Q&#038;A<\/div>\n<div class=\"card bg-blue\">\n<h3>41) Quickstart: Create Endpoint &#038; Predict (Python)<\/h3>\n<p>Initialize, upload, deploy, predict; tear down when done to save cost. 
Ensure service account has <code>aiplatform.user<\/code> and storage access; set region explicitly.<\/p>\n<pre><code class=\"mono\">from google.cloud import aiplatform as aip\r\naip.init(project=\"P\", location=\"us-central1\")\r\nm = aip.Model.upload(display_name=\"clf\", artifact_uri=\"gs:\/\/bucket\/model\")\r\nep = aip.Endpoint.create(display_name=\"clf-ep\")\r\nm.deploy(endpoint=ep, machine_type=\"n1-standard-2\")\r\nprint(ep.predict(instances=[{\"x\":1,\"y\":2}]).predictions)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-green\">\n<h3>42) Quickstart: Gemini REST (curl)<\/h3>\n<p>Call the generative endpoint via REST with OAuth access token. Prefer server-to-server calls; never expose tokens in the browser. Stream when building chats.<\/p>\n<pre><code class=\"mono\">curl -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\r\n  -H \"Content-Type: application\/json\" \\\r\n  https:\/\/...\/projects\/PROJECT\/locations\/us-central1\/publishers\/google\/models\/gemini-1.5-pro:generateContent \\\r\n  -d '{\"contents\":[{\"role\":\"user\",\"parts\":[{\"text\":\"Explain Vertex AI Pipelines\"}]}]}'<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>43) Prompt Template Pattern<\/h3>\n<p>Keep prompts in files with placeholders; inject variables server-side. Version prompts and track A\/B results. For JSON outputs, validate against a schema and retry with error-aware prompts.<\/p>\n<pre><code class=\"mono\">template = \"\"\"Role: helpful assistant.\r\nOutput JSON with keys: steps[], risks[].\r\nTopic: {topic}\"\"\"\r\nprompt = template.format(topic=\"Model Monitoring\")<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>44) RAG Chunking &#038; Metadata<\/h3>\n<p>Chunk by semantic boundaries, store title\/section\/source_url, and use hybrid retrieval (BM25 + vectors). Re-rank candidates before prompting. 
Add citations in the final answer and cache results.<\/p>\n<pre><code class=\"mono\"># Store metadata alongside vectors: {\"doc_id\": \"...\", \"section\": \"...\", \"url\": \"...\"}<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>45) Latency Tactics<\/h3>\n<p>Choose closer region, use \u201cflash\u201d models when feasible, enable streaming, cache embeddings, reuse HTTP connections, and pre-warm endpoints. For pipelines, use caching and parallelism; avoid tiny batch sizes.<\/p>\n<pre><code class=\"mono\"># HTTP keep-alive + connection pooling in your client<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>46) Cost Tactics<\/h3>\n<p>Batch non-urgent requests, cap max tokens, set per-user quotas, and use retrieval caches. Downshift model tiers for drafts and upgrade for finalization. Delete unused endpoints and indexes.<\/p>\n<pre><code class=\"mono\"># Track cost by labels: project, team, app, environment<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>47) Common Pitfalls<\/h3>\n<p>Forgetting region in SDK calls, mixing projects\/SA scopes, leaving endpoints running, no safety filters, missing retries\/timeouts, and unbounded prompt sizes. Fix with client wrappers, guardrails, and budgets.<\/p>\n<pre><code class=\"mono\"># Always set: aiplatform.init(project=\"...\", location=\"...\")<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>48) Production Checklist<\/h3>\n<p>IAM + network controls, logging\/metrics\/traces, eval gates, canary rollouts, budget alerts, data retention, incident runbooks, and continuous red-teaming.<\/p>\n<pre><code class=\"mono\"># Cloud Monitoring alerts: p95 latency, error rate, token spend<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>49) Reference Patterns<\/h3>\n<p>1) GenAI Chat with tools + RAG. 2) Classification endpoint + batch scoring. 3) Image AutoML + online detection. 4) Recommender with embeddings + vector search. 
5) Code assistant with repo retrieval.<\/p>\n<pre><code class=\"mono\"># Start simple, measure, iterate; codify patterns as reusable microservices<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald qa\">\n<h3>50) Interview Q&amp;A \u2014 20 Practical Questions (Expanded)<\/h3>\n<p><span class=\"q\">1)<\/span> Vertex AI vs BQML? \u2014 BQML for SQL-native modeling; Vertex for custom training, GenAI, vector search, and MLOps.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Vertex AI Pocket Book \u2014 Uplatz 50 in-depth cards \u2022 Wide layout \u2022 Readable examples \u2022 Interview Q&amp;A included Section 1 \u2014 Overview &#038; Building Blocks 1) What is Vertex <span class=\"readmore\"><a href=\"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/\">Read More &#8230;<\/a><\/span><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2462,2465],"tags":[],"class_list":["post-4427","post","type-post","status-publish","format-standard","hentry","category-pocket-book","category-vertex-ai"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google Vertex AI Pocket Book | Uplatz Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google Vertex AI Pocket Book | Uplatz Blog\" \/>\n<meta property=\"og:description\" content=\"Vertex AI Pocket Book \u2014 Uplatz 50 in-depth cards \u2022 Wide layout \u2022 Readable examples \u2022 Interview Q&amp;A included Section 1 \u2014 Overview &#038; 
Building Blocks 1) What is Vertex Read More ...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/\" \/>\n<meta property=\"og:site_name\" content=\"Uplatz Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/Uplatz-1077816825610769\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-08-09T12:42:14+00:00\" \/>\n<meta name=\"author\" content=\"uplatzblog\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@uplatz_global\" \/>\n<meta name=\"twitter:site\" content=\"@uplatz_global\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"uplatzblog\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/\"},\"author\":{\"name\":\"uplatzblog\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#\\\/schema\\\/person\\\/8ecae69a21d0757bdb2f776e67d2645e\"},\"headline\":\"Google Vertex AI Pocket Book\",\"datePublished\":\"2025-08-09T12:42:14+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/\"},\"wordCount\":1931,\"publisher\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#organization\"},\"articleSection\":[\"Pocket Book\",\"Vertex AI\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/\",\"name\":\"Google Vertex AI Pocket Book | Uplatz 
Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#website\"},\"datePublished\":\"2025-08-09T12:42:14+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/google-vertex-ai-pocket-book\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google Vertex AI Pocket Book\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/\",\"name\":\"Uplatz Blog\",\"description\":\"Uplatz is a global IT Training &amp; Consulting company\",\"publisher\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#organization\",\"name\":\"uplatz.com\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/wp-content\\\/uploads\\\/2016\\\/11\\\/Uplatz-Logo-Copy-2.png\",\"contentUrl\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/wp-content\\\/uploads\\\/2016\\\/11\\\/Uplatz-Logo-Copy-2.png\",\"width\":1280,\"height\":800,\"caption\":\"uplatz.com\"},\"image\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\
/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/Uplatz-1077816825610769\\\/\",\"https:\\\/\\\/x.com\\\/uplatz_global\",\"https:\\\/\\\/www.instagram.com\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/7956715?trk=tyah&amp;amp;amp;amp;trkInfo=clickedVertical:company,clickedEntityId:7956715,idx:1-1-1,tarId:1464353969447,tas:uplatz\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#\\\/schema\\\/person\\\/8ecae69a21d0757bdb2f776e67d2645e\",\"name\":\"uplatzblog\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g\",\"caption\":\"uplatzblog\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Google Vertex AI Pocket Book | Uplatz Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/","og_locale":"en_US","og_type":"article","og_title":"Google Vertex AI Pocket Book | Uplatz Blog","og_description":"Vertex AI Pocket Book \u2014 Uplatz 50 in-depth cards \u2022 Wide layout \u2022 Readable examples \u2022 Interview Q&amp;A included Section 1 \u2014 Overview &#038; Building Blocks 1) What is Vertex Read More ...","og_url":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/","og_site_name":"Uplatz Blog","article_publisher":"https:\/\/www.facebook.com\/Uplatz-1077816825610769\/","article_published_time":"2025-08-09T12:42:14+00:00","author":"uplatzblog","twitter_card":"summary_large_image","twitter_creator":"@uplatz_global","twitter_site":"@uplatz_global","twitter_misc":{"Written by":"uplatzblog"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/#article","isPartOf":{"@id":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/"},"author":{"name":"uplatzblog","@id":"https:\/\/uplatz.com\/blog\/#\/schema\/person\/8ecae69a21d0757bdb2f776e67d2645e"},"headline":"Google Vertex AI Pocket Book","datePublished":"2025-08-09T12:42:14+00:00","mainEntityOfPage":{"@id":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/"},"wordCount":1931,"publisher":{"@id":"https:\/\/uplatz.com\/blog\/#organization"},"articleSection":["Pocket Book","Vertex AI"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/","url":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/","name":"Google Vertex AI Pocket Book | Uplatz 
Blog","isPartOf":{"@id":"https:\/\/uplatz.com\/blog\/#website"},"datePublished":"2025-08-09T12:42:14+00:00","breadcrumb":{"@id":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/uplatz.com\/blog\/google-vertex-ai-pocket-book\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/uplatz.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Google Vertex AI Pocket Book"}]},{"@type":"WebSite","@id":"https:\/\/uplatz.com\/blog\/#website","url":"https:\/\/uplatz.com\/blog\/","name":"Uplatz Blog","description":"Uplatz is a global IT Training &amp; Consulting company","publisher":{"@id":"https:\/\/uplatz.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/uplatz.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/uplatz.com\/blog\/#organization","name":"uplatz.com","url":"https:\/\/uplatz.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/uplatz.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/uplatz.com\/blog\/wp-content\/uploads\/2016\/11\/Uplatz-Logo-Copy-2.png","contentUrl":"https:\/\/uplatz.com\/blog\/wp-content\/uploads\/2016\/11\/Uplatz-Logo-Copy-2.png","width":1280,"height":800,"caption":"uplatz.com"},"image":{"@id":"https:\/\/uplatz.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/Uplatz-1077816825610769\/","https:\/\/x.com\/uplatz_global","https:\/\/www.instagram.com\/","https:\/\/www.linkedin.com\/company\/7956715?trk=tyah&amp;amp;amp;amp;trkInfo=clickedVertical:company,clickedEntityId:7956715,idx:1-1-1,tarId:1464353969447,tas:upla
tz"]},{"@type":"Person","@id":"https:\/\/uplatz.com\/blog\/#\/schema\/person\/8ecae69a21d0757bdb2f776e67d2645e","name":"uplatzblog","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g","caption":"uplatzblog"}}]}},"_links":{"self":[{"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts\/4427","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/comments?post=4427"}],"version-history":[{"count":1,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts\/4427\/revisions"}],"predecessor-version":[{"id":4428,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts\/4427\/revisions\/4428"}],"wp:attachment":[{"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/media?parent=4427"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/categories?post=4427"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/tags?post=4427"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}