{"id":4431,"date":"2025-08-09T13:45:23","date_gmt":"2025-08-09T13:45:23","guid":{"rendered":"https:\/\/uplatz.com\/blog\/?p=4431"},"modified":"2025-08-09T13:59:24","modified_gmt":"2025-08-09T13:59:24","slug":"lightgbm-pocket-book","status":"publish","type":"post","link":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/","title":{"rendered":"LightGBM Pocket Book"},"content":{"rendered":"<p><!-- LightGBM Pocket Book \u2014 Uplatz (50 Cards, Wide Layout, Readable Code, Scoped Styles) --><\/p>\n<div style=\"margin:16px 0;\">\n<style>\n    .wp-lgbm-pb { font-family: Arial, sans-serif; max-width: 1320px; margin:0 auto; }\n    .wp-lgbm-pb .heading{\n      background: linear-gradient(135deg, #ecfdf5, #e0f2fe); \/* light green -> light blue *\/\n      color:#0f172a; padding:22px 24px; border-radius:14px;\n      text-align:center; margin-bottom:18px; box-shadow:0 8px 20px rgba(0,0,0,.08);\n      border:1px solid #cbd5e1;\n    }\n    .wp-lgbm-pb .heading h2{ margin:0; font-size:2.1rem; letter-spacing:.2px; }\n    .wp-lgbm-pb .heading p{ margin:6px 0 0; font-size:1.02rem; opacity:.9; }\n\n    \/* Wide, dense grid *\/\n    .wp-lgbm-pb .grid{\n      display:grid; gap:14px;\n      grid-template-columns: repeat(auto-fill, minmax(400px, 1fr));\n    }\n    @media (min-width:1200px){\n      .wp-lgbm-pb .grid{ grid-template-columns: repeat(3, 1fr); }\n    }\n\n    .wp-lgbm-pb .section-title{\n      grid-column:1\/-1; background:#f8fafc; border-left:8px solid #16a34a; \/* green *\/\n      padding:12px 16px; border-radius:10px; font-weight:700; color:#0f172a; font-size:1.08rem;\n      box-shadow:0 2px 8px rgba(0,0,0,.05); border:1px solid #e2e8f0;\n    }\n    .wp-lgbm-pb .card{\n      background:#ffffff; border-left:6px solid #16a34a;\n      padding:18px; border-radius:12px;\n      box-shadow:0 6px 14px rgba(0,0,0,.06);\n      transition:transform .12s ease, box-shadow .12s ease;\n      border:1px solid #e5e7eb;\n    }\n    .wp-lgbm-pb .card:hover{ transform: translateY(-3px); box-shadow:0 10px 22px rgba(0,0,0,.08); }\n    .wp-lgbm-pb .card h3{ margin:0 0 10px; font-size:1.12rem; color:#0f172a; }\n    .wp-lgbm-pb .card p{ margin:0; font-size:.96rem; color:#334155; line-height:1.62; }\n\n    \/* Color helpers *\/\n    .bg-blue { border-left-color:#0ea5e9 !important; background:#eef6ff !important; }\n    .bg-green{ border-left-color:#16a34a !important; background:#f0fdf4 !important; }\n    .bg-amber{ border-left-color:#f59e0b !important; background:#fffbeb !important; }\n    .bg-violet{ border-left-color:#8b5cf6 !important; background:#f5f3ff !important; }\n    .bg-rose{ border-left-color:#ef4444 !important; background:#fff1f2 !important; }\n    .bg-cyan{ border-left-color:#06b6d4 !important; background:#ecfeff !important; }\n    .bg-lime{ border-left-color:#22c55e !important; background:#ecfdf5 !important; }\n    .bg-orange{ border-left-color:#f97316 !important; background:#fff7ed !important; }\n    .bg-indigo{ border-left-color:#6366f1 !important; background:#eef2ff !important; }\n    .bg-emerald{ border-left-color:#10b981 !important; background:#ecfdf5 !important; }\n    .bg-slate{ border-left-color:#334155 !important; background:#f8fafc !important; }\n\n    \/* Utilities & code *\/\n    .tight ul{ margin:0; padding-left:18px; }\n    .tight li{ margin:4px 0; }\n    .mono{ font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace; }\n    .wp-lgbm-pb code{ background:#f1f5f9; padding:0 4px; border-radius:4px; border:1px solid #e2e8f0; }\n    .wp-lgbm-pb pre{\n      background:#f5f5f5; color:#111827; border:1px solid #e5e7eb;\n      padding:12px; border-radius:8px; overflow:auto; font-size:.92rem; line-height:1.55;\n    }\n    .q{font-weight:700;}\n    .qa p{ margin:8px 0; }\n  <\/style>\n<div class=\"wp-lgbm-pb\">\n<div class=\"heading\">\n<h2>LightGBM Pocket Book \u2014 Uplatz<\/h2>\n<p>50 in-depth cards \u2022 Wide layout \u2022 Readable examples \u2022 20+ Interview Q&amp;A
included<\/p>\n<\/p><\/div>\n<div class=\"grid\">\n      <!-- ===================== SECTION 1: FOUNDATIONS (1\u201310) ===================== --><\/p>\n<div class=\"section-title\">Section 1 \u2014 Foundations<\/div>\n<div class=\"card bg-green\">\n<h3>1) What is LightGBM?<\/h3>\n<p>LightGBM (by Microsoft) is a gradient boosting framework using tree-based learners optimized with histogram-based splitting, leaf-wise growth, and smart sampling. It\u2019s known for speed, memory efficiency, and strong accuracy on tabular data. Supports regression, binary\/multiclass classification, and ranking (LambdaRank).<\/p>\n<pre><code class=\"mono\">pip install lightgbm\r\n# Optional: GPU build requires proper toolchain & GPU libs<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-blue\">\n<h3>2) Why LightGBM vs XGBoost\/CatBoost?<\/h3>\n<p>LightGBM\u2019s histogram algorithm + leaf-wise growth can be very fast and accurate, especially on large, sparse datasets. XGBoost is a strong baseline with breadth of features and robust CPU\/GPU; CatBoost excels with categorical handling out-of-the-box. Pick based on data shape, categorical complexity, and infra.<\/p>\n<pre><code class=\"mono\"># scikit-learn API usage\r\nfrom lightgbm import LGBMClassifier, early_stopping\r\nclf = LGBMClassifier(n_estimators=500, learning_rate=0.05, num_leaves=64)\r\n# LightGBM &gt;= 4.0 takes early stopping as a callback, not a fit() argument\r\nclf.fit(X_train, y_train, eval_set=[(X_val, y_val)], eval_metric='auc',\r\n        callbacks=[early_stopping(stopping_rounds=50)])<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>3) Core Ideas: Histograms, Leaf-wise Growth<\/h3>\n<p>Features are bucketed into histograms to compute split gains quickly. LightGBM grows trees leaf-wise (best-first): it expands the leaf with the highest loss reduction.
This can improve accuracy but risks overfitting without constraints (e.g., <code>num_leaves<\/code>, <code>min_data_in_leaf<\/code>).<\/p>\n<pre><code class=\"mono\">params = {\"boosting_type\":\"gbdt\",\"num_leaves\":63,\"max_depth\":-1,\"min_data_in_leaf\":20}<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>4) Objectives &#038; Metrics<\/h3>\n<p>Common objectives: <code>regression<\/code>, <code>binary<\/code>, <code>multiclass<\/code>, <code>lambdarank<\/code>. Metrics: <code>rmse<\/code>, <code>mae<\/code>, <code>auc<\/code>, <code>logloss<\/code>, <code>multi_logloss<\/code>, <code>ndcg<\/code>, <code>map<\/code>. Set both explicitly to track the right signal.<\/p>\n<pre><code class=\"mono\">params = {\"objective\":\"binary\",\"metric\":[\"auc\",\"binary_logloss\"]}<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>5) Sklearn API vs Native API<\/h3>\n<p>Sklearn wrappers (<code>LGBMClassifier<\/code>, <code>LGBMRegressor<\/code>, etc.) integrate nicely with pipelines and CV. Native API uses <code>lgb.Dataset<\/code> and <code>lgb.train<\/code> for fine control (multiple validation sets, callbacks, custom objectives\/metrics).<\/p>\n<pre><code class=\"mono\">import lightgbm as lgb\r\ndtrain = lgb.Dataset(X_train, label=y_train, free_raw_data=False)\r\nbst = lgb.train(params, dtrain, num_boost_round=2000, valid_sets=[dtrain], valid_names=[\"train\"])<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>6) Handling Missing Values<\/h3>\n<p>LightGBM natively handles NaNs; it learns the optimal direction for missing values at each split. You generally don\u2019t need imputation unless it benefits downstream features. 
If imputing, encode \u201cmissingness\u201d separately so information isn\u2019t lost.<\/p>\n<pre><code class=\"mono\"># NaNs allowed in X; LightGBM handles them internally<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>7) Categorical Features<\/h3>\n<p>Provide integer-encoded categorical columns and mark them with <code>categorical_feature<\/code>. LightGBM uses optimal splits with built-in category handling (no one-hot needed). Keep categories stable across train\/valid\/test.<\/p>\n<pre><code class=\"mono\">cat_cols = [\"country\",\"device\",\"plan\"]\r\nfor c in cat_cols:\r\n    X_train[c] = X_train[c].astype(\"category\")\r\nclf = LGBMClassifier()\r\nclf.fit(X_train, y_train, categorical_feature=cat_cols)  # pass at fit() time<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>8) Imbalance: is_unbalance &#038; class_weight<\/h3>\n<p>For skewed classes, set <code>is_unbalance=true<\/code> or <code>class_weight={0:1, 1:w}<\/code> (keys must match the class labels). Choose one (not both). Also consider focal loss (custom) or proper sampling strategies.<\/p>\n<pre><code class=\"mono\">LGBMClassifier(is_unbalance=True)  # or class_weight={0:1, 1:10}<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>9) Early Stopping &#038; Validation<\/h3>\n<p>Use a validation set and the <code>early_stopping<\/code> callback to stop when the metric doesn\u2019t improve, then reuse <code>best_iteration_<\/code> for predictions. Keep the validation distribution realistic (time-aware split when appropriate).<\/p>\n<pre><code class=\"mono\">from lightgbm import early_stopping\r\nclf.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], eval_metric=\"auc\",\r\n        callbacks=[early_stopping(stopping_rounds=100)])\r\ny_pred = clf.predict_proba(X_te, num_iteration=clf.best_iteration_)[:,1]<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>10) Q&amp;A \u2014 \u201cWhy does LightGBM overfit sometimes?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> Leaf-wise growth can create deep, unbalanced trees capturing noise.
Control with <code>num_leaves<\/code>, <code>min_data_in_leaf<\/code>, <code>max_depth<\/code>, <code>feature_fraction<\/code>, <code>bagging_fraction<\/code>, and early stopping. Always validate on held-out data.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 2: DATA & FEATURES (11\u201320) ===================== --><\/p>\n<div class=\"section-title\">Section 2 \u2014 Data Preparation &#038; Feature Engineering<\/div>\n<div class=\"card bg-green\">\n<h3>11) Dataset &#038; Freeing Memory<\/h3>\n<p>With native API, build <code>lgb.Dataset<\/code> for train\/valid. Use <code>free_raw_data=False<\/code> if you need raw features later. Reuse constructed bins when creating validation sets to speed up.<\/p>\n<pre><code class=\"mono\">dtrain = lgb.Dataset(X_tr, y_tr, free_raw_data=False)\r\ndval   = lgb.Dataset(X_val, y_val, reference=dtrain)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-blue\">\n<h3>12) Feature Binning<\/h3>\n<p>LightGBM discretizes features into bins (controlled by <code>max_bin<\/code>). More bins capture finer splits but increase memory\/time. Typical defaults work; increase cautiously if you see underfitting on continuous features.<\/p>\n<pre><code class=\"mono\">params[\"max_bin\"] = 255  # try 255, 511 for more granularity<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>13) High Cardinality Categoricals<\/h3>\n<p>LightGBM can handle high-cardinality categoricals without one-hot, but performance may degrade. Consider frequency\/target encoding (carefully, with CV), grouping rare categories, or hashing to reduce noise.<\/p>\n<pre><code class=\"mono\"># Frequency encode before marking as category (optionally)\r\nfreq = X_tr['city'].value_counts()\r\nX_tr['city_freq'] = X_tr['city'].map(freq)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>14) Text &#038; Dates<\/h3>\n<p>For text, build numeric features (TF-IDF, embeddings) then feed to LightGBM. For timestamps, extract calendar and lag features. 
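<\/p>\n<p>The \u201clag features\u201d mentioned above can be sketched with a grouped shift; the <code>user_id<\/code>\/<code>ts<\/code>\/<code>amount<\/code> column names below are illustrative assumptions, not from this post:<\/p>

```python
import pandas as pd

# Toy event table; the user_id / ts / amount names are illustrative only
df = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "ts": pd.to_datetime([
        "2024-01-01", "2024-01-02", "2024-01-03",
        "2024-01-01", "2024-01-05",
    ]),
    "amount": [10.0, 12.0, 9.0, 30.0, 28.0],
})
df = df.sort_values(["user_id", "ts"])

# Lag feature: previous amount per user; the first event of each user gets
# NaN, which LightGBM can consume directly (see card 6)
df["amount_lag1"] = df.groupby("user_id")["amount"].shift(1)
```

<p>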
Beware data leakage: compute statistics (means, counts) within CV folds only.<\/p>\n<pre><code class=\"mono\">X[\"hour\"] = pd.to_datetime(X[\"ts\"]).dt.hour\r\nX[\"dow\"]  = pd.to_datetime(X[\"ts\"]).dt.dayofweek<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>15) Monotonic Constraints<\/h3>\n<p>Enforce monotone relationships (e.g., price \u2191 \u2192 risk \u2191). Provide array of <code>-1\/0\/+1<\/code> for each feature. Be sure features are scaled consistently with the constraint direction.<\/p>\n<pre><code class=\"mono\">params[\"monotone_constraints\"] = [1, 0, -1, 0, ...]  # length == n_features<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>16) Interaction Constraints (Practical Tip)<\/h3>\n<p>Constrain which features can interact (co-occur in splits) to reduce spurious rules and overfitting. Define feature groups that can split together; others cannot. Use sparingly for domain rules.<\/p>\n<pre><code class=\"mono\"># Example conceptual usage (check your LightGBM version support)\r\n# params[\"interaction_constraints\"] = [[\"age\",\"income\"], [\"country\",\"device\"]]<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>17) Grouped \/ Time-Aware Splits<\/h3>\n<p>For time series and groups (users\/sessions), use time-based splits or group-aware CV to avoid leakage. LightGBM doesn\u2019t enforce this \u2014 you must split appropriately before fitting.<\/p>\n<pre><code class=\"mono\"># sklearn TimeSeriesSplit or GroupKFold<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>18) Label Encoding vs One-Hot<\/h3>\n<p>Prefer integer-encoding for categorical features with <code>categorical_feature<\/code> set. One-hot can work but increases dimensionality. 
If using one-hot, beware of high cardinality and sparse effects.<\/p>\n<pre><code class=\"mono\"># pandas Categorical to integers is fine\r\nX[c] = X[c].astype(\"category\")<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>19) Feature Importance<\/h3>\n<p>Types: <b>gain<\/b> (total split gain), <b>split<\/b> (frequency). Gain is more informative. Combine with permutation importance and SHAP for robust insights.<\/p>\n<pre><code class=\"mono\">imp = clf.booster_.feature_importance(importance_type='gain')\r\nnames = clf.booster_.feature_name()<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>20) Q&amp;A \u2014 \u201cShould I scale features?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> Tree models don\u2019t need scaling\/normalization. However, scaling can help keep monotone-constraint directions sane, or matter when you combine with linear models. Generally, skip scaling for LightGBM.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 3: TRAINING & HYPERPARAMETERS (21\u201330) ===================== --><\/p>\n<div class=\"section-title\">Section 3 \u2014 Training, Hyperparameters &#038; Regularization<\/div>\n<div class=\"card bg-green\">\n<h3>21) num_leaves &#038; max_depth<\/h3>\n<p><code>num_leaves<\/code> controls model complexity (higher = more complex). <code>max_depth<\/code> can cap tree depth. Rule of thumb: keep <code>num_leaves<\/code> below <code>2^(max_depth)<\/code>, since a fully grown tree of that depth already has 2^max_depth leaves. Start modestly (31\u2013127) and tune.<\/p>\n<pre><code class=\"mono\">params.update(num_leaves=63, max_depth=-1)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-blue\">\n<h3>22) min_data_in_leaf &#038; min_sum_hessian_in_leaf<\/h3>\n<p>Regularize leaves by requiring a minimum number of samples or total Hessian.
Increase to reduce overfitting on small patterns\/noise.<\/p>\n<pre><code class=\"mono\">params.update(min_data_in_leaf=30, min_sum_hessian_in_leaf=1e-3)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>23) Learning Rate &#038; Estimators<\/h3>\n<p>Small <code>learning_rate<\/code> with larger <code>n_estimators<\/code> improves generalization but costs time. Typical: 0.03\u20130.1. Use early stopping to avoid guessing <code>n_estimators<\/code>.<\/p>\n<pre><code class=\"mono\">LGBMClassifier(learning_rate=0.05, n_estimators=5000)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>24) Subsampling: feature_fraction &#038; bagging_fraction<\/h3>\n<p>Randomly sample features and rows per iteration for regularization and speed. Pair with <code>bagging_freq<\/code> (e.g., 1 for every iteration).<\/p>\n<pre><code class=\"mono\">params.update(feature_fraction=0.8, bagging_fraction=0.8, bagging_freq=1)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>25) L1\/L2 Regularization<\/h3>\n<p><code>lambda_l1<\/code> and <code>lambda_l2<\/code> penalize leaf scores to reduce variance. Start with small values (e.g., 0\u20131). Tune alongside leaves and min_data_in_leaf.<\/p>\n<pre><code class=\"mono\">params.update(lambda_l1=0.0, lambda_l2=1.0)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>26) Boosting Types: gbdt, dart, rf, goss<\/h3>\n<p><b>gbdt<\/b> is standard gradient boosting. <b>dart<\/b> drops trees (like dropout) to reduce overfitting. <b>rf<\/b> builds random forests. 
<b>goss<\/b> (Gradient-based One-Side Sampling) keeps large-gradient samples and subsamples small-gradient ones to speed training.<\/p>\n<pre><code class=\"mono\">params[\"boosting_type\"] = \"dart\"  # try \"gbdt\", \"dart\", \"rf\", \"goss\"<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>27) DART Parameters<\/h3>\n<p>For <code>boosting_type=\"dart\"<\/code>, set <code>drop_rate<\/code>, <code>skip_drop<\/code>, and <code>max_drop<\/code>. DART can help generalization but may need more trees.<\/p>\n<pre><code class=\"mono\">params.update(boosting_type=\"dart\", drop_rate=0.1, skip_drop=0.5)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>28) GOSS &#038; EFB<\/h3>\n<p><b>GOSS<\/b> speeds training by smart sampling of gradients. <b>EFB<\/b> (Exclusive Feature Bundling) packs mutually exclusive features to reduce dimensionality. Both are internal speed-ups; you mainly control them via <code>boosting_type=\"goss\"<\/code> (in LightGBM &gt;= 4.0, <code>data_sample_strategy=\"goss\"<\/code>) and the defaults.<\/p>\n<pre><code class=\"mono\">params[\"boosting_type\"] = \"goss\"\r\n# LightGBM &gt;= 4.0: keep boosting_type=\"gbdt\" and set data_sample_strategy=\"goss\"<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>29) Cross-Validation<\/h3>\n<p>Use <code>lgb.cv<\/code> or sklearn CV with early stopping to pick the iteration count and avoid overfitting. Remember to propagate <code>best_iteration<\/code> to final training.<\/p>\n<pre><code class=\"mono\">cv = lgb.cv(params, dtrain, nfold=5, num_boost_round=10000,\r\n            callbacks=[lgb.early_stopping(stopping_rounds=200)], seed=42)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>30) Q&amp;A \u2014 \u201cWhat\u2019s a good starting grid?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> <code>learning_rate=0.05<\/code>, <code>num_leaves=63<\/code>, <code>min_data_in_leaf=30<\/code>, <code>feature_fraction=0.8<\/code>, <code>bagging_fraction=0.8<\/code>, <code>bagging_freq=1<\/code>, <code>lambda_l2=1<\/code>, with early stopping.
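<\/p>\n<p>Collected into a single native-API params dict, that starting grid looks like the sketch below (the <code>objective<\/code>\/<code>metric<\/code> entries are illustrative assumptions for a binary task):<\/p>

```python
# The starting grid from this card as one params dict; objective/metric
# are illustrative assumptions for a binary task, not prescribed values.
params = {
    "objective": "binary",
    "metric": "auc",
    "learning_rate": 0.05,
    "num_leaves": 63,
    "min_data_in_leaf": 30,
    "feature_fraction": 0.8,
    "bagging_fraction": 0.8,
    "bagging_freq": 1,
    "lambda_l2": 1.0,
    "seed": 42,
}

# Usage sketch (assumes lightgbm is installed and dtrain/dval are
# lgb.Dataset objects built from your own split):
# import lightgbm as lgb
# bst = lgb.train(params, dtrain, num_boost_round=10000, valid_sets=[dval],
#                 callbacks=[lgb.early_stopping(stopping_rounds=200)])
```

<p>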
Then adjust leaves, min_data_in_leaf, and regularization.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 4: RANKING, GPU, INTERPRETABILITY (31\u201340) ===================== --><\/p>\n<div class=\"section-title\">Section 4 \u2014 Ranking, GPU, Interpretability &#038; Ops<\/div>\n<div class=\"card bg-green\">\n<h3>31) Learning-to-Rank (LambdaRank)<\/h3>\n<p>Set <code>objective=\"lambdarank\"<\/code> with group\/query info. Metrics: <code>ndcg<\/code>, <code>map<\/code>. Provide <code>group<\/code> sizes (queries), optional <code>label_gain<\/code>, and evaluate with NDCG@k.<\/p>\n<pre><code class=\"mono\">params = {\"objective\":\"lambdarank\",\"metric\":\"ndcg\",\"ndcg_eval_at\":[5,10]}\r\n# dtrain = lgb.Dataset(X, label=y, group=query_sizes)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-blue\">\n<h3>32) Multiclass Classification<\/h3>\n<p>Use <code>objective=\"multiclass\"<\/code> with <code>num_class<\/code>. Metric: <code>multi_logloss<\/code> or <code>multi_error<\/code>. For imbalanced classes, consider class weights.<\/p>\n<pre><code class=\"mono\">params = {\"objective\":\"multiclass\",\"num_class\":5,\"metric\":\"multi_logloss\"}<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>33) GPU Training<\/h3>\n<p>Enable GPU with <code>device=\"gpu\"<\/code>. It accelerates histogram building and split finding; benefits vary by dataset. Ensure GPU build is installed and memory is sufficient.<\/p>\n<pre><code class=\"mono\">params.update(device=\"gpu\")  # or device_type in some builds<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>34) Calibration<\/h3>\n<p>Boosted probabilities can be miscalibrated. 
Apply Platt scaling or isotonic regression on validation predictions to calibrate probabilities for decision-making.<\/p>\n<pre><code class=\"mono\">from sklearn.calibration import CalibratedClassifierCV\r\ncal = CalibratedClassifierCV(clf, cv=\"prefit\", method=\"isotonic\").fit(X_val, y_val)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>35) SHAP Values<\/h3>\n<p>Use SHAP to interpret feature contributions per prediction. Combine with global importance for reliable narratives. Be mindful of correlated features.<\/p>\n<pre><code class=\"mono\">import shap\r\nexplainer = shap.TreeExplainer(clf.booster_)\r\nshap_values = explainer.shap_values(X_sample)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>36) Partial Dependence &#038; ICE<\/h3>\n<p>Explore feature effects with PDP\/ICE to see marginal impact. This complements SHAP for business stakeholders.<\/p>\n<pre><code class=\"mono\">from sklearn.inspection import PartialDependenceDisplay  # scikit-learn &gt;= 1.0\r\nPartialDependenceDisplay.from_estimator(clf, X_val, features=[0], kind=\"both\")  # PDP + ICE<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>37) Model Saving &#038; Inference<\/h3>\n<p>Save boosters to text or binary; load later for predictions. Keep feature order\/processing identical at inference.<\/p>\n<pre><code class=\"mono\">bst.save_model(\"model.txt\")\r\nbst2 = lgb.Booster(model_file=\"model.txt\")\r\npred = bst2.predict(X_te, num_iteration=bst2.best_iteration)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange\">\n<h3>38) ONNX &#038; Portability (Tip)<\/h3>\n<p>You can export to ONNX via converters for some pipelines, but many deploy LightGBM natively (Python\/C++\/CLI). Validate parity if converting.<\/p>\n<pre><code class=\"mono\"># Use onnxmltools\/skl2onnx with care; verify predictions match within tolerance<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-indigo\">\n<h3>39) Logging &#038; Reproducibility<\/h3>\n<p>Log params, seed, data versions, and code commit.
Fix <code>seed<\/code> for reproducibility (note: parallelism can still introduce tiny non-determinism). Save <code>best_iteration<\/code> and evaluation curves.<\/p>\n<pre><code class=\"mono\">params.update(seed=42, deterministic=True)<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-emerald\">\n<h3>40) Q&amp;A \u2014 \u201cWhy is my validation AUC unstable?\u201d<\/h3>\n<p><span class=\"q\">Answer:<\/span> Small data, leakage, or high-variance splits. Use stratified CV, larger validation sets, group\/time-aware splits, and average across folds. Fix seeds and log experiments.<\/p>\n<\/p><\/div>\n<p>      <!-- ===================== SECTION 5: RECIPES, CLI, CHECKLIST, INTERVIEW (41\u201350) ===================== --><\/p>\n<div class=\"section-title\">Section 5 \u2014 Recipes, CLI, Checklists &#038; Interview Q&amp;A<\/div>\n<div class=\"card bg-green\">\n<h3>41) Quick Binary Classification (sklearn)<\/h3>\n<p>Fast baseline with early stopping; tune leaves and regularization next.<\/p>\n<pre><code class=\"mono\">from lightgbm import LGBMClassifier, early_stopping\r\n\r\nclf = LGBMClassifier(\r\n  learning_rate=0.05, n_estimators=5000, num_leaves=63,\r\n  min_data_in_leaf=30, feature_fraction=0.8, bagging_fraction=0.8, bagging_freq=1\r\n)\r\nclf.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], eval_metric=\"auc\",\r\n        callbacks=[early_stopping(stopping_rounds=200)])<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-blue\">\n<h3>42) Native API with Multiple Validations<\/h3>\n<p>Track several validation sets (e.g., CV folds or time slices).
By default, early stopping watches every metric on every validation set and stops once one of them stalls; set <code>first_metric_only=True<\/code> in the callback to watch only the first metric.<\/p>\n<pre><code class=\"mono\">bst = lgb.train(params, dtrain, num_boost_round=10000,\r\n  valid_sets=[dtrain, dval1, dval2], valid_names=[\"train\",\"val1\",\"val2\"],\r\n  callbacks=[lgb.early_stopping(stopping_rounds=200), lgb.log_evaluation(period=100)])<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-amber\">\n<h3>43) CLI Training<\/h3>\n<p>Use the LightGBM CLI for reproducible training outside Python.<\/p>\n<pre><code class=\"mono\"># train.conf\r\ntask = train\r\nobjective = binary\r\nmetric = auc\r\ndata = train.svm\r\nvalid_data = valid.svm\r\nnum_leaves = 63\r\nlearning_rate = 0.05\r\nnum_boost_round = 10000\r\nearly_stopping_round = 200\r\n\r\n# Run\r\nlightgbm config=train.conf<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-violet\">\n<h3>44) Class Weights via sklearn<\/h3>\n<p>Use a <code>class_weight<\/code> mapping for imbalanced classes. Prefer weights over naive downsampling if data is scarce.<\/p>\n<pre><code class=\"mono\">LGBMClassifier(class_weight={0:1, 1:20})<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-rose\">\n<h3>45) Threshold Selection<\/h3>\n<p>Optimize the decision threshold on a validation set by metric (F1, Youden\u2019s J, cost). Don\u2019t assume 0.5 is optimal for imbalanced tasks.<\/p>\n<pre><code class=\"mono\">import numpy as np\r\nfrom sklearn.metrics import f1_score\r\n\r\nthr = np.linspace(0, 1, 101)\r\nbest = max(thr, key=lambda t: f1_score(y_val, (p_val &gt; t).astype(int)))<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-cyan\">\n<h3>46) Pipeline Integration<\/h3>\n<p>Combine preprocessing + model in an sklearn Pipeline. For categoricals, use encoders that emit integer categories and pass <code>categorical_feature<\/code> indexes to LightGBM.<\/p>\n<pre><code class=\"mono\">from sklearn.pipeline import Pipeline\r\npipe = Pipeline([(\"model\", LGBMClassifier())])<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-lime\">\n<h3>47) Drift Monitoring<\/h3>\n<p>Track feature distributions and outcome rates.
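<\/p>\n<p>A minimal sketch of PSI (Population Stability Index) logging for one numeric feature; this is a hand-rolled heuristic (not a LightGBM API), and values above roughly 0.25 are commonly read as significant drift:<\/p>

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index of one numeric feature between a
    reference sample (e.g., training data) and a current sample."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Bin edges from the reference distribution, widened to cover new values
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions so empty bins don't produce log(0)
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 10_000)      # training-time feature sample
drifted = rng.normal(0.5, 1.0, 10_000)  # mean-shifted production sample
```

<p>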
Recalibrate thresholds or retrain when drift detected. Keep a shadow model to compare.<\/p>\n<pre><code class=\"mono\"># Log KS-statistics\/PSI per feature over time<\/code><\/pre>\n<\/p><\/div>\n<div class=\"card bg-orange tight\">\n<h3>48) Production Checklist<\/h3>\n<ul>\n<li>Version data, code, params, and model<\/li>\n<li>Fix seed, log evals &amp; best_iteration<\/li>\n<li>Validate on time\/group splits<\/li>\n<li>Calibrate probs if needed<\/li>\n<li>Monitor drift, latency, errors<\/li>\n<li>Have rollback model &amp; thresholds<\/li>\n<\/ul><\/div>\n<div class=\"card bg-indigo\">\n<h3>49) Common Pitfalls<\/h3>\n<p>Overfitting from large <code>num_leaves<\/code>, mixing one-hot + categorical flags incorrectly, leakage in target\/freq encoding, using <code>is_unbalance<\/code> with class weights simultaneously, misaligned feature order at inference, ignoring group\/time splits.<\/p>\n<\/p><\/div>\n<div class=\"card bg-emerald qa\">\n<h3>50) Interview Q&amp;A \u2014 20 Practical Questions (Expanded)<\/h3>\n<p><b>1) Why LightGBM fast?<\/b> Histogram-based splits + leaf-wise growth + efficient sampling (EFB\/GOSS) reduce computations and memory.<\/p>\n<p><b>2) Leaf-wise vs level-wise?<\/b> Leaf-wise expands the highest-gain leaf first (can fit complex patterns); level-wise grows evenly. Leaf-wise risks overfitting without constraints.<\/p>\n<p><b>3) Role of num_leaves?<\/b> Controls complexity; too high overfits, too low underfits. Tune with <code>min_data_in_leaf<\/code> and regularization.<\/p>\n<p><b>4) max_depth usage?<\/b> Cap depth to limit tree size (esp. with high <code>num_leaves<\/code>) or leave at -1 and rely on other constraints.<\/p>\n<p><b>5) Imbalanced data approach?<\/b> Use <code>is_unbalance<\/code> or class weights, not both. 
Also tune thresholds, use AUC\/PR metrics, and calibrate.<\/p>\n<p><b>6) Why early stopping?<\/b> Prevents overfitting and selects <code>best_iteration<\/code> automatically, improving generalization.<\/p>\n<p><b>7) Categorical handling?<\/b> Integer-encode + specify <code>categorical_feature<\/code>; LightGBM finds optimal category splits without one-hot.<\/p>\n<p><b>8) When use DART?<\/b> To reduce overfitting via tree dropout. Expect more iterations; validate gains.<\/p>\n<p><b>9) GOSS benefit?<\/b> Speeds training by focusing on large-gradient samples while keeping overall gradient unbiased.<\/p>\n<p><b>10) feature_fraction vs bagging_fraction?<\/b> Feature subsampling reduces feature correlation\/overfit; bagging subsamples rows per iteration.<\/p>\n<p><b>11) L1 vs L2 regularization?<\/b> L1 encourages sparsity in leaf weights; L2 smooths weights. L2 is a common default.<\/p>\n<p><b>12) Monotone constraints use case?<\/b> Enforce domain monotonicity (pricing, risk). Helps trust &amp; compliance.<\/p>\n<p><b>13) Ranking setup?<\/b> Use <code>lambdarank<\/code> with group\/query sizes; evaluate <code>ndcg@k<\/code>. Ensure no leakage across groups.<\/p>\n<p><b>14) Probability calibration?<\/b> Use isotonic\/Platt on validation outputs for better decision thresholds.<\/p>\n<p><b>15) Importance types?<\/b> <code>gain<\/code> (split gain sum) is preferred; <code>split<\/code> counts frequency. Use SHAP\/permutation for robustness.<\/p>\n<p><b>16) Handling leakage in encodings?<\/b> Compute encodings within CV folds; never use global target stats that \u201cpeek\u201d at validation\/test.<\/p>\n<p><b>17) GPU worth it?<\/b> Helps on large, wide datasets; speedups vary. 
Validate memory usage and parity.<\/p>\n<p><b>18) Time series best practice?<\/b> Time-based split, lag features, rolling stats within train windows, no shuffling.<\/p>\n<p><b>19) Reproducibility?<\/b> Fix seeds, log everything, pin versions; accept small nondeterminism from parallel ops.<\/p>\n<p><b>20) Deployment gotchas?<\/b> Preserve feature order\/types, same preprocessing, same categorical mapping, use <code>best_iteration<\/code> at inference.<\/p>\n<\/p><\/div>\n<\/p><\/div>\n<\/p><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>LightGBM Pocket Book \u2014 Uplatz 50 in-depth cards \u2022 Wide layout \u2022 Readable examples \u2022 20+ Interview Q&amp;A included Section 1 \u2014 Foundations 1) What is LightGBM? LightGBM (by Microsoft) <span class=\"readmore\"><a href=\"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/\">Read More &#8230;<\/a><\/span><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2443,2462],"tags":[],"class_list":["post-4431","post","type-post","status-publish","format-standard","hentry","category-lightgbm","category-pocket-book"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>LightGBM Pocket Book | Uplatz Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"LightGBM Pocket Book | Uplatz Blog\" \/>\n<meta property=\"og:description\" content=\"LightGBM Pocket Book \u2014 Uplatz 50 in-depth cards \u2022 Wide layout \u2022 Readable examples \u2022 20+ Interview Q&amp;A included Section 1 
\u2014 Foundations 1) What is LightGBM? LightGBM (by Microsoft) Read More ...\" \/>\n<meta property=\"og:url\" content=\"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/\" \/>\n<meta property=\"og:site_name\" content=\"Uplatz Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/Uplatz-1077816825610769\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-08-09T13:45:23+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-08-09T13:59:24+00:00\" \/>\n<meta name=\"author\" content=\"uplatzblog\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@uplatz_global\" \/>\n<meta name=\"twitter:site\" content=\"@uplatz_global\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"uplatzblog\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/\"},\"author\":{\"name\":\"uplatzblog\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#\\\/schema\\\/person\\\/8ecae69a21d0757bdb2f776e67d2645e\"},\"headline\":\"LightGBM Pocket Book\",\"datePublished\":\"2025-08-09T13:45:23+00:00\",\"dateModified\":\"2025-08-09T13:59:24+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/\"},\"wordCount\":1616,\"publisher\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#organization\"},\"articleSection\":[\"LightGBM\",\"Pocket 
Book\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/\",\"name\":\"LightGBM Pocket Book | Uplatz Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#website\"},\"datePublished\":\"2025-08-09T13:45:23+00:00\",\"dateModified\":\"2025-08-09T13:59:24+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/lightgbm-pocket-book\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"LightGBM Pocket Book\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/\",\"name\":\"Uplatz Blog\",\"description\":\"Uplatz is a global IT Training &amp; Consulting 
company\",\"publisher\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#organization\",\"name\":\"uplatz.com\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/wp-content\\\/uploads\\\/2016\\\/11\\\/Uplatz-Logo-Copy-2.png\",\"contentUrl\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/wp-content\\\/uploads\\\/2016\\\/11\\\/Uplatz-Logo-Copy-2.png\",\"width\":1280,\"height\":800,\"caption\":\"uplatz.com\"},\"image\":{\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/Uplatz-1077816825610769\\\/\",\"https:\\\/\\\/x.com\\\/uplatz_global\",\"https:\\\/\\\/www.instagram.com\\\/\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/7956715?trk=tyah&amp;amp;amp;amp;trkInfo=clickedVertical:company,clickedEntityId:7956715,idx:1-1-1,tarId:1464353969447,tas:uplatz\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/uplatz.com\\\/blog\\\/#\\\/schema\\\/person\\\/8ecae69a21d0757bdb2f776e67d2645e\",\"name\":\"uplatzblog\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/7f814c72279199f59ded4
418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g\",\"caption\":\"uplatzblog\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"LightGBM Pocket Book | Uplatz Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/","og_locale":"en_US","og_type":"article","og_title":"LightGBM Pocket Book | Uplatz Blog","og_description":"LightGBM Pocket Book \u2014 Uplatz 50 in-depth cards \u2022 Wide layout \u2022 Readable examples \u2022 20+ Interview Q&amp;A included Section 1 \u2014 Foundations 1) What is LightGBM? LightGBM (by Microsoft) Read More ...","og_url":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/","og_site_name":"Uplatz Blog","article_publisher":"https:\/\/www.facebook.com\/Uplatz-1077816825610769\/","article_published_time":"2025-08-09T13:45:23+00:00","article_modified_time":"2025-08-09T13:59:24+00:00","author":"uplatzblog","twitter_card":"summary_large_image","twitter_creator":"@uplatz_global","twitter_site":"@uplatz_global","twitter_misc":{"Written by":"uplatzblog","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/#article","isPartOf":{"@id":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/"},"author":{"name":"uplatzblog","@id":"https:\/\/uplatz.com\/blog\/#\/schema\/person\/8ecae69a21d0757bdb2f776e67d2645e"},"headline":"LightGBM Pocket Book","datePublished":"2025-08-09T13:45:23+00:00","dateModified":"2025-08-09T13:59:24+00:00","mainEntityOfPage":{"@id":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/"},"wordCount":1616,"publisher":{"@id":"https:\/\/uplatz.com\/blog\/#organization"},"articleSection":["LightGBM","Pocket Book"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/","url":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/","name":"LightGBM Pocket Book | Uplatz Blog","isPartOf":{"@id":"https:\/\/uplatz.com\/blog\/#website"},"datePublished":"2025-08-09T13:45:23+00:00","dateModified":"2025-08-09T13:59:24+00:00","breadcrumb":{"@id":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/uplatz.com\/blog\/lightgbm-pocket-book\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/uplatz.com\/blog\/"},{"@type":"ListItem","position":2,"name":"LightGBM Pocket Book"}]},{"@type":"WebSite","@id":"https:\/\/uplatz.com\/blog\/#website","url":"https:\/\/uplatz.com\/blog\/","name":"Uplatz Blog","description":"Uplatz is a global IT Training &amp; Consulting 
company","publisher":{"@id":"https:\/\/uplatz.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/uplatz.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/uplatz.com\/blog\/#organization","name":"uplatz.com","url":"https:\/\/uplatz.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/uplatz.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/uplatz.com\/blog\/wp-content\/uploads\/2016\/11\/Uplatz-Logo-Copy-2.png","contentUrl":"https:\/\/uplatz.com\/blog\/wp-content\/uploads\/2016\/11\/Uplatz-Logo-Copy-2.png","width":1280,"height":800,"caption":"uplatz.com"},"image":{"@id":"https:\/\/uplatz.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/Uplatz-1077816825610769\/","https:\/\/x.com\/uplatz_global","https:\/\/www.instagram.com\/","https:\/\/www.linkedin.com\/company\/7956715?trk=tyah&amp;amp;amp;amp;trkInfo=clickedVertical:company,clickedEntityId:7956715,idx:1-1-1,tarId:1464353969447,tas:uplatz"]},{"@type":"Person","@id":"https:\/\/uplatz.com\/blog\/#\/schema\/person\/8ecae69a21d0757bdb2f776e67d2645e","name":"uplatzblog","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/7f814c72279199f59ded4418a8653ad15f5f8904ac75e025a4e2abe24d58fa5d?s=96&d=mm&r=g","caption":"uplatzblog"}}]}},"_links":{"self":[{"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts\/4431","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"hr
ef":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/comments?post=4431"}],"version-history":[{"count":3,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts\/4431\/revisions"}],"predecessor-version":[{"id":4443,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/posts\/4431\/revisions\/4443"}],"wp:attachment":[{"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/media?parent=4431"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/categories?post=4431"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/uplatz.com\/blog\/wp-json\/wp\/v2\/tags?post=4431"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
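The leakage-free time-series recipe from card 18 can be sketched in Python. This is a minimal illustration under stated assumptions: the toy series, lag counts, and 80/20 split ratio are arbitrary, and the LightGBM fit itself is omitted (the resulting frame would feed `lgb.Dataset` / `LGBMRegressor` unchanged).

```python
# Sketch: leakage-free lag/rolling features plus a time-based split.
# Each engineered row may only see values strictly in its past.
import numpy as np
import pandas as pd

def make_supervised(y, lags=(1, 2, 3), roll=3):
    """Build lag and rolling-mean features from past values only."""
    df = pd.DataFrame({"y": y})
    for k in lags:
        df[f"lag_{k}"] = df["y"].shift(k)          # value k steps in the past
    # shift(1) before rolling so the window never touches the current target
    df[f"rollmean_{roll}"] = df["y"].shift(1).rolling(roll).mean()
    return df.dropna()                              # drop warm-up rows

y = pd.Series(np.arange(20, dtype=float))           # toy monotone series
data = make_supervised(y)

# Time-based split: no shuffling — train on the past, validate on the future
cut = int(len(data) * 0.8)
train, valid = data.iloc[:cut], data.iloc[cut:]
assert train.index.max() < valid.index.min()        # no temporal overlap
```

The same ordering discipline carries into card 20: at inference, rebuild these features with identical code and call `predict(..., num_iteration=booster.best_iteration)`.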