Checkmarx Pocket Book — Uplatz

50 deep-dive flashcards • Wide layout • Fewer scrolls • 20+ Interview Q&A • Readable code examples

Section 1 — Fundamentals

1) What is Checkmarx?

Checkmarx is an Application Security Testing (AST) platform that helps teams find and remediate vulnerabilities across source code, open‑source dependencies, IaC templates, and more. It supports shift‑left security with developer‑centric tooling, CI/CD integrations, and policies that gate builds. Common use cases: securing microservices, mobile apps, cloud‑native stacks, and regulated workloads that require continuous scanning and reporting.

# Quick feel: run the AST CLI (conceptual)
checkmarx ast scan --project MyApp --src ./ --branch main

2) Why Checkmarx? Core Strengths & Tradeoffs

Strengths: Broad scanner coverage (SAST, SCA, IaC, secrets), enterprise‑grade governance, and deep CI/CD integrations. Developer features (PR feedback, incremental scans, training) reduce friction. Tradeoffs: Scans require tuning/presets to control noise, and advanced rules/policies need security expertise. Start with organization presets and iterate with audit feedback to balance signal and speed.

# Create a security gate idea (pseudo)
Policy: fail build if severity ≥ High and confidence ≥ Medium and new issues > 0

3) AST: Mental Model

AST groups complementary analyzers: SAST for data‑/control‑flow in code, SCA for vulnerable dependencies & licenses, IaC for misconfigurations, and optional dynamic/API checks. Treat each as a signal; combine them with policies and risk acceptance. Optimize for developer loop time (fast incremental scans) and nightly/full scans for thorough coverage.

# Typical cadence
- PR: quick SAST/SCA + policy gate
- Nightly: full SAST + IaC
- Weekly: dependency updates + license review

4) Platform Components

Core pieces often include: SAST engine, SCA (OSS risk), IaC scanning for Terraform/Cloud templates, secrets detection, a unified dashboard, and APIs/CLI for automation. Many orgs also use developer training modules that map lessons to detected CWEs, helping teams remediate effectively.

# CLI targets (illustrative)
checkmarx projects list
checkmarx ast results --project MyApp --latest

5) Checkmarx vs Other Tools

Compared to single‑purpose tools, a unified AST platform centralizes policy, reporting, and governance while still offering language coverage and CI hooks. Alternatives may excel in a niche (e.g., dependency risk only). Many enterprises pair a platform like Checkmarx with additional signals (DAST, container scan) and feed results to central risk dashboards.

-- Goal: a single source of truth for vulns
-- Inputs: SAST, SCA, IaC, Secrets → Policy → Build gate

6) Licensing & Deployment Options

Supports SaaS and self‑managed deployments. Choose based on data residency, network isolation, and ops capacity. For self‑managed, plan for scanners, API, queue, and storage resources, and integrate SSO and logging from day one.

# Self‑managed planning (conceptual)
- SSO (OIDC/SAML)
- Private runners/agents
- Log shipping to SIEM

7) Projects, Presets & Branches

Group repositories into projects, assign language presets (rulesets), and scan per branch. Start with vendor presets, then tune false positives and performance by enabling/disabling queries. Keep presets versioned and reviewed by AppSec.

# Pseudo commands
checkmarx projects create --name MyApp
checkmarx presets clone --from default --name my-preset

8) Releases & Compatibility

Lock your CLI/scanner versions in CI to ensure reproducible results. Validate upgrades in a staging pipeline, compare baseline findings, and then roll out org‑wide. Keep language plugins up to date for new frameworks and updated source/sink definitions.

# Pin with a container image
security/checkmarx-cli:1.x
# or download versioned binaries in CI

9) Organizations, Teams & RBAC

Map teams to repositories/services and control access via RBAC. Limit who can change presets/policies; allow developers to triage findings on their projects. Use SSO groups to auto‑provision roles and audit changes.

Role examples: OrgAdmin, ProjectAdmin, Developer, Viewer

10) Q&A — “How do I balance speed vs accuracy?”

Answer: Use incremental scans and PR checks for fast feedback, then schedule full scans off the critical path. Tune presets by language, gate only on new High/Critical issues, and surface developer education links to reduce noise over time.

Section 2 — Core Scanners & Rules

11) SAST Basics

Static Application Security Testing (SAST) analyzes source code or bytecode to identify flow‑based issues such as SQLi, XSS, SSRF, path traversal, and insecure deserialization. It models sources, sanitizers, and sinks. Each result includes a CWE, severity, confidence, and a data‑flow trace.

# CLI (illustrative)
checkmarx ast sast --project MyApp --src . --language java
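The source → sanitizer → sink model can be felt in a few lines of plain Python (a generic illustration, not Checkmarx output): the unsafe function concatenates untrusted input into SQL, which is exactly the flow a SAST trace would report; the parameterized version breaks it.

```python
import sqlite3

# In-memory DB with one row, just to make the injection observable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Tainted flow: untrusted input concatenated into SQL (the SQLi sink).
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver binds the value, breaking the flow.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # [('alice',)]: every row leaks
print(find_user_safe("' OR '1'='1"))    # []: injected text stays a literal
```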

12) SCA (Open‑Source Risk)

Software Composition Analysis detects vulnerable dependencies and license risks. It parses manifests/lockfiles, maps versions to known advisories, and proposes upgrades or temporary suppressions. Pair with Renovate/Dependabot to auto‑remediate.

checkmarx ast sca --project MyApp --src .
# Focus on new vulns introduced by your PR
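Under the hood, SCA is a version comparison against an advisory feed. A minimal sketch in Python; the advisory table is hand-rolled here, and real feeds carry full version ranges, CVSS scores, and fix metadata:

```python
# Toy advisory table: package -> (first fixed version, advisory ID).
ADVISORIES = {
    "lodash": ("4.17.21", "CVE-2021-23337"),
    "minimist": ("1.2.6", "CVE-2021-44906"),
}

def parse_version(v):
    # Naive dotted-integer comparison; real SCA handles semver ranges.
    return tuple(int(p) for p in v.split("."))

def audit(dependencies):
    """Return (name, advisory) pairs for pinned versions below the fix."""
    findings = []
    for name, version in dependencies.items():
        if name in ADVISORIES:
            fixed, advisory = ADVISORIES[name]
            if parse_version(version) < parse_version(fixed):
                findings.append((name, advisory))
    return findings

print(audit({"lodash": "4.17.20", "minimist": "1.2.6", "left-pad": "1.3.0"}))
# [('lodash', 'CVE-2021-23337')]
```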

13) IaC & Cloud Config

Scan Terraform, CloudFormation, Kubernetes YAML, and similar for misconfigurations (public buckets, open security groups, privilege escalation). Shift‑left: scan PRs before merge and enforce baselines per environment.

checkmarx ast iac --path infra/
-- Example rule: deny 0.0.0.0/0 on ingress
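The deny rule above amounts to walking parsed resources and flagging world-open CIDRs. A toy version in Python against a simplified, Terraform-plan-like shape (the JSON layout here is invented for illustration):

```python
import json

def open_ingress_findings(resources):
    # Flag any ingress rule that allows 0.0.0.0/0 (the whole internet).
    findings = []
    for res in resources:
        for rule in res.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append((res["name"], rule.get("port")))
    return findings

resources = json.loads("""[
  {"name": "web_sg", "ingress": [{"port": 443,  "cidr_blocks": ["0.0.0.0/0"]}]},
  {"name": "db_sg",  "ingress": [{"port": 5432, "cidr_blocks": ["10.0.0.0/8"]}]}
]""")
print(open_ingress_findings(resources))  # [('web_sg', 443)]
```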

14) API & Web App Checks

Augment static checks with API specification validation (e.g., deriving the attack surface from an OpenAPI spec) and dynamic tests where appropriate. Use static findings to prioritize endpoints and enforce authentication/authorization patterns in code.

# Concept: validate OpenAPI
checkmarx ast api --spec api.yaml --project MyApp

15) Secrets Detection

Identify hardcoded credentials, tokens, and keys in code and history. Block secrets at PR time, rotate exposed keys, and add pre‑commit hooks to prevent recurrence. Integrate with vaults to remove secrets from code.

checkmarx ast secrets --src .
# Add pre‑commit hook to scan staged files
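Pattern-based secret detection is essentially regex (plus entropy heuristics) over each line. A minimal Python sketch with a few well-known token shapes; real scanners ship hundreds of patterns and also scan git history:

```python
import re

# A few common token shapes; illustrative, not exhaustive.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    # Report (line number, pattern name) for every match.
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

# AWS's documented example key, safe to use in samples.
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\nurl = "https://example.com"\n'
print(scan_text(sample))  # [(1, 'aws_access_key')]
```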

16) Query Language & Rules

Security queries capture patterns of insecure flows. Start with vendor presets, then craft custom rules for frameworks, internal libraries, and company‑specific sinks/sanitizers. Version and test rules alongside application code.

# Pseudo rule idea
RULE: Java SQLi → source(req.getParameter) → sink(Statement.execute)

17) Taint Flow Modeling

Taint analysis traces untrusted data from sources to sinks through sanitizers/validators. Accurate models depend on recognizing framework methods (e.g., Spring, Express) and your helper libs. Extend models to reduce false positives and catch real issues.

source: HttpServletRequest#getParameter
sanitizer: PreparedStatement#setString
sink: Statement#execute

18) Custom Rules & FP Tuning

Suppress noisy findings with narrowly scoped ignores, not global disables. Prefer code fixes or sanitizer annotations over blanket suppressions. Keep an audit log explaining why each suppression is safe.

// Example inline suppression (conceptual)
// checkmarx:ignore-next-line reason="validated by schema"
execute(query)

19) Severity, Confidence & Prioritization

Prioritize by severity (impact), confidence (likelihood), exploitability, and reachability. Gate builds on new high‑risk findings to avoid legacy backlogs blocking delivery. Use ticket aging/SLA to drive remediation.

Policy example: New High/Critical = fail; Medium = warn; Low = pass
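One way to turn severity, confidence, and reachability into a triage order is a composite score; the weights below are illustrative, not a Checkmarx formula:

```python
# Higher score = triage first. Weights are assumptions for the sketch.
SEVERITY = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 2, "LOW": 1}
CONFIDENCE = {"HIGH": 1.0, "MEDIUM": 0.6, "LOW": 0.3}

def risk_score(finding):
    base = SEVERITY[finding["severity"]] * CONFIDENCE[finding["confidence"]]
    if finding.get("reachable"):        # taint trace reaches an entry point
        base *= 2
    if finding.get("internet_facing"):  # asset criticality multiplier
        base *= 1.5
    return base

findings = [
    {"id": "A", "severity": "HIGH", "confidence": "MEDIUM", "reachable": True},
    {"id": "B", "severity": "CRITICAL", "confidence": "LOW"},
]
# A reachable High/Medium outranks an unreachable Critical/Low here.
ranked = sorted(findings, key=risk_score, reverse=True)
print([f["id"] for f in ranked])  # ['A', 'B']
```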

20) Q&A — “Should I enable every rule?”

Answer: No. Start with a curated baseline tuned to your languages/frameworks. Expand gradually as teams harden code. Over‑broad rulesets cause alert fatigue and increase MTTR.

Section 3 — CI/CD Integration & Dev Workflow

21) CLI & Auth

Use the vendor CLI in CI. Authenticate via token/SSO, set the project key, and configure branch context. Cache the binary and language caches to accelerate jobs.

checkmarx auth --api-url $CX_URL --tenant $TENANT --token $TOKEN
checkmarx ast scan --project MyApp --branch $CI_COMMIT_BRANCH

22) GitHub Actions

Add jobs that run on pull_request, upload SARIF/annotations, and enforce policies. Reuse composite actions across repos and pin action SHAs for supply‑chain safety.

name: security
on: [pull_request]
jobs:
  checkmarx:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci --ignore-scripts
      - run: checkmarx ast scan --project ${{ github.repository }} --src .

23) GitLab CI

Run scans in merge requests and block on policies. Use artifacts to persist results and dashboards to visualize risk by group/project.

security:
  image: security/checkmarx-cli:1
  script:
    - checkmarx ast scan --project $CI_PROJECT_PATH --src .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

24) Jenkins Pipelines

Install CLI on agents or use a container step. Fail stages on gated findings and publish HTML/SARIF to artifacts. Parameterize project/branch to reuse pipeline across services.

pipeline {
  stages {
    stage('SAST') {
      steps { sh 'checkmarx ast scan --project MyApp --src . --branch ${BRANCH_NAME}' }
    }
  }
}

25) Azure DevOps

Use pipelines to run scans on PRs and main. Post comments to pull requests and create work items for high‑risk issues. Store tokens in variable groups or Key Vault.

- stage: Security
  jobs:
    - job: SAST
      steps:
        - script: checkmarx ast scan --project $(Build.Repository.Name) --src .

26) Bitbucket Pipelines

Containerize the CLI and run it on pull requests. Use the build status to block merges when policies fail. Cache dependencies to speed up scans.

pipelines:
  pull-requests:
    '**':
      - step:
          image: security/checkmarx-cli:1
          script:
            - checkmarx ast scan --project $BITBUCKET_REPO_FULL_NAME --src .

27) PR Decoration & SARIF

Publish SARIF/annotations so developers see findings inline. Link to training for each CWE and suggest safe fixes or framework idioms (e.g., parameterized queries, encoding/escaping).

# Example upload step (conceptual)
checkmarx ast results --format sarif > results.sarif
upload-sarif results.sarif
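SARIF 2.1.0 is a documented standard (results live under runs[].results[] with a level per finding), so post-processing an export is straightforward. A small Python sketch with made-up finding data:

```python
import json

# Minimal but structurally valid SARIF 2.1.0; finding data is invented.
sarif = json.loads("""{
  "version": "2.1.0",
  "runs": [{"results": [
    {"ruleId": "CWE-89", "level": "error",
     "message": {"text": "SQL injection"}},
    {"ruleId": "CWE-79", "level": "warning",
     "message": {"text": "Reflected XSS"}}
  ]}]
}""")

def count_by_level(doc):
    # Tally results by SARIF level across all runs.
    counts = {}
    for run in doc["runs"]:
        for result in run.get("results", []):
            level = result.get("level", "warning")  # SARIF's default level
            counts[level] = counts.get(level, 0) + 1
    return counts

print(count_by_level(sarif))  # {'error': 1, 'warning': 1}
```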

28) Incremental Scanning

Scan only changed files/paths to reduce PR latency while maintaining full scans on schedules. Combine with caching and per‑language build artifacts to reach sub‑minute feedback where possible.

checkmarx ast scan --incremental --base main --head $CI_COMMIT_SHA

29) Baselines, Gates & SLAs

Set a baseline on main, gate PRs on new High/Critical issues only, and track remediation SLAs by severity. Celebrate burn‑down in dashboards and add security debt to planning.

# Policy (pseudo)
new.high > 0 → fail
new.medium > 5 → warn
license: disallow GPL-3.0 in prod
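The severity thresholds in the pseudo-policy above can be evaluated in a few lines: diff current findings against the baseline by stable ID, then gate only on what is new. A sketch in Python, where finding IDs stand in for whatever fingerprint your scanner emits:

```python
def evaluate_gate(baseline_ids, current):
    # Only findings absent from the baseline count against the gate.
    new = [f for f in current if f["id"] not in baseline_ids]
    new_high = [f for f in new if f["severity"] in ("HIGH", "CRITICAL")]
    new_medium = [f for f in new if f["severity"] == "MEDIUM"]
    if new_high:
        return "fail"
    if len(new_medium) > 5:
        return "warn"
    return "pass"

baseline = {"F-1", "F-2"}  # legacy debt: tracked, not gated
current = [
    {"id": "F-1", "severity": "HIGH"},    # pre-existing, ignored by the gate
    {"id": "F-9", "severity": "MEDIUM"},  # new, but below the warn threshold
]
print(evaluate_gate(baseline, current))  # pass
```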

30) Q&A — “How to prevent security from blocking delivery?”

Answer: Gate on new risk, use incremental scans, provide inline guidance, and track SLAs. Reserve stricter gates for high‑risk services while giving others a grace period with strong observability.

Section 4 — Governance, Risk & Reporting

31) Policies & Thresholds

Define organization policies: severities, confidence levels, license allow/deny lists, and environment‑specific baselines. Version them as code and enforce via CI to keep behavior consistent across repos.

policy:
  fail_on:
    - severity: HIGH
      scope: new
    - license: GPL-3.0

32) Triage & Workflow

Route findings to code owners, auto‑create tickets for High issues, and permit time‑boxed suppressions with justification. Require a second reviewer for dismissals, and keep an audit trail for compliance.

# Ticket fields
component, CWE, file:line, severity, fix suggestion, SLA

33) Compliance Mapping

Map controls/findings to frameworks (OWASP ASVS, ISO 27001, SOC 2). Use reports showing coverage over time, remediation trends, and policy adherence. Link code reviews and training completion to demonstrate due diligence.

report --framework OWASP-ASVS --period last-90d

34) Reports & Dashboards

Dashboards should answer: What’s our risk by service? What’s new vs legacy? How fast do teams remediate? Provide filters by org/team/repo/label and export PDFs/CSV for audits.

metrics: p95 PR scan time, new vulns/week, SLA breach count

35) Ticketing & Workflow Tools

Integrate with Jira/Azure Boards. Auto‑assign based on code ownership, add labels (security, CWE‑79), and attach traces. Transition tickets automatically once a scan on the fixing branch verifies the remediation.

jira create --project SEC --summary "CWE‑89 in OrdersService" --assignee @team

36) Notifications & Webhooks

Send high‑priority alerts to on‑call channels, but reserve noisy info for digests. Use webhooks to trigger playbooks (e.g., rotate secrets, block deploy) when certain findings appear in protected branches.

webhook: on finding where severity=Critical and branch=main → call /rotate-key

37) Multi‑Tenant & Environments

Separate dev/stage/prod projects or tags to avoid cross‑contamination. For MSPs or large orgs, use folders/tenants to isolate access, policies, and reporting. Standardize presets per environment.

project: MyApp-prod  | preset: strict
project: MyApp-dev   | preset: fast-feedback

38) REST APIs

Automate at scale: create projects, trigger scans, fetch results, and push metrics to BI. Respect rate limits and auth scopes; rotate tokens and store them securely. Prefer server‑to‑server workflows behind CI/CD.

GET /api/projects/{id}/results?format=sarif
Authorization: Bearer <token>
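One safe way to script against the API is to separate request construction from the network call. Note that the endpoint path below just mirrors the illustrative example above; it is not a documented Checkmarx route, and the sketch deliberately builds (and tests) only the URL and headers:

```python
from urllib.parse import urlencode

def build_results_request(base_url, project_id, token, fmt="sarif"):
    # Assemble URL and auth headers; the caller performs the actual HTTP GET.
    url = (f"{base_url}/api/projects/{project_id}/results?"
           + urlencode({"format": fmt}))
    headers = {"Authorization": f"Bearer {token}",
               "Accept": "application/json"}
    return url, headers

url, headers = build_results_request("https://ast.example.com", "42", "s3cr3t")
print(url)  # https://ast.example.com/api/projects/42/results?format=sarif
```

Keeping construction pure makes the client trivial to unit-test and keeps token handling in one place for rotation.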

39) Data & Privacy

Decide whether source code leaves your network (SaaS) or stays on‑prem. For sensitive repos, use self‑hosted scanners and outbound‑restricted egress. Scrub PII from logs and respect legal hold requirements for exports.

logging: redact tokens, emails, secrets; set retention 90d

40) Q&A — “How do I measure program health?”

Answer: Track new High findings per KLOC, median remediation time, PR scan latency, SLA adherence, and % repos under policy. Trend them monthly and review in engineering leadership forums.
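Median remediation time falls out of opened/closed timestamps directly; a small Python sketch with invented dates:

```python
from datetime import date
from statistics import median

def median_remediation_days(findings):
    # Only closed findings contribute; open ones have no duration yet.
    durations = [(f["closed"] - f["opened"]).days
                 for f in findings if f.get("closed")]
    return median(durations) if durations else None

findings = [
    {"opened": date(2024, 1, 1), "closed": date(2024, 1, 4)},  # 3 days
    {"opened": date(2024, 1, 2), "closed": date(2024, 1, 9)},  # 7 days
    {"opened": date(2024, 1, 5), "closed": None},              # still open
]
print(median_remediation_days(findings))  # 5.0
```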

Section 5 — Security, Testing, Deployment, Observability & Interview Q&A

41) Secure Deployment

For self‑managed: isolate scanners and API components, encrypt traffic, restrict admin consoles, and back up configuration and results. Use infrastructure‑as‑code and immutable images for repeatability.

network:
  ingress: allow CI, SCM
  egress: allow updates, block internet by default

42) Access Control & SSO

Integrate with SSO (OIDC/SAML). Map groups to roles, enforce MFA, and require device trust for admin access. Enable audit logs and review role changes regularly.

roles:
  - name: OrgAdmin
  - name: ProjectAdmin
  - name: Developer

43) Testing Strategy

Combine unit tests with secure coding checks in CI. Add security unit tests for sanitizers/validators, and regression tests for fixed CWEs. Validate that risky code paths have tests before closing tickets.

// Minimal escape helper inlined so the Jest test is self-contained
const escape = (s) => s.replace(/</g, '&lt;').replace(/>/g, '&gt;')

test('escapes html', () => {
  expect(escape('<script>')).toBe('&lt;script&gt;')
})

44) Developer Education

Link findings to short, framework‑specific lessons. Track completion and reduction in repeat CWEs per team. Bake secure patterns into templates and internal libraries developers can reuse.

lesson: CWE‑79 in React → avoid dangerouslySetInnerHTML; prefer encoding/escaping

45) Performance & Scan Tuning

Cache dependencies, enable incremental scans on PRs, and shard monorepos into per‑service scans. Disable expensive rules irrelevant to your stack. Monitor scan duration and keep PR checks under a target (e.g., < 5 minutes).

-- Focus scans by path
--include src/,app/ --exclude dist/,node_modules/

46) Upgrades & Migrations

Test new engine versions on a canary set of repos, compare findings, and adjust presets. Communicate changes to developers and update documentation/screenshots. Keep rollback plans for critical pipelines.

canary_repos: [ payments, accounts, checkout ]

47) Observability

Ship scanner metrics (duration, queue time, errors) and platform logs to your observability stack. Set SLOs for PR latency and error budgets. Alert on spikes in failures or missing results uploads.

metrics: pr_scan_seconds, scans_failed_total, findings_new_high_total

48) Prod Checklist

  • Policies as code, versioned and reviewed
  • PR gates on new High/Critical findings
  • SSO + RBAC + audit logging
  • Secrets scanning and key rotation
  • Regular engine upgrades and canaries
  • Dashboards for MTTR and SLA

49) Common Pitfalls

Enabling too many rules at once, gating on legacy debt, ignoring IaC/secret risk, and leaving policies undocumented. Fix by iterating presets, gating only on new risk, and educating developers with concise tips.

50) Interview Q&A — 20 Practical Questions (Expanded)

1) Why an AST platform? Unified policy, reporting, and developer workflow across SAST/SCA/IaC.

2) SAST vs SCA? Code flows vs dependency advisories/licenses; both needed.

3) Gate strategy? Block on new High/Critical; warn on Medium; report Low.

4) Reduce false positives? Tune presets, model sanitizers, and add targeted suppressions.

5) Handle secrets? Scan PRs and history; rotate exposed keys; add pre‑commit hooks.

6) IaC priorities? Network exposure, identity permissions, storage encryption.

7) Incremental vs full? Incremental for PR speed; scheduled full for coverage.

8) Custom rules? Capture internal frameworks and sinks; version with code.

9) Data residency? Choose SaaS vs self‑managed per compliance needs.

10) License compliance? Enforce allow/deny and generate SBOMs.

11) Developer buy‑in? Inline annotations, fast feedback, and training links.

12) Prioritization? Severity × confidence × reachability × asset criticality.

13) Monorepo tips? Path filters, per‑service projects, caching.

14) Measuring success? MTTR, new high findings, % repos gated, PR latency.

15) Onboarding new repos? Baseline scan, set policies, assign owners.

16) Handling legacy debt? Baseline + backlog epics + time‑boxed remediation.

17) API usage? Automate scans, fetch SARIF, push to BI dashboards.

18) Secrets sprawl? Move to vaults and short‑lived tokens.

19) SLAs? Critical: 24–72h, High: 7d, Medium: 30d (example targets).

20) When not to block? Low‑risk PRs on non‑critical services; log and monitor.