Node.js Pocket Book — Uplatz
50 deep-dive flashcards • Wide layout • Fewer scrolls • 20+ Interview Q&A • Readable code examples
1) What is Node.js?
Node.js is a JavaScript runtime built on V8 that runs JS outside the browser. It embraces an event-driven, non-blocking I/O model so a single process can multiplex thousands of connections, excelling at APIs, real-time features, proxies, and streaming. You get one language across client and server, plus the npm ecosystem. Typical sweet spots: BFF (Backend for Frontend), API gateways, chat/dashboards, task runners, and CLIs. Less ideal: CPU-bound workloads (video encoding, big crypto) unless you use worker_threads or delegate to services. For production, prefer LTS versions and keep dependencies tidy.
# Check your Node & npm
node -v
npm -v
2) Why Node.js? Core Strengths & Tradeoffs
Strengths: speed of delivery, huge package ecosystem, JSON-native APIs, and real-time capabilities with minimal boilerplate. The single-threaded model simplifies concurrency versus managing many threads. Tradeoffs: CPU-heavy work can block the event loop, dependency sprawl requires governance, and callback-based code (legacy) can get messy. Mitigate with async/await, schema validation, and clear architectural boundaries. Node thrives when most time is I/O (DB, network) rather than CPU.
# Create a project quickly
npm init -y
npm i fastify
3) Event Loop: Mental Model
The event loop runs phases: timers → pending callbacks → idle/prepare → poll → check → close callbacks. Promise microtasks run between turns and after each callback. Use this to reason about ordering: Promise.then fires before setImmediate, which fires after poll. Practical tip: prefer promises and avoid heavy synchronous work on the main thread.
setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));
Promise.resolve().then(() => console.log('microtask'));
Typical order: microtask → timeout/immediate (depends on scheduling).
4) libuv & Threadpool
libuv powers the event loop and a small threadpool (default 4) to offload certain I/O (file system, DNS, compression, some crypto). You can tune with UV_THREADPOOL_SIZE (up to 128), but profile first. Long-running CPU tasks should use worker_threads.
import { createReadStream } from 'node:fs';
import crypto from 'node:crypto';
const hash = crypto.createHash('sha256');
createReadStream('big.bin').pipe(hash).on('finish', () => {
  console.log(hash.read().toString('hex'));
});
5) Node vs Browser JS
Both share ECMAScript, but available APIs differ. Browsers provide DOM, storage, and UI; Node exposes system modules: fs, net, http, cluster. Node has no DOM; use libraries (e.g., JSDOM) if you must parse HTML server-side. With Node ≥18, fetch is global, narrowing gaps. Security models differ: browser sandbox vs server trust boundary; always validate/authorize server inputs.
// Node ≥18 global fetch
const res = await fetch('https://api.example.com');
const data = await res.json();
6) npm, pnpm, Yarn
npm is default and ubiquitous. pnpm uses content-addressable storage; it’s fast and space-efficient for monorepos. Yarn is common in older stacks. Always commit lockfiles for reproducible builds. Helpful scripts:
"scripts": {
  "dev": "node --watch src/index.js",
  "build": "tsc -p tsconfig.json",
  "test": "node --test",
  "start": "node dist/index.js"
}
7) package.json & Exports
Key fields: type (module for ESM), main/exports, engines, scripts, dependencies. For libraries, the exports map controls entry points for ESM/CJS. Prefer ESM in 2025; publish dual only if consumers need CJS.
{
  "type": "module",
  "exports": {
    ".": { "import": "./dist/index.mjs", "require": "./dist/index.cjs" }
  }
}
8) LTS vs Current
Use LTS in production for stability, security patches, and ecosystem compatibility. Track Current in CI to catch upcoming changes. Pin Node version via .nvmrc or container images and enforce with engines.
# Pin in project root
echo "lts/*" > .nvmrc
9) nvm & Multi-Version Dev
Install multiple Node versions and switch per project; align with "engines". Team tip: add .nvmrc and document setup. CI should mirror the specified version.
nvm install --lts
nvm use --lts
10) Q&A — “How is Node concurrent if it’s single-threaded?”
Answer: JavaScript runs on one thread, but I/O is asynchronous. The event loop and libuv handle non-blocking operations; many requests can wait on I/O simultaneously. When I/O completes, callbacks/microtasks are queued. The result is high concurrency without thread-per-request overhead. For CPU-heavy tasks, use worker_threads or separate services so the loop stays responsive.
11) Modules: ESM vs CJS
ESM (import/export) is the modern standard; CJS (require/module.exports) remains in legacy code. ESM has strict file extensions/resolution and top-level await. For Node apps, set "type":"module" and convert gradually. For interop, use dynamic import() from CJS, or createRequire from ESM.
// esm.mjs
import os from 'node:os';
export const platform = os.platform();
12) fs & path Essentials
Prefer promise APIs and streaming for large files. Always handle ENOENT/permissions and avoid sync calls in servers.
import { readFile, writeFile } from 'node:fs/promises';
import { join } from 'node:path';
const p = join(process.cwd(),'data.json');
await writeFile(p, JSON.stringify({ok:true}));
const raw = await readFile(p,'utf8');
13) HTTP/HTTPS Servers
Low-level servers give full control but require boilerplate (parsing, routing). In production, frameworks improve productivity, validation, and error handling. Always set sensible timeouts.
import http from 'node:http';
const s = http.createServer((req, res) => {
  res.setHeader('content-type', 'application/json');
  res.end(JSON.stringify({ path: req.url }));
});
s.listen(3000);
14) Streams & Backpressure
Streams process data incrementally, reducing memory pressure. Use stream/promises.pipeline to connect safely and respect backpressure.
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
await pipeline(createReadStream('in.bin'), createWriteStream('out.bin'));
15) Buffer & Binary Data
Buffer represents raw bytes. Be mindful of encodings and size. Avoid constructing huge buffers; stream instead.
const b = Buffer.from('hello','utf8');
console.log(b.toString('hex'));
16) Crypto Basics
Use modern algorithms and safe defaults; for passwords, use bcrypt/argon2 (not plain hashes). Prefer AEAD modes (AES-GCM) for encryption. Don’t roll your own crypto.
import crypto from 'node:crypto';
const hash = crypto.createHash('sha256').update('data').digest('hex');
17) Events & EventEmitter
Emitters enable decoupled modules. Clean up listeners, handle 'error', and avoid memory leaks.
import { EventEmitter } from 'node:events';
const bus = new EventEmitter();
bus.on('tick', (n) => console.log('tick', n));
setInterval(() => bus.emit('tick', Date.now()), 1000);
18) Timers & Scheduling
setTimeout, setInterval, and setImmediate schedule work; clear on shutdown. For cron-like tasks, use external schedulers or job queues.
const id = setInterval(() => console.log('beat'), 1000);
setTimeout(() => clearInterval(id), 5000);
19) process & Env Config
Validate env at startup (zod/joi). Handle signals for graceful shutdown. Catch unhandled rejections/exceptions to log, then exit (don’t continue in a corrupted state).
process.on('SIGTERM', () => { /* stop accepting, drain, exit */ });
process.on('unhandledRejection', (e) => { console.error(e); process.exit(1); });
20) Q&A — “ESM vs CJS: what should I choose?”
Answer: Prefer ESM for new apps/packages; it’s standard, supports top-level await, and aligns with browsers. If consumers need CJS, publish dual exports via the exports map. Avoid runtime module-format hacks; keep builds simple. Use dynamic import() or createRequire for interop.
21) From Callbacks to Promises
Legacy Node used error-first callbacks ((err, data) => {}). Modern code should adopt promises and async/await for readability and error handling. Convert with util.promisify when needed. Ensure all async paths are awaited to avoid “floating” promises and unhandled rejections.
import { promisify } from 'node:util';
import { readFile } from 'node:fs';
const read = promisify(readFile);
const txt = await read('file.txt','utf8');
22) async/await Best Practices
Group related awaits in a single try/catch, fail fast with explicit messages, and use Promise.all for independent tasks. For best-effort tasks, use allSettled.
const [user, orders] = await Promise.all([getUser(id), listOrders(id)]);
if (!user) throw new Error('User not found');
23) Microtasks vs Macrotasks
Promises/microtasks run before the next event loop phase. Timers/IO callbacks are macrotasks. Ordering matters for tests and subtle race conditions.
queueMicrotask(() => console.log('microtask'));
setImmediate(() => console.log('immediate'));
24) nextTick vs queueMicrotask
process.nextTick runs before the microtask queue and can starve I/O if abused; queueMicrotask follows standard microtask semantics and is safer for most cases. Prefer queueMicrotask in libs to avoid priority surprises.
process.nextTick(() => {/* use sparingly */});
queueMicrotask(() => {/* preferred microtask */});
25) Worker Threads
Use workers for CPU-bound tasks (image resize, compression, crypto, ML inference). Communicate via parentPort.postMessage or MessageChannel. Keep messages small or share memory via SharedArrayBuffer. Pool workers for throughput.
import { Worker } from 'node:worker_threads';
const w = new Worker(new URL('./heavy.js', import.meta.url));
w.postMessage({ payload: 'data' });
26) Cluster & Multi-Process
cluster forks workers to utilize multi-core CPUs. Good for HTTP servers; combine with PM2/systemd. Consider sticky sessions for WebSockets. In containers, prefer one worker per container and scale horizontally via orchestrator.
import cluster from 'node:cluster';
import http from 'node:http';
import os from 'node:os';
if (cluster.isPrimary) os.cpus().forEach(() => cluster.fork());
else http.createServer((_req, res) => res.end('ok')).listen(3000);
27) Streams: Backpressure in Practice
When write() returns false, pause reads until 'drain'. Using pipeline handles this automatically. Backpressure prevents memory spikes and smooths throughput under load.
const ok = writable.write(chunk);
if (!ok) readable.pause();
writable.once('drain', () => readable.resume());
28) Queues, Jobs & Rate Limits
Offload slow/retryable tasks via BullMQ/RabbitMQ/Kafka. Implement exponential backoff with jitter, dead-letter queues, and idempotency keys. Job queues decouple latency from request handling.
import { Queue } from 'bullmq';
const q = new Queue('emails',{ connection:{ host:'127.0.0.1', port:6379 }});
await q.add('welcome',{ userId:'u1' });
29) Timeouts, Retries & Circuit Breakers
Set outbound call timeouts by default, retry carefully (idempotent ops), and trip a circuit breaker on sustained failures. Expose breaker state for ops and dashboards.
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 2000);
const res = await fetch('https://api.example.com', { signal: controller.signal });
clearTimeout(timer);
30) Q&A — “When should I use workers vs cluster?”
Answer: Use workers for CPU-bound tasks within a single process to avoid blocking the loop. Use cluster to scale an HTTP server across cores (multi-process). They complement each other: a clustered web server can dispatch CPU-heavy jobs to a worker pool, keeping request latency low.
31) Express
Minimalist and battle-tested. Great ecosystem of middleware. Establish conventions for error handling, validation, and async functions (wrap with error middleware).
import express from 'express';
const app = express();
app.use(express.json());
app.get('/health', (_req,res) => res.json({ ok:true }));
app.listen(3000);
32) Fastify
Schema-first, high throughput, and excellent plugin system. Built-in AJV validation speeds safe APIs and generates OpenAPI. If performance and correctness matter, Fastify is a strong default.
import Fastify from 'fastify';
const f = Fastify();
f.get('/health', async () => ({ ok:true }));
await f.listen({ port:3000 });
33) NestJS
Opinionated framework with modules, providers, controllers, and DI — great for large teams. Encourages SOLID principles, testability, and separation of concerns.
# Nest CLI
npx @nestjs/cli new app
34) REST Design
Use nouns, proper status codes, pagination, filtering, idempotency keys, and versioning. Document with OpenAPI and keep examples current. Enforce input/output schemas.
// Fastify schema on route
f.post('/users', {
  schema: {
    body: { type: 'object', required: ['email'], properties: { email: { type: 'string', format: 'email' } } }
  }
}, async (req) => ({ id: 'u_123', email: req.body.email }));
35) GraphQL
Typed schema, single endpoint, client-driven queries. Control complexity/depth; batch N+1 with dataloader. Map auth to your domain rules; consider persisted queries for caching/CDN.
const typeDefs = `type Query { hello: String }`;
const resolvers = { Query:{ hello: () => 'world' } };
36) WebSockets & Real-Time
Use ws or Socket.IO for chat, presence, live dashboards. Plan authentication and reconnection strategies. Horizontal scaling may need sticky sessions or pub/sub (Redis, NATS).
import { WebSocketServer } from 'ws';
const wss = new WebSocketServer({ port:8080 });
wss.on('connection', ws => ws.send('hello'));
37) SQL: Prisma / Knex / TypeORM
Prisma offers type-safe client and migrations; Knex is a flexible query builder; TypeORM provides decorators and patterns. Always parameterize queries and index hot paths.
// Prisma schema snippet
model User {
  id        String   @id @default(cuid())
  email     String   @unique
  createdAt DateTime @default(now())
}
38) NoSQL: MongoDB / Redis
Model read/write patterns; avoid unbounded documents. MongoDB with Mongoose adds schemas/validation; Redis excels at caching, queues, and rate limits. Use TTL keys and avoid hot keys where possible.
import { createClient } from 'redis';
const r = createClient({ url: process.env.REDIS_URL });
await r.connect();
await r.set('health','ok',{ EX:60 });
39) Caching Strategy
Layer caches: in-process (LRU), Redis, and HTTP (CDN). Use ETags/Last-Modified, cache keys with versioning, and request coalescing to prevent stampedes. Invalidate carefully on writes.
// Example HTTP cache headers
res.setHeader('Cache-Control','public, max-age=60');
res.setHeader('ETag', etag);
40) Q&A — “REST vs GraphQL in Node?”
Answer: REST is simpler, cache-friendly, and great for standard CRUD and integrations. GraphQL shines when clients need flexible shapes and multiple resources in one round-trip. In Node, Express/Fastify make REST trivial; Apollo/Nest make GraphQL ergonomic. Choose REST for broad compatibility; GraphQL for complex, client-driven UIs. You can mix: REST for public APIs, GraphQL for internal apps.
41) Security Fundamentals
Validate inputs (AJV/zod), sanitize outputs, enforce HTTPS, set security headers (helmet), rotate secrets, and restrict privileges. Regularly audit dependencies. Log authz decisions and deny by default. Never trust client-supplied IDs without authorization checks.
// Example with dotenv + validation
import 'dotenv/config';
if(!process.env.DB_URL) throw new Error('DB_URL required');
42) Auth & Sessions
Stateless APIs: short-lived JWT with rotation/blacklist on logout. Stateful apps: signed cookies backed by server store. For OAuth/OIDC, leverage libraries and validate nonce/state. Always bind sessions to device context and rotate tokens on privilege changes.
// Pseudocode: Verify JWT
const payload = verify(token, PUBLIC_KEY, { algorithms:['RS256'] });
43) Testing Strategy
Use built-in node:test or Jest. Combine unit, integration, and contract tests. For HTTP, pair with supertest. Mock external services sparingly; prefer ephemeral test DBs with seed data. Run coverage in CI.
import test from 'node:test';
import assert from 'node:assert/strict';
test('adds', () => assert.equal(2+2,4));
44) Linting, Formatting & Types
ESLint + Prettier keep style consistent; TypeScript catches bugs. Enforce strict TS options, define interfaces for domain models, and use tsc --noEmit in CI to ensure type safety even in JS projects via JSDoc types.
pnpm add -D eslint prettier typescript @types/node
45) Performance & Profiling
Measure, don’t guess. Use --inspect with DevTools, flamegraphs (Clinic), and custom timers. Fix hot paths: avoid sync APIs, batch I/O, enable HTTP keep-alive, compress wisely, and prefer Fastify if you need throughput.
node --inspect server.js
// Open chrome://inspect
46) Deployment Options
PM2 for simple VMs; Docker for portability; Kubernetes for orchestration and autoscaling; serverless (Lambda/Cloud Functions) for spiky workloads. Implement health/readiness, graceful shutdown, and env-specific configs. Keep images slim and pin Node versions.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node","dist/index.js"]
47) Observability
Structured logs (pino) with request IDs, metrics (Prometheus) for latency/RPS/error rate, and tracing (OpenTelemetry) across services. Build SLOs (e.g., p99 latency < 300ms) and alert on burn rates. Expose /health and /ready endpoints.
import pino from 'pino';
const log = pino({ level: process.env.LOG_LEVEL || 'info' });
log.info({ event:'startup' }, 'service online');
48) Prod Checklist
- Env validation & secrets manager
- Timeouts, retries, circuit breakers
- Input validation, rate limits, headers
- Graceful shutdown & draining
- Dashboards: latency, errors, saturation
- Runbooks & on-call escalation
49) Common Pitfalls
Blocking the loop with CPU or sync fs calls, unhandled promise rejections, leaking event listeners/streams, mixing CJS/ESM incorrectly, missing timeouts, and non-idempotent retries. Prevent with lint rules, tests, observability, and architectural guardrails (queues, backpressure, bounded caches).
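To avoid the unbounded-cache pitfall, a minimal LRU bound can ride on Map’s insertion order; in production, a maintained package such as lru-cache is the safer choice.

```javascript
// Naive LRU: Map iterates in insertion order, so the first key is the
// least recently used once get() re-inserts on every hit.
class BoundedCache {
  constructor(max = 100) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // refresh recency
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      this.map.delete(this.map.keys().next().value); // evict oldest
    }
  }
}

const cache = new BoundedCache(2);
cache.set('a', 1); cache.set('b', 2); cache.set('c', 3); // evicts 'a'
console.log(cache.get('a')); // undefined
```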
50) Interview Q&A — 20 Practical Questions (Expanded)
1) Why Node for APIs? Non-blocking I/O, JSON-native data flow, and huge ecosystem enable rapid delivery. Great for I/O-bound workloads and microservices.
2) Single-threaded yet concurrent? Event loop multiplexes I/O. libuv handles OS operations; callbacks run when ready. Use workers/cluster for CPU and scale-out.
3) ESM vs CJS? Prefer ESM; publish dual exports if consumers require CJS. Avoid mixing without a plan; use the exports map.
4) Avoiding callback hell? Adopt promises and async/await, modularize functions, centralize error handling, and maintain clear control flow.
5) nextTick vs queueMicrotask? nextTick runs before other microtasks and can starve I/O; queueMicrotask follows standard semantics and is safer for libraries.
6) Preventing memory leaks? End streams, clear timers, remove listeners, bound caches, and use heap snapshots/metrics to detect growth early.
7) Backpressure handling? Use pipeline, respect write() return values, and pause/resume streams to match producer/consumer speeds.
8) Secure configuration? Validate env on boot, store secrets in a manager, rotate keys, and restrict permissions (least privilege).
9) Rate limiting strategy? Sliding window counters in Redis keyed by IP/user/route; return 429s and include Retry-After.
10) Auth choices? JWT for stateless APIs (short TTL + rotation); cookies + server store for web sessions. Protect refresh flows and use HTTPS only.
11) GraphQL hardening? Complexity/depth limits, persisted queries, input validation, and dataloaders to avoid N+1.
12) When to use workers? CPU-heavy tasks (image, PDF, crypto, ML). Keep messages small; consider SharedArrayBuffer for large data.
13) Cluster vs Kubernetes scaling? Cluster uses all cores on one host; Kubernetes scales containers across nodes with health checks and autoscaling.
14) Safe retries? Retry only idempotent operations, add jitter/backoff, and use idempotency keys + DLQs to prevent duplicates.
15) Observability must-haves? Correlated request IDs, structured logs, metrics (latency histograms, RPS, errors), and distributed tracing.
16) DB performance tips? Use pools, prepared statements, proper indexes, pagination, and cache hot reads. Monitor slow queries.
17) DoS protections? Timeouts, body size limits, schema validation, WAF/CDN, circuit breakers, and rate limits per user/route.
18) Graceful shutdown? Stop accepting new requests, drain connections/queues, close DB pools, flush logs, then exit with code 0.
19) Testing pyramid? Emphasize unit, add integration for boundaries, and selective e2e. Use contract tests for inter-service APIs.
20) Framework choice? Express for minimalism, Fastify for performance/schema, Nest for large teams seeking opinionated structure.