What is a POJO?

What is a POJO? POJO stands for "Plain Old Java Object." It is a term for a Java object that is not "polluted" by special requirements from a framework or library: a simple Java object with no special restrictions. In the context of Clean Architecture and DDD, a POJO is an object that:

- Doesn't extend a framework-specific class (e.g., `extends HttpServlet` or `extends BaseEntity`).
- Doesn't implement a library-specific interface (`implements Serializable` is often okay, but implementing a Hibernate-specific interface is not).
- Doesn't use "heavy" annotations that tie it to a technology (like `@Entity` or `@Table` from JPA).

Why use POJOs in your Domain (Domain-Driven Design)?

- Technology independence: your business logic doesn't care whether you use PostgreSQL, MongoDB, or whether you're running on a web server or a CLI.
- Easier testing: you can test a POJO with simple unit tests without starting a Spring context or a database.
- Future-proofing: if you decide to switch from JPA to another persistence technology later, your core business logic (the POJOs) won't have to change at all.

Example comparison. POJO (Clean/DDD style):

```java
public class User {
    private String username;
    // Pure Java logic
}
```

NOT a POJO (infrastructure-coupled style):

```java
@Entity                 // Tied to JPA
@Table(name = "users")  // Tied to a SQL table
public class User {
    @Id                 // Tied to JPA
    private Long id;
}
```

Note: by keeping your domain entities as POJOs, you ensure that your "business truth" remains pure and independent! ...

May 2, 2026

Flash Sale with Limited Tickets

Problem statements:

- Limited inventory: exactly 200 tickets, no overselling.
- One ticket per user: strictly enforced (assuming logged-in users only).
- High concurrency: 20k+ users rushing at sale start; avoid server overload at peak.
- Minimize database stress: avoid hammering the persistent DB during the peak.

Typical solutions fitting your Node.js setup, to meet these requirements:

- Use Redis as the primary hot store for inventory and user checks (in-memory, distributed via clustering). Pre-load the remaining tickets as a counter (e.g., atomic `DECR`) and use a Set for users who already bought (check/add atomically).
- Best: a Redis Lua script (or a Redis Function in v7+) for atomicity: check stock > 0, check the user hasn't bought, decrement stock, add the user.
- Enforce one ticket per user via the user ID in a Redis Set.
- Handle concurrency with a virtual queue (e.g., a Redis list or a separate service like RabbitMQ/Kafka) or a waiting room to throttle ingress.
- Offload the DB: successful attempts go to a message queue (e.g., Kafka/RabbitMQ) for async persistence; failures are rejected immediately.
- Node.js scaling: cluster with PM2, share Redis, and use worker threads to keep I/O non-blocking.

This pattern prevents overselling reliably while keeping peak DB hits near zero. ...
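The atomic "check stock, check buyer, decrement, record buyer" step can be sketched as below. This is a minimal in-process illustration only, with illustrative names (`tryPurchase`, `SaleState`): in production this logic would live in a Redis Lua script so that all four operations execute atomically on the Redis server, and a plain object stands in for Redis here.

```typescript
// In-process sketch of the flash-sale purchase step. In production the same
// four operations run atomically inside a Redis Lua script.

type SaleState = {
  stock: number;        // remaining tickets (a Redis counter in production)
  buyers: Set<string>;  // user IDs that already bought (a Redis Set)
};

type Result = "SOLD_OUT" | "ALREADY_BOUGHT" | "OK";

function tryPurchase(state: SaleState, userId: string): Result {
  if (state.stock <= 0) return "SOLD_OUT";            // stock check
  if (state.buyers.has(userId)) return "ALREADY_BOUGHT"; // one per user
  state.stock -= 1;          // DECR
  state.buyers.add(userId);  // SADD
  return "OK";
}

const sale: SaleState = { stock: 2, buyers: new Set() };
console.log(tryPurchase(sale, "u1")); // OK
console.log(tryPurchase(sale, "u1")); // ALREADY_BOUGHT
console.log(tryPurchase(sale, "u2")); // OK
console.log(tryPurchase(sale, "u3")); // SOLD_OUT
```

Because the real check-and-decrement must be atomic across many Node.js workers, the in-process version above is only correct for a single process; Redis provides the shared, atomic variant.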

December 20, 2025

Terraform take note

How Terraform modules and variables interact:

1. The "contract" (the module). Inside the `../../modules/app-common-infra` folder there are `variable "xxx" {}` blocks. These act like a function signature in programming. They define: what information is required (e.g., `vpc_cidr`, `resource_prefix`), the data type (string, list, map), and any default values.

2. The "implementation" (the `.tf` files inside the module). Files like `iam.tf`, `lambda.tf`, and `lb.tf` inside that module folder don't use hardcoded values. Instead, they use `var.xxx`. Example: if `iam.tf` needs to name a role, it might use `name = "${var.resource_prefix}-role"`. This makes the module reusable because it doesn't care what the prefix is until you tell it. ...
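The contract/implementation split above can be sketched as a pair of HCL fragments. This is illustrative only: the `variable` name comes from the text, while the `aws_iam_role` body is an assumed example, not the module's actual contents.

```terraform
# variables.tf — the "contract": the module's input signature
variable "resource_prefix" {
  description = "Prefix applied to all resource names"
  type        = string
  default     = "app"
}

# iam.tf — the "implementation": consumes the variable via var.*
resource "aws_iam_role" "lambda_exec" {
  name = "${var.resource_prefix}-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}
```

A caller then sets `resource_prefix` once in its `module` block, and every resource inside the module picks up the new prefix.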

March 9, 2025

Take note something when implementing MFA using AWS Cognito Service

1. Authentication flow comparison

1.1. Scenario 1: login with MFA enabled

User → Password → MFA Challenge → TOTP Code → Access Token

- Token proves: password + MFA (two-factor authentication)
- Security level: HIGH

1.2. Scenario 2: associate a software token (MFA not enabled yet)

User → Password → Access Token (no MFA challenge)

- Token proves: password only (one-factor authentication)
- Security level: MEDIUM

Answer: technically the same permissions, but different authentication strength. From AWS Cognito's perspective:

- Both tokens grant the same API permissions (user operations).
- Both are valid access tokens for the authenticated user.
- Both can call `associateSoftwareToken`, `setSoftwareTokenMfaPreference`, etc.

BUT the authentication strength differs:

- ✅ With MFA: the user proved their identity with two factors.
- ⚠️ Without MFA: the user proved their identity with one factor only.

2. Storing the AWS access token in a session

✅ Benefits:

- Better UX: no re-authentication for every operation.
- Follows the OAuth2 pattern: store the token, reuse it for API calls.
- Enables your use cases: disabling MFA, associating a token, etc.
- Already secured: with `httpOnly` + `sameSite` cookies, much safer.

⚠️ Critical considerations. Issue 1: token expiration mismatch. The AWS Cognito access token lifetime is 3,600,000 ms (1 hour by default), so you should set your app's own session `maxAge` to the same value. ...
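The expiration-mismatch fix above can be sketched as session cookie settings whose `maxAge` matches the Cognito access token lifetime. The option shape mirrors `express-session`'s cookie options, but this is a plain-object sketch with assumed constant names, not a full server setup.

```typescript
// Keep the app session's maxAge aligned with the Cognito access token
// lifetime so the session never outlives the token it stores.

const ACCESS_TOKEN_LIFETIME_MS = 60 * 60 * 1000; // Cognito default: 1 hour

const sessionOptions = {
  cookie: {
    httpOnly: true,            // not readable from client-side JS
    sameSite: "lax" as const,  // CSRF mitigation
    secure: true,              // send over HTTPS only
    maxAge: ACCESS_TOKEN_LIFETIME_MS, // match the token lifetime
  },
};

console.log(sessionOptions.cookie.maxAge); // 3600000
```

If the two lifetimes drift apart, the user can hold a "live" session wrapping an expired AWS token, so every Cognito call would start failing mid-session.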

December 6, 2025

How to Optimize a System for 1 Million Concurrent Users

To keep your system running smoothly when millions of users access it at the same time, a software architect needs to consider many factors. Below is a comprehensive checklist of bottlenecks and optimization solutions to ensure your system is always ready for high traffic.

1. Bottleneck from a monolithic architecture

All logic and resources are bundled together → difficult to scale. Solutions:

- Switch to microservices.
- Make services stateless to allow horizontal scaling.
- Add an API gateway (rate limiting, circuit breaker).
- Use a service mesh (Istio, Linkerd) if observability is needed.

2. DB bottleneck due to too many direct queries

One million users can generate tens of millions of DB queries. Solutions: ...
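The rate limiting mentioned for the API gateway is commonly implemented as a token bucket. Below is a minimal single-process sketch (class and parameter names are illustrative); real gateways keep the buckets in shared storage such as Redis so every instance enforces the same limit.

```typescript
// Token-bucket rate limiter: allows short bursts up to `capacity`,
// then throttles to a sustained rate of `refillPerSec` requests/second.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // burst size
    private refillPerSec: number, // sustained rate
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  allow(now: number = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request passes
    }
    return false;  // reject (HTTP 429) or queue the request
  }
}

// Burst of 2, refilling 1 token per second:
const bucket = new TokenBucket(2, 1, 0);
console.log(bucket.allow(0));    // true  (burst)
console.log(bucket.allow(0));    // true  (burst)
console.log(bucket.allow(0));    // false (bucket empty)
console.log(bucket.allow(1000)); // true  (refilled after 1s)
```

The same shape underlies circuit breakers and waiting rooms: bound the work admitted per unit time so downstream services (and the database) see a smooth load instead of the raw spike.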

April 16, 2025

How to optimize a Spring Boot Application to Handle 1M Requests/Second

Scaling a Spring Boot application to handle 1 million requests per second might sound like an impossible feat, but with the right strategies it's absolutely achievable. Here's how I did it.

1. Understand your bottlenecks

Before optimizing, I conducted a thorough performance analysis using tools like JProfiler and New Relic. This helped identify the key issues:

- High response times for certain APIs.
- Database queries taking too long.
- Thread contention in critical parts of the application. ...

February 20, 2025

How to set Git config username and email for a specific project?

You can set a different Git username and email for a specific project by configuring them locally within the project's repository. Here's how:

1️⃣ Navigate to your project folder:

```shell
cd /path/to/your/hugo-blog
```

2️⃣ Set the local Git username and email by running the following commands inside your project folder:

```shell
git config user.name "Your Name"
git config user.email "your-email@example.com"
```

3️⃣ Verify the configuration, to check that it's set correctly:

```shell
git config --local user.name
git config --local user.email
```

This applies only to this repository, leaving your global Git settings untouched. ...

February 22, 2025

2 Ways to Create and Push a Repository on GitHub via Command Line

1. Create a new repository on the command line:

```shell
echo "# test" >> README.md
git init
git add README.md
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/kanelv/test.git
git push -u origin main
```

2. Push an existing repository from the command line:

```shell
git remote add origin https://github.com/kanelv/test.git
git branch -M main
git push -u origin main
```

March 9, 2025