Archive

  1. Java 25 After the Hype: 5 Features That Actually Matter

    ~ cat post <<

    Java 25 After the Hype: 5 Features That Actually Matter

    When a new Java LTS drops, the internet goes through its usual cycle: launch posts, conference talks, YouTube thumbnails screaming “GAME CHANGER,” and LinkedIn hot takes about how everything has changed forever.

    Then reality settles in.

    A few months after the release of Java 25, most teams aren’t rewriting their systems. They’re shipping features, fixing bugs, and trying to keep production stable. That’s when we can finally answer a more interesting question:

    Which Java 25 features are still being discussed and actually used?

    This isn’t a launch recap. This is a “post-hype” filter. Here are five Java 25 features that have proven they’re more than marketing bullets.

    1. Structured Concurrency: Concurrency That Reads Like Logic

    For years, Java concurrency meant juggling ExecutorService, Future, timeouts, and cancellation semantics that were easy to get wrong.

    Structured Concurrency changes the mental model. Instead of spawning detached tasks and hoping everything is cleaned up properly, you treat concurrent tasks as a single logical unit.

    Before

    ExecutorService executor = Executors.newFixedThreadPool(2);
    
    Future<User> userFuture = executor.submit(() -> fetchUser());
    Future<Orders> ordersFuture = executor.submit(() -> fetchOrders());
    
    // Each get() blocks independently; timeouts, cancellation, and
    // executor shutdown are all left to you.
    User user = userFuture.get();
    Orders orders = ordersFuture.get();
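After, with structured concurrency. This is a sketch of the preview API as of JEP 505 in Java 25; earlier previews used `StructuredTaskScope.ShutdownOnFailure`, and names may still shift, so treat it as illustrative. fetchUser() and fetchOrders() are assumed helpers, as above:

```java
// Sketch only: requires Java 25 with --enable-preview (JEP 505).
try (var scope = StructuredTaskScope.open()) {
    StructuredTaskScope.Subtask<User> user = scope.fork(() -> fetchUser());
    StructuredTaskScope.Subtask<Orders> orders = scope.fork(() -> fetchOrders());

    scope.join(); // waits for both; if either fails, the other is cancelled and join() throws

    handle(user.get(), orders.get());
}
```

The try-with-resources block is the "single logical unit": no task can leak past the scope's closing brace.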
    
  2. Spring Boot 4: Brief Upgrade Guide and Code Comparison

    ~ cat post <<

    Spring Boot 4 vs. 3: Brief Upgrade Guide and Code Comparison

    If you’ve been following my blog, you know I love a good migration story. Whether it’s moving to TanStack Start or refining shadcn/ui forms, the goal is always the same: better developer experience and more robust code.

    Today, we’re looking at the big one. Spring Boot 4.0 is officially out, and it’s arguably the most important release since 3.0. It moves the baseline to Java 17 (with a massive push for Java 25), adopts Jakarta EE 11, and introduces features that finally kill off years of boilerplate.

    Let’s look at exactly what changed and how your code will look before and after the upgrade.

    1. Native API Versioning

    For years, versioning an API in Spring meant custom URL paths, header filters, or complex RequestCondition hacks. Spring Boot 4 brings this into the core framework.

    The Spring Boot 3 Way (Manual Pathing)

    // You had to manually manage the path segments
    @RestController
    @RequestMapping("/api/v1/orders")
    public class OrderControllerV1 { ... }
    
    @RestController
    @RequestMapping("/api/v2/orders")
    public class OrderControllerV2 { ... }
    

    The Spring Boot 4 Way (Native Mapping)

    Now, versioning is a first-class citizen. You can keep the path clean and let Spring handle the routing logic via headers, query params, or path segments.
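A sketch of what this looks like, based on the API versioning support added in Spring Framework 7, which Boot 4 builds on. The `version` attribute and the `OrderV1`/`OrderV2` types here are illustrative; verify the exact names against the release notes before relying on them:

```java
// Illustrative sketch: one clean path, versions resolved by the framework.
@RestController
@RequestMapping("/api/orders")
public class OrderController {

    @GetMapping(version = "1")
    public List<OrderV1> listV1() { ... }

    @GetMapping(version = "2")
    public List<OrderV2> listV2() { ... }
}
```

How the client supplies the version (header, query parameter, or path segment) is configured once, centrally, instead of being baked into every mapping.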

  3. JSON is Making You Lose Money!!! Slash LLM Token Costs with TOON Format

    ~ cat post <<

    JSON vs TOON Token Explosion

    Let's be real: every time you shove a bloated JSON blob into an LLM prompt, you're literally burning cash. Those curly braces, endless quotes, and repeated keys? They're token vampires sucking your OpenAI/Anthropic/Cursor bill dry. I've been there – cramming user data, analytics, or repo stats into prompts, only to hit context limits or watch costs skyrocket.

    But what if I told you there's a format that cuts tokens by up to 60%, boosts LLM accuracy, and was cleverly designed for exactly this problem? Meet TOON (Token-Oriented Object Notation), the brainchild of Johann Schopplich – a dev who's all about making AI engineering smarter and cheaper.

    Johann nailed it with TOON over at his original TypeScript repo: github.com/johannschopplich/toon. It's not just another serialization format; it's a lifeline for anyone building AI apps at scale.

    Why JSON is Robbing You Blind in LLM Prompts

    JSON is great for APIs and config files. But for LLM context? It's a disaster:

    • Verbose AF: Braces {}, brackets [], quotes around every key and string – all eating tokens.
    • Repeated Keys: In arrays of objects, every row repeats the same field names. 100 users? That's 100x "id", "name", etc.
    • No Built-in Smarts: LLMs have to parse all that noise, leading to higher error rates on retrieval tasks.
    • Token Explosion at Scale: A modest dataset can balloon to thousands of unnecessary tokens.

    Result? Higher costs, slower responses, and more "context too long" errors. If you're querying GPT-5-nano or Claude with tabular data, JSON is quietly making you poor.
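To make the contrast concrete, here is the same two-user list in both formats. The TOON notation below follows the examples in Johann's repo; treat it as a sketch, not a spec:

```text
JSON: every row repeats every key
{"users": [
  {"id": 1, "name": "Alice", "role": "admin"},
  {"id": 2, "name": "Bob", "role": "user"}
]}

TOON: fields declared once, then one compact row per item
users[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user
```

The savings compound with row count: at 100 users, JSON repeats "id", "name", and "role" 100 times each, while TOON still declares them once.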

  4. Demystifying Object-Oriented Programming: Pt. 2

    ~ cat post <<

    Demystifying Object-Oriented Programming: Pt. 2

    Welcome back! At the end of our last post, we hit a bit of a roadblock. We had successfully organized our F1, PickupTruck, and SUV classes to inherit from a base Car class. But then, we were faced with a new challenge:

    A bicycle, a speedboat, and an electric car. They are all vehicles, but trying to force them into a single Vehicle inheritance hierarchy would be a nightmare. A speedboat doesn't have wheels, and a bicycle doesn't have an engine in the traditional sense. A simple extends Vehicle starts to feel clunky and wrong. How do we model things that share behaviors but are fundamentally different things?

    The answer lies in moving beyond the idea that everything must share a common ancestor and instead thinking about what they can DO.

    Interfaces: A Contract of Behavior

    Let’s ask a different question. Instead of asking what these objects are, let's ask what they have in common from a user's perspective. A person can:

    • Steer them
    • Make them go forward
    • Slow them down

    In Object-Oriented Programming, when we want to guarantee that different classes share a common set of behaviors, we use an INTERFACE. Think of an interface not as a blueprint for an object, but as a contract. It’s a list of methods that a class promises to implement. It defines WHAT a class can do, but not HOW it does it.

    Let's create a contract for anything that can be driven. We'll call it Drivable.

    // Interface
    public interface Drivable {
      void turnLeft(double degrees);
      void turnRight(double degrees);
      void accelerate();
      void brake();
    }
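Here's a minimal sketch of the contract in action, with a hypothetical Bicycle (the field names and the +5 speed step are invented for illustration):

```java
// A Bicycle honors the Drivable contract without pretending to be a Car.
interface Drivable {
  void turnLeft(double degrees);
  void turnRight(double degrees);
  void accelerate();
  void brake();
}

class Bicycle implements Drivable {
  private double heading = 0; // degrees; 0 means straight ahead
  private double speed = 0;   // km/h

  public void turnLeft(double degrees)  { heading -= degrees; }
  public void turnRight(double degrees) { heading += degrees; }
  public void accelerate()              { speed += 5; }                   // pedal harder
  public void brake()                   { speed = Math.max(0, speed - 5); }

  double heading() { return heading; }
  double speed()   { return speed; }
}

public class Main {
  public static void main(String[] args) {
    Drivable ride = new Bicycle(); // callers depend on the contract, not the class
    ride.accelerate();
    ride.turnRight(15);
    System.out.println(((Bicycle) ride).speed()); // prints 5.0
  }
}
```

A Speedboat or ElectricCar could implement the same interface, each with its own HOW, and any code written against Drivable would accept all of them.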
    
  5. [UPDATED!] Seamless Forms with shadcn/ui and TanStack Form

    ~ cat post <<

    Seamless Forms with shadcn/ui and TanStack Form

    In my post, "Life after Next.js: A New and Sunny Start," I talked about my journey migrating to TanStack Start and the freedom it brought. One of the loose ends I mentioned was the form situation. I'm a big fan of the aesthetics and developer experience of shadcn/ui, but its default form component is built on react-hook-form. As I'm going all-in on the TanStack ecosystem, I naturally wanted to use TanStack Form.

    This presented a classic developer dilemma: do I stick with a component that doesn't quite fit my new stack, or do I build something better? The answer was obvious. I couldn't find a clean, existing solution that married the beauty of shadcn/ui with the power and type-safety of TanStack Form. So, I decided to build it myself.

    Today, I'm excited to share the result: a component that seamlessly integrates shadcn/ui with TanStack Form, preserving the core principles of both libraries. It's type-safe, easy to use, and maintains that clean shadcn/ui look and feel.

    You can check out the component's website and find the full source code on the GitHub repository.

    Why Bother?

    TanStack Form offers incredible power with its framework-agnostic, type-safe approach to form state management. shadcn/ui, on the other hand, provides beautiful, accessible, and unopinionated components. The goal was to get the best of both worlds without any compromises. This component acts as the bridge, giving you:

    • Full Type-Safety: Infer types directly from your validation schemas (like Zod, Valibot, etc.).
    • Seamless TanStack Integration: Leverage TanStack Form’s state management and validation logic.
    • Consistent shadcn/ui Styling: Use the form components you already know and love.

  6. JavaScript 2025: New Stable Features to Boost Your Code

    ~ cat post <<

    JavaScript 2025: New Stable Features to Boost Your Code

    Hey there! JavaScript turned 30 in 2025. Despite being created for browsers’ front-end code, it has gained more ground than expected on the backend. ECMAScript 2025 (ES16), released in June, brought a handful of stable features that make coding smoother without forcing you to kneel before the latest framework altar. In this post, I’ll walk you through the standout additions with practical examples, and for each, I’ll show how you’d do the same thing the old-school way, pre-ES2025. Spoiler: the new features save you some headaches, but the fundamentals still hold strong. Let’s dive in, no fluff, just code!

    1. Iterator Helpers: Functional Programming Without the Headache

    What’s New: Iterator Helpers introduce a global Iterator object that lets you chain operations like .map() and .filter() on any iterable (arrays, sets, generators) in a memory-efficient, lazy way. It’s functional programming without the bloat of intermediate arrays.

    Example with Iterator Helpers: Filtering and transforming a leaderboard of scores.

    //
    // Before
    //
    const scores = [100, 85, 90, 95, 70];
    const topScores = scores.filter((score) => score > 80).map((score) => `Score: ${score}%`);
    console.log(topScores); // ["Score: 100%", "Score: 85%", "Score: 90%", "Score: 95%"]
    
    //
    // After ES2025
    //
    const scores = [100, 85, 90, 95, 70];
    const topScores = Iterator.from(scores)
    	.filter((score) => score > 80)
    	.map((score) => `Score: ${score}%`)
    	.toArray();
    console.log(topScores); // ["Score: 100%", "Score: 85%", "Score: 90%", "Score: 95%"]
    
  7. Demystifying Object-Oriented Programming: Pt. 1

    ~ cat post <<

    Demystifying Object-Oriented Programming: Pt. 1

    The Misconception of Object-Oriented Programming

    When we think about object-oriented programming, we often relate it to real-world objects. The problem is this: associating object-oriented programming with tangible real-world things doesn’t work in 80% of cases, and this brings more difficulties than clarity for beginners.

    Back in 1996, when I started programming in Delphi 4, its biggest competitor was Visual Basic, but Delphi was OBJECT-ORIENTED. If I had stopped at that programming course, I still wouldn't have known what this famous object-oriented programming was, and neither would the course instructor.

    I worked at various companies and on commercial software projects in Delphi that were far from truly using object-oriented programming. Most Delphi programmers I met thought object-oriented programming was about dragging a button onto a screen, double-clicking it, and creating a function for that button, for example.

    So, let’s talk about what object-oriented programming REALLY is.

    First, look at the image below and tell me what you see.

    An F1, a Pickup Truck, and an SUV

    Obviously, you see three cars: a pickup truck, an F1 car, and an SUV. The important question is: if these cars are so different, how do you know they are all cars? What makes a car a car and not a table or a piano?

    There’s a concept of what a car is that’s already ingrained in your mind. A car IS LIKE THIS and SERVES THIS PURPOSE. In object-oriented programming, this concept has a name: CLASS. A class is the definition of WHAT a certain object is. The car you touch, the one you drive from point A to point B, is what we call an OBJECT. It’s the materialized concept. This materialized concept is technically called an INSTANCE. That’s why you’ll sometimes read that an object is an instance of a class. Always translate this as: the object you touch is a concept that has been materialized.
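In code, the distinction looks like this (a tiny sketch; the bare Car class is deliberately minimal):

```java
// Car is the concept (the CLASS); each 'new Car(...)' materializes it into
// an object, i.e. an INSTANCE of the class.
class Car {
  final String model;
  Car(String model) { this.model = model; }
}

public class Main {
  public static void main(String[] args) {
    Car pickup = new Car("Pickup Truck"); // an instance: the concept, materialized
    Car f1     = new Car("F1");           // another instance of the same concept
    System.out.println(pickup.model);     // prints Pickup Truck
    System.out.println(f1.model);         // prints F1
  }
}
```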

  8. A Deep Dive into JWT for Secure Web and Mobile Sessions

    ~ cat post <<

    A Deep Dive into JWT for Secure Web and Mobile Sessions

    In the early days of electronic computing, there was no concern for privacy or security because computers were not accessible to just anyone, and even if someone had access to a computer, the chances of them knowing how to use it were negligible. Moreover, computers were not typically used to store data beyond a few dozen numbers; they were primarily used as calculators.

    Two decades later, even with personal computing, systems built in Clipper, Delphi, or Visual Basic ran locally or accessed databases hosted on local networks using a ring topology with coaxial cables and BNC connectors, or even on the workstation itself. The term "user session" was practically confined to academic environments and had little relevance in the real world.

    The real concern for data privacy and security emerged only with the popularization of the Internet, when the first applications using network communication began to appear, and the term "user session" started to gain traction among developers.

    Unlike applications that ran locally on the user's workstation, web applications are executed on application servers, with the frontend running in the user's browser. To maintain the user experience, the application server saves some user session information in memory or a database and sends a session ID to the user's browser, typically via cookies. This creates two problems. The first is that if a malicious person intercepts this ID, they can hijack the user's session, a process known as session hijacking. The second issue is that servers now need to allocate memory space or maintain database records to store some user information, without knowing whether the user is still actively using the application. With a small number of users, this may not be an issue, but with thousands or millions of users, things get significantly more complicated.

    To address the first problem, several mechanisms were created, such as CSRF tokens, session binding to the user's IP address, and others. For the second issue, stateless applications emerged. Client-side applications became more complex, leveraging JavaScript, and servers began storing minimal user session data. However, with the introduction of the iPhone, which popularized smartphones and native apps running on devices, it became necessary to find a way to maintain user sessions for these devices as well.

    It wouldn’t be efficient to have different methods for managing sessions across different devices, and since server-side applications had become stateless, why not apply the same principle to user sessions? This is where the JSON Web Token, more commonly known as JWT, was introduced.

    The Token

    JWT, defined in RFC 7519, was created to meet the needs of web and mobile applications in a stateless, scalable way, supporting cross-domain environments like APIs and single sign-on systems. So, let’s break down how it works.

    A JWT is composed of three Base64URL-encoded parts separated by dots: Header.Payload.Signature. The header and payload are JSON objects; the signature is raw bytes computed over the other two parts:

  9. Life after Next.js: A New and Sunny Start

    ~ cat post <<

    Life after Next.js: A New and Sunny Start

    A few weeks ago, I posted about my decision to move away from Next.js and the reasons behind that choice. Since projects can't just halt, I couldn't dwell too long on my decision. I had to consider several requirements for my new stack, including:

    • Using React as the frontend library.
    • Support for Server-Side Rendering (SSR).
    • Server-side functions.
    • Middleware support.
    • Enough flexibility to integrate libraries like React-Hook-Form and Axios.
    • A caching mechanism independent of the REST library.

    Initially, I was torn between Remix and TanStack Start. However, Remix lacks middleware support, which would necessitate significant architectural changes and a less ideal way to manage route access. Furthermore, in recent weeks, the Remix project announced a substantial shift in direction for its V3 version. While this new direction appears very promising, it would eventually mean I'd have to rewrite everything again, and that's definitely not something I want.

    TanStack Start, on the other hand, is currently in beta. Yet, it meets all my project's requirements and is built on Vite, making the project far more flexible and independent. Beyond what I've already outlined, the TanStack libraries have an excellent reputation, and this point deserves a small digression:

    Utah probably isn't the first place you'd pick to catch a tan, but if your name is Tanner and you have a good sense of humor, you might just call your TypeScript library stack "TanStack." This textual detour is about him: Tanner Linsley is a programmer renowned for his profound commitment to the open-source community, developer experience, and type safety. TanStack libraries are used by hundreds of thousands of developers and adopted by everyone from startups to Fortune 500 giants. This alone lends significant credibility to TanStack Start, which, at least for me, weighs heavily in my decision for a long-term project.

    Getting back on track, despite its beta status, the development team is now focused on stabilizing the framework rather than building new features, so no breaking changes are expected anymore.

    Still, you might argue that adopting a beta framework for my project carries a high risk, and I agree. However, there are other valid points to consider for my project's current stage. As I've already explained, including in the previous post, going with a no-framework approach would demand a massive effort. Staying with Next.js under Vercel's grip was out of the question. This left me with Remix, which I've already discarded, and TanStack Start, which was my choice. And here's the unexpected upside for me: working with a beta framework also has its advantages.

    As soon as I ran into the first development hurdles, I was directed to the TanStack Discord server. There, numerous developers exchange information and answer questions about the libraries. The project's own developers actively respond to queries, report bugs, ask for new issues to be opened, etc. Even Jack Herrington is there, participating actively!

  10. Post Inception

    ~ cat post <<

    Post Inception

    A friend of mine, Rogério Lino, presented WordPress to me in 2005. It was an amazing blog tool: simple, fast, and very easily customizable.

    Twenty years and a dozen forgotten, abandoned blogs later, I have created my website and professional blog. When I encountered WordPress again and tried to use it for my new blog, I found a bloated tool with excessive options, unnecessary plugins, managers, etc.

    I tried to customize it as I wished, with a layout resembling old green-screen CRT monitors. I gave up quickly and decided to look for an alternative.

    A few days later, I discovered 11ty (Eleventy). It is a static site generator that uses JavaScript, a template language of your choice for layout, and Markdown for post writing.

    In my post about the need for an HTML replacement, I wrote that XML is dying because there are plenty of better options. In the same way, Markdown is far better than HTML for writing text: it requires less code to achieve the same result. Unfortunately, I can't get rid of HTML completely yet, but for my posts, I am now free of it.

    I have created an awesome workflow for my posts (at least for me):

    I write my posts in Notion, where I can keep them, make changes, add links, leave, come back, change them again until I decide I’m done. Afterwards, I ask an LLM to review my poor English grammar—it used to be better when I was a teenager. No shame in that; it would be worse to post incorrect texts. When I finish adjusting the text, I ask an LLM to create an image to illustrate the post, and it's ready to go.

    Now, this is where things get really interesting. I copy the text from Notion into a Markdown file in my project along with its image. Then, I commit and push the project to the release branch on GitHub, where an Action is automatically executed. This action builds the HTML project and synchronizes the files with my hosting via FTP.

    11ty also generates the sitemap, robots.txt, and RSS feed XML (🤢). I really don't use SEO tools for writing: they tend to make my posts feel unnatural. If you don't like the way I write, don't read; it's my memory leakage.

    For blog comments and reactions, I use a simple tool called Giscus: it uses GitHub Discussions to store the comments and reactions. You only need to configure the plugin to use a public repository's Discussions on GitHub. Readers must log in with their GitHub accounts to comment.

  11. This is not another post about the developer apocalypse

    ~ cat post <<

    This is not another post about the developer apocalypse

    Hundreds of professions have been completely obliterated since the human brain got smart enough to use a rock shard to defend itself. Today, a fan is so cheap that almost everyone can afford one. Hundreds of years ago, to stay cool, you had to be rich to hire someone to wave a giant fan while you did whatever you wanted.

    The creation of LLMs first created a mesmerizing feeling, but also brought back prophets of newly doomed professions. They’re not wrong: many professions will die, as many have before, and some should already be extinct. IT professions are currently marked for obsolescence, but the people predicting their demise are often the same ones selling AI and LLM solutions.

    However, this post isn’t about the downfall of any profession; it’s about a paradigm shift.

    Modern fans are not only less expensive than old-school human fans but also far more efficient, powerful, tireless, and privacy-preserving. A human fan doesn’t make sense anymore.

    But programmers? I’m sorry, you’re free to dislike them, but you can’t get rid of them: a well-prepared person with a machine is better than a machine alone, and there are many things they can do together.

    The Human in the Loop: Augmentation, Not Replacement

    Although writing software tests is good practice, many companies, especially smaller ones, don’t write them or write them poorly. Why? Writing good tests to cover all happy and sad paths is boring and expensive. Good tests require more lines of code than the tested code itself, and with limited budgets and time, it’s a real problem. On the other hand, using AI to detect potential bugs and implement automated tests is awesome.

    Even though they can generate automated tests, LLMs will always hallucinate: they create tests that don’t make sense and sometimes claim the tests are faulty and should be adjusted to fit the function’s results. This is where developers shine: they can evaluate these issues and make the proper interventions. This human oversight is crucial not just for correcting errors, but for managing the scale of a project.

    To put it another way, let’s suppose you, a regular person, decide to build a house tile by tile. It will take you months and cost more than you can afford. But when you see a house, your brain instantly recognizes it as a house. The same applies to a skyscraper, a neighborhood, or an entire city, and it applies to many things LLM AIs can do. While they can build fast, we can quickly check if it works.

    Those used to LLMs for development know: you must request small portions of code, or you have to write lengthy prompts to specify all restrictions. The latter is a waste of time, while the former is faster than writing every line of code.

  12. Don't Marry Next.js: My Warning from the Trenches

    ~ cat post <<

    Don't Marry Next.js

    Choosing the right development stack is a thoughtful decision. While selecting the right stack might not offer immediate, tangible benefits, picking the wrong one can lead to significant challenges. I learned this the hard way, and my advice is clear: don't marry Next.js.

    My initial foray into Next.js, a proof of concept, failed miserably. Though I had some React experience, I was still searching for the ideal frontend framework. I explored Svelte, but was put off by .svelte file extensions, the need to explicitly declare lang="ts" in every TypeScript block, and the co-mingling of TS and HTML in the same file. The ongoing transition from Svelte 4 to 5, with significant changes and deprecations on the horizon, also contributed to my decision to eliminate it as a contender – a decision I now regret.

    Pure React, however, wouldn't meet my needs. I disliked the idea of client-side fetching for backend APIs, especially since using JWTs in such a scenario isn't advised. Indeed, implementing a server-side solution from scratch felt like a significant time and effort investment. I also explored Angular and found it impressive, but noted fewer frontend component options compared to React. As someone who lacks web design talent, this was a substantial consideration.

    The overwhelming hype around Next.js eventually convinced me to give it a try. Next.js 13, with its newly released App Router, seemed like the cutting edge. I could end the story here, but reality had other plans.

    Dealing with cookies proved to be a significant headache. Some requests couldn't even read cookies, despite my understanding that they needed to be set server-side. After countless hours debugging and searching for solutions to client-side requests failing to access cookies, I hit a wall. It was my first major disappointment. I reset the application, deleting the .next and node_modules directories. I scoured the documentation and the web for a solution, quickly realizing many other developers were grappling with the same unpredictable behavior. The solution? I wish I knew: the next day, it simply worked like a charm.

    A few days later, I found myself replacing Next.js's native fetch API with Axios. The API's caching mechanism began to exhibit strange behavior. I can assure you, caching has a personality of its own. Even when seemingly inactive, it delivered unwanted results for requests that never even reached the backend API. Axios eventually solved that problem and allowed me to create an interceptor for JWT token rotation, but then… I started struggling with cookies again.

    At this point, the only workaround I found was to create a POST method within the Next.js application (which otherwise had no endpoints) solely to store JWT tokens. This forced me to implement an API key and restrict service access exclusively to the Next.js application. Ugly.

    Why Am I Still Using This?

    After fighting Next.js for so long – grappling with its poor and obscure documentation, abandoning its native solutions, and adopting alternatives – a stark realization hit me: Why am I still using this?

  13. The Traditional Web Should Die

    ~ cat post <<

    The Traditional Web Should Die

    Call for IT Pros to Rethink Web Technology

    TL;DR

    Ever spent hours debugging a cross-browser CSS issue only to find it’s an obscure rendering bug? What if the web didn’t have to be this painful? As IT professionals, you confront daily challenges with cross-browser bugs, XSS vulnerabilities, and framework churn. The culprit? HTML, JavaScript, and CSS—an outdated trio. This post argues that it’s time to rethink web technology for a simpler, more secure future.

    The Origins of the Web Trinity

    Throughout the history of computing, many technologies have been created and evolved. Nevertheless, until now, nobody has tackled one of the biggest technology problems: the HTML + JavaScript + CSS Trinity for web applications. HTML has been used for far more than it was designed for, and these unforeseen uses created unexpected complexity. Not only does this burden IT professionals, but it also prevents the web from leaping ahead. The HTML+JS+CSS trinity costs companies billions annually in maintenance and security fixes.

    The Birth of HTML

    In 1989, Tim Berners-Lee created the World Wide Web. At that time, the most powerful computer a person could have was an 80386 PC with an impressive clock speed of 20 MHz. The same computer would come with 16 MB of RAM in its more glorious version, and 100 MB of HDD space.

    Although the first HTML-based website only came to life in 1991, it supported only static pages. The web was designed and imagined for computers that couldn't handle even a single background weather service that the cheapest smartphones today run without any hassle. Nobody could have imagined then what HTML would become.

    The Web Technology Pandora’s Box

    Subsequently, when an NCSA research assistant named Rob McCool created the Common Gateway Interface in 1993, Pandora's box was opened. CGI was a way to integrate C and Perl with HTML and generate dynamic pages.

    CGI arrived before the <a> and <p> HTML tags were even formalized, which happened only in 1994 with HTML 2.0. It opened up capabilities beyond sharing static documents: now it would be possible to sell goods and information through the web. This new idea spread faster than a trail of gunpowder, and new problems emerged.

~ <<