Serverless vs Client-Side Rendering vs Dynamic Content (Server-Side Rendering) vs Compiled/Code Generation for Static Sites
Prelude
Laypeople are often confused by the term "dynamic website," so let's clear that up first. "Dynamic" here doesn't mean sleek animations or any sort of table filtering; it refers to whether the website content itself is variable: constructed at runtime, able to change based on conditions, or fetched from some server or database. This stands opposed to "static," meaning the contents are fixed and will not change unless someone modifies the source. On the web, "the source" usually means the HTML itself.
Also, "serverless" doesn't mean there is no server. There must be a server machine hosted by someone somewhere to do the computation. Serverless means we, as the developers, do not maintain the server ourselves, and there is some service that hosts all the contents, including the web server we need. To give you an exaggerated example: technically speaking, if you hire a contractor for $10,000 to develop and maintain a website for you—so whenever you need to update or modify the website in any way, you go to the contractor and talk to them, and throughout this process you never manually oversee the website or the server—then it's also "serverless."
Quick Recap / TL;DR
Here are the key ideas:
Serverless may feel operationally easy and highly scalable, but there are so many service providers that picking the right one becomes a long process of trial and error. Serverless also has real pitfalls of its own; it is not a silver bullet.
Client-side rendering (and single-page applications) works very well with serverless: we can use a static host to serve the initial webpage and JS, then use JS to fetch data from a CDN or other static sources and render everything in the user's browser. However, note that most web crawlers (including AI agents) simply do not play well with such runtime-generated content, because what the JavaScript will eventually render is, in most cases, impossible to predict without executing it. So forget about SEO if we rely on client-side rendering. Forget about path-based URLs or endpoints, too, since a purely static host has no server logic to answer arbitrary paths. The only thing we might be able to do is use query parameters, but again, such URLs must first be exposed to the search engine at build time to be usable from a search-engine perspective; otherwise, their only use is mnemonic, or for saving the address of specific things.
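To make this concrete, below is a minimal client-side rendering sketch in TypeScript. The `/data/posts.json` path and the `#app` container are hypothetical stand-ins for whatever your static host actually serves:

```typescript
// Minimal client-side rendering: fetch static JSON and build the DOM at runtime.
// The "server" here is just a static file host or CDN; all page assembly
// happens in the user's browser.
interface Post {
  title: string;
  body: string;
}

async function renderPosts(): Promise<void> {
  const container = document.getElementById("app");
  if (!container) return;

  // Hypothetical static data file exposed by the host.
  const response = await fetch("/data/posts.json");
  const posts: Post[] = await response.json();

  for (const post of posts) {
    const article = document.createElement("article");
    const heading = document.createElement("h2");
    heading.textContent = post.title;
    const paragraph = document.createElement("p");
    paragraph.textContent = post.body;
    article.append(heading, paragraph);
    container.appendChild(article);
  }
}

renderPosts();
```

A crawler that does not execute this script sees only an empty `#app` container, which is exactly the SEO problem described above.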
Dynamic Content: besides client-side rendering, there are ways to implement dynamic content from the server's side. The server can handle specific query parameters or URL paths and return templated HTML directly, or we can simply compile the entire website into static webpages ahead of time.
Server-Side Rendering binds "endpoints" to specific HTTP methods and URLs (path and query-parameter combinations). When a request is received, the server returns the completed response (e.g., in HTML form). This is very easy for search engines to index, since it's "static" in the sense that each address uniquely maps to a specific response and no JavaScript runtime is needed to understand the content.
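As a hedged sketch (not any particular framework's API), here is what binding an endpoint to a method/path combination looks like with Node's built-in `http` module in TypeScript; the `/hello` route and `name` query parameter are invented for illustration:

```typescript
import { createServer } from "node:http";

// One endpoint, bound to GET /hello: the server assembles the full HTML
// before responding, so each URL maps deterministically to a complete page.
const server = createServer((req, res) => {
  const url = new URL(req.url ?? "/", `http://${req.headers.host ?? "localhost"}`);

  if (req.method === "GET" && url.pathname === "/hello") {
    const name = url.searchParams.get("name") ?? "world";
    // A trivial server-side "template" (real code should HTML-escape `name`).
    const html = `<!DOCTYPE html>
<html><body><h1>Hello, ${name}!</h1></body></html>`;
    res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
    res.end(html);
    return;
  }

  res.writeHead(404, { "Content-Type": "text/plain" });
  res.end("Not found");
});

server.listen(8080);
```

Requesting `GET /hello?name=Ada` always yields the same complete HTML document, which is why a crawler can index it without running any JavaScript.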
Compiled Source: It's surprisingly easy to compile an entire website from some source, e.g., from a database or scattered files, and one certainly does not need advanced frameworks for that. In a nutshell, it's simply a string-replacement script. Honestly, for pure content websites this is far more effective than any of the common frameworks (e.g., ASP.NET Core, Flask, Ruby on Rails). Compiled output is also very friendly to serverless deployment, since most static web hosts will serve it for free.
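For illustration, here is one way such a "compiler" might look in TypeScript; the template file, the `{{placeholder}}` syntax, and the hard-coded content records are all assumptions of this sketch (in practice the content would come from a database or scattered files):

```typescript
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";

// Each record stands in for a row in a database or a file on disk.
interface Page {
  slug: string;
  title: string;
  body: string;
}

const pages: Page[] = [
  { slug: "about", title: "About", body: "<p>Hello from a compiled page.</p>" },
  { slug: "contact", title: "Contact", body: "<p>Write to us.</p>" },
];

// template.html is assumed to contain {{title}} and {{body}} markers.
const template = readFileSync("template.html", "utf-8");

mkdirSync("dist", { recursive: true });
for (const page of pages) {
  // The whole "compilation" step really is just string replacement.
  const html = template
    .replaceAll("{{title}}", page.title)
    .replaceAll("{{body}}", page.body);
  writeFileSync(`dist/${page.slug}.html`, html);
}
```

The resulting `dist/` folder is plain static HTML, which any static web host can serve as-is.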
Overview
To really understand the problem, its causes, and the solutions, we must review some basic concepts. Since this is a dev log, I will just enumerate the topics without delving into much detail:
- HTTP requests and responses. One doesn't need to understand the entire HTTP protocol, but being able to read raw requests and responses (which are usually plain text) is helpful, and one should understand the difference between headers and body content; see the example after this list.
- HTTP methods, URL paths, and query strings.
- How very basic and barebones HTTP servers work—I happen to have a demo example here for C#.
- How to do client-side rendering in JS with fetched content.
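As promised above, here is roughly what a raw HTTP exchange looks like (the `example.com` host and `/hello` path are made up). The first line of the request carries the method, path, and query string; headers follow; a blank line separates the headers from the body:

```
GET /hello?name=Ada HTTP/1.1
Host: example.com
Accept: text/html
```

And a matching response, again with headers, then a blank line, then the body:

```
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 62

<!DOCTYPE html>
<html><body><h1>Hello, Ada!</h1></body></html>
```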
Remarks
We are in the 2020s and AI/LLMs are ubiquitous, yet it's surprising how little the HTML/webpage landscape has changed regarding static vs. dynamic websites. Search engines and AI agents simply can't and won't work well with client-side rendered content, because the outcome of runtime JavaScript evaluation is unpredictable.