COMMENT PAGE FOR:
30 years of <br> tags
brianhama wrote 4 min ago:
I feel like I was reading a summation of my career.
vlucas wrote 6 hours 22 min ago:
This article is incredible. As someone who built their first website in
Geocities with HTML framesets and tables, the history represented in
this article is very accurate. Well done, OP!
aeve890 wrote 12 hours 15 min ago:
>Suddenly every framework, in every language, was copying Rails. Django
did it for Python. Laravel did it for PHP. CakePHP and CodeIgniter had
already been doing something similar, but Rails set the template that
everyone followed.
Nah son, I won't allow the great Symfony to be erased from history and
replaced with Laravel. Not on my watch.
chaos0815 wrote 12 hours 40 min ago:
Wasn't Gmail launched with an unlimited storage claim? I remember there
was a free storage counter, and it kept increasing.
WA wrote 13 hours 4 min ago:
Good article! A few thoughts popped into my head while reading:
- My favorite deployment is rsync over SSH and occasionally, I still
upload a file over SFTP.
- MongoDB will always, ALWAYS, be in my mind the buggy database that
bought its way into dev minds with lots of money but ultimately was all
hype and risky to use from a business perspective. Turns out,
especially with the rise of TypeScript, that most data has a solid
structure anyway, and while NoSQL has its places, most projects benefit
from good old SQL.
- Slack launched in 2013? Man, time flies.
- I still hardly use Docker and just deploy straight to a VPS running
Debian.
- I remember the first years of TypeScript, which were kinda tough.
Many projects had no types. I sometimes considered using one package
over another just because it had proper types.
- VSCode is a good thing, and if you don't go too crazy with plugins,
it stays stable and performant. I like it.
- Next.js gives me MongoDB vibes. An over-engineered, way too "magical"
framework that hijacked developer minds and is built on the weird
assumption that React, a DOM manipulation library, belongs on the
server. I never got the appeal and I will just wait this out.
Meanwhile, I'm having fun with Hono. Easy to build API-based backends
as well as MPAs with server-side HTML generation, runs on Node, Bun,
Deno and whatnot, feels lightweight and accessible and gives me a lot
of control.
inglor_cz wrote 5 hours 59 min ago:
I love Docker. So many problems solved at once. Deployments to
slightly different server environments were one of the banes of my
existence.
tracker1 wrote 7 hours 3 min ago:
I use Docker a lot myself... even outside Kubernetes, I just find it
easier to work with Compose for semi-complex apps, dev environment or
production.
I think VS Code is probably more responsible for TypeScript
acceptance than any other project. Just having good interactions
with the editor I think brought a lot of the requests to add type
definitions to projects.
I'm with you on Next/Mongo... while as a Dev I liked a lot about
Mongo, I'd never want to admin it again, I'm fine with PostgreSQL's
JSONB for when I need similar features. On Next specifically,
usually avoid it... fatter clients aren't so bad IMO.
Edit: +1 for Hono too... Beyond that, Deno has become my main
scripting environment for all my scripting needs.
fud101 wrote 15 hours 3 min ago:
Such a bad title; I avoided reading this gem of an article because of
it, but I'm glad I found my way to it eventually.
qingcharles wrote 18 hours 13 min ago:
This article was amazing. I hope the author continues to update it.
I've been creating web pages since 1993, when you could pretty much
read every web site in existence and half of them were just HTML
tutorials. I've lived through every change and every framework. 2025 is
by far the best year for the Web. Never has it been easier to write
pages and have them work perfectly across every device and browser, and
look great to boot.
racl101 wrote 22 hours 58 min ago:
Good ol' br tag. Saves me from having to write padding and margin CSS.
davidpronk wrote 1 day ago:
Great read. I have fond memories of all the tricks we used to amaze
visitors and fellow developers.
Like using fonts that weren't installed on the visitor's computer, via
sIFR.
[1]: https://mikeindustries.com/blog/sifr
qingcharles wrote 18 hours 30 min ago:
Wow. Totally forgot about the sIFR years.
cjstewart88 wrote 1 day ago:
Thanks for writing this :)
emilbratt wrote 1 day ago:
I'm only halfway through, but I just wanted to share that I love this
kind of write-up.
1970-01-01 wrote 1 day ago:
Very nice article.
However, some very heavy firepower was glossed over: TLS/HTTPS gave us
the power to actually buy things and share secrets. The WWW would not
be anywhere near this level of commercialized if we didn't have that in
place.
technion wrote 19 hours 43 min ago:
For a long time, the standard was that TLS was only ever on the
credit card submission page. I remember when "finding
vulnerabilities" often meant viewing source and noting the form still
submitted to a hardcoded http page.
tracker1 wrote 7 hours 0 min ago:
Yeah... seriously surprised more services didn't go HTTPS only just
for a simpler setup... the redirects/posts etc were always a mess.
I know early/pre 00's hardware had more overhead for https, but
even then.
tehjoker wrote 1 day ago:
I predict this will be an instant classic article. It concisely
contains most of the history a lot of newer hands are missing.
PaulDavisThe1st wrote 1 day ago:
I wanted this to end with something like:
"... and through it all, the humble tag has continued playing its role
..."
GoatOfAplomb wrote 1 day ago:
Fantastic read. I did most of my web development between 1998 and
2012. Reading this gave me both a trip down memory lane and a very
digestible summary of what I've missed since then.
rsync wrote 1 day ago:
"Virtual private servers changed this. You could spin up a server in
minutes, resize it on demand, and throw it away when you were done.
DigitalOcean launched in 2011 ..."
The first VPS provider, circa fall of 2001, was "JohnCompanies" handing
out FreeBSD jails advertised on metafilter (and later, kuro5hin).
These VPS customers needed backup. They wanted the backup to be in a
different location. They preferred to use rsync.
Four years later I registered the domain "rsync.net"[1].
[1] I asked permission of rsync/samba authors.
qingcharles wrote 18 hours 26 min ago:
Pre-2000, it wasn't exactly useful as a hosting environment as such,
but there were plenty of people renting unix shells by the month.
We'd use screen and nohup to leave services running while logged out.
outofmyshed wrote 1 day ago:
This is a great overview of web tech as I more or less recall it.
Pre-PHP CGI wasn’t a big deal, but it was more fiddly and
you had to know and understand Apache, broadly. mod_perl & FastCGI made
it okay. Only masochists wrote CGI apps in compiled languages. PHP made
making screwy web apps low-effort and fun.
I bugged out of front-end dev just before jQuery took off.
notatallshaw wrote 1 day ago:
> At one company I worked at, we had a system where each deploy got its
own folder, and we'd update a symlink to point to the active one. It
worked, but it was all manual, all custom, and all fragile.
The first time I saw this I thought it was one of the most elegant
solutions I'd ever seen working in technology. Safe to deploy the
files, atomic switch over per machine, and trivial to rollback.
It may have been manual, but I'd worked with deployment processes
that involved manually copying files to dozens of boxes and following
a 10-to-20-step sequence of manual commands on each box. Even when I
first got to use automated deployment tooling at the company I worked
for, it was fragile, opaque, and a configuration nightmare, built
primarily for OS installation on new servers and forced into service
for application deploys.
shimms wrote 1 day ago:
It’s been a while (a decade?!) but if I recall correctly Capistrano
did this for rails deployments too, didn’t it?
thunderbong wrote 22 hours 20 min ago:
Not just Rails; Capistrano is tech-stack agnostic. It's possible to
deploy a Node.js project using Capistrano.
And yes, it's truly elegant.
Rollbacks become trivial should you need them.
AznHisoka wrote 23 hours 48 min ago:
I am now feeling old for using Capistrano even today. I think there
might be “cooler and newer” ways to deploy, but I never ever
felt the need to learn what those ways are since Capistrano gets
the job done.
copperx wrote 16 hours 1 min ago:
I remember using mina and it was much faster than Capistrano.
Sadly, it seems it's now unmaintained.
toast0 wrote 1 day ago:
> It may have been manual
It's pretty easy to automate a system that pushes directories and
changes symlinks. I've used and built automation around the basic
pattern.
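For anyone who hasn't seen it, the atomic step in that pattern is a rename over the old link. Here is a minimal C sketch, with purely illustrative path names (a deploy script usually does the same thing with a temporary link and mv):

  #include <stdio.h>
  #include <unistd.h>

  /* Repoint "current" at a new release directory with no window in
   * which the link is missing or half-written. */
  int activate_release (const char * release_dir)
  {
      unlink ("current.tmp");  /* ignore failure; it may not exist */
      if (symlink (release_dir, "current.tmp") != 0) {
          perror ("symlink");
          return -1;
      }
      /* rename(2) is atomic on POSIX: readers see either the old
       * target or the new one, never an in-between state. */
      if (rename ("current.tmp", "current") != 0) {
          perror ("rename");
          return -1;
      }
      return 0;
  }

Rollback is the same call pointed at the previous release directory.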
Kuyawa wrote 1 day ago:
> All I needed was Notepad, some HTML, and an FTP client to upload my
files
That's what I still do 30 years later
brianwawok wrote 1 day ago:
Does your site resemble the UX of Hacker News and Craigslist?
Kuyawa wrote 8 hours 15 min ago:
Check for yourself
[1]: https://my.adminix.app/demo
recallingmemory wrote 1 day ago:
Enjoyable read and a nostalgic trip all the way back to when I learned
how to create websites on Geocities. Thanks for this.
samgranieri wrote 1 day ago:
Wow. This is an incredible article that tracks just about everything
I’ve done with the web over the past 30 years. I started with BBEdit
and Adobe PageMill, then went to Dreamweaver for LAMP.
ripe wrote 1 day ago:
What a comprehensive, well-written article. Well done!
The author traces the evolution of web technology from Notepad-edited
HTML to today.
My biggest difference with the author is that he is optimistic about
web development, while all I see is a shaky tower of workarounds upon
workarounds.
My take is that the web technology tower is built on the quicksand of
an out-of-control web standardization process that has been captured by
a small cabal of browser vendors. Every single step of history that
this article mentions is built to paper over some serious problems
instead of solving them, creating an even bigger ball of wax. The
latest step is generative AI tools that work around the crap by
automatically generating code.
This tower is the very opposite of simple and it's bound to collapse. I
cannot predict when or how.
jasperry wrote 1 day ago:
I was also impressed; I read the whole thing and got a lot of gaps in
my history-of-the-web knowledge filled in. And I also agree that the
uncritical optimism is the weak point; the article seems put together
like a just-so story about how things are bound to keep getting more
and more wonderful.
But I don't agree that the system is bound to collapse. Rather, as I
read the article, I got this mental image of the web of networked
software+hardware as some kind of giant, evolving, self-modifying
organism, and the creepy thing isn't the possibility of collapse, but
that, as humans play with their individual lego bricks and exercise
their limited abilities to coordinate, through this evolutionary
process a very big "something" is taking shape that isn't a product
of conscious human intention. It's not just about the potential for
individual superhuman AIs, but about what emerges from the whole ball
of mud as people work to make it more structured and interconnected.
1718627440 wrote 3 days ago:
> Every page on your site needed the same header, the same navigation,
the same footer. But there was no way to share these elements. No
includes, no components.
That's not completely true. Webservers have Server Side Includes (SSI)
[1]. Also, if you don't want to rely on that, 'cat header body > file'
isn't really that hard.
[1]: https://web.archive.org/web/19970303194503/http://hoohoo.ncsa....
nine_k wrote 5 hours 53 min ago:
Well, frames and iframes both provide client-side "includes".
They cannot be arbitrary components though.
vaylian wrote 15 hours 38 min ago:
XSLT could have been the answer. But browsers are now dropping
support for it.
tannhaeuser wrote 1 day ago:
HTML was invented as an SGML vocabulary, and SGML and thus also XML
has entities/text macros you can use to reference shared documents or
fragments such as shared headers, footers, and site nav, among other
things.
1718627440 wrote 23 hours 37 min ago:
Not sure why you're getting downvoted, as that was pretty much
the case before HTML5.
duskwuff wrote 17 hours 50 min ago:
Not one of the downvotes, but: as far as I'm aware, there was
never any syntax which could be used for HTML transclusion in the
browser. There may have been SGML or XML syntaxes proposed for
it, but none of them were actually implemented in this context.
ndriscoll wrote 3 hours 38 min ago:
XSLT was implemented in every browser like 25 years ago and has
the ability to include other document templates/fragments
client-side. It's exactly the functionality everyone always
says is missing.
tannhaeuser wrote 14 hours 55 min ago:
You can use entities to create static sites in advance, or in the
browser by adding support for them. sgmljs can do both, and
simply using shared headers/footers for static site generation
from markdown and other SGML partials is explained in [1]:
[1]: https://sgmljs.sgml.net/docs/producing-html-tutorial/p...
duskwuff wrote 4 hours 39 min ago:
I think you're missing the point here. None of these things
were available to users writing HTML for web browsers in the
1990s.
tannhaeuser wrote 3 hours 38 min ago:
SP/OpenSP and older SGML tools were most certainly
available and used to assemble HTML docs from command line
apps in the 1990s for complex websites with lots of content
such as software documentation. The editor of the HyTime
spec with its strong focus on adapting and transforming to
multimedia and web was working with a training/education
company. W3C's long-term validator service ran off SP.
Gualdrapo wrote 1 day ago:
I think they meant that from a vanilla HTML standpoint
1718627440 wrote 1 day ago:
If they insist on only using vanilla HTML then the problem is
unsolved to this day. I think it is actually less solved now,
since back then HTML was an SGML application, so you could supply
another DTD and have macro-expansion on the client.
mixmastamyk wrote 1 day ago:
The object tag can do it; iframe too, with limitations.
1718627440 wrote 23 hours 38 min ago:
Does it really? I think this forces a wrapper element, and I'm not
sure you can get rid of all the issues with "display: contents".
Also, you're already in the body, so you can't change the head,
which makes it useless for the most idiomatic use case for that
feature.
mixmastamyk wrote 20 hours 39 min ago:
It gets you header, footer, components. Most of the head would be
nice too, but you typically want a custom title, for example.
bigstrat2003 wrote 1 day ago:
Sure, but later in the article it says that when PHP came out it
solved the problem of not being able to do includes. Which again...
server-side includes predate PHP. I think that this is just an
error in the article any way you slice it. I assume it was just an
oversight, as the author has been around long enough that he almost
certainly knows about SSI.
NathanOsullivan wrote 12 hours 46 min ago:
I am of similar vintage to the author.
I have no idea when Apache first supported SSI, but personally I
never knew it existed until years after PHP became popular.
I would guess, assuming that `Options +Includes` couldn't be set by
unprivileged users, that being a disabled-by-default feature made it
inaccessible to the majority of us.
rzzzt wrote 4 hours 9 min ago:
I have also dug around a bit to find out this one, and the
earliest httpd I could get my hands on is 1.3.0 which is hosted
on the Apache archive site: [1]
"src/modules/standard/mod_include.c" says:
  /*
   * http_include.c: Handles the server-parsed HTML documents
   *
   * Original by Rob McCool; substantial fixups by David Robinson;
   * incorporated into the Apache module framework by rst.
   */
Rob McCool is the author of NCSA HTTPd so it seems there is
direct lineage wrt. this feature between the two server
implementations.
[1]: https://archive.apache.org/dist/httpd/
hdgvhicv wrote 6 hours 52 min ago:
Archive.org tells me I was using SSI in Jan 1997. I didn’t
really understand what I was doing, but I was including the footer
and a visitor counter via an exec directive, which I presumably
copied from somewhere else. At the time I was still on Windows and
had no real concept of a program being executed as a CGI or SSI; it
was all “copy this from Matt’s Script Archive to your cgi-bin
directory”.
My shared hosting from Claranet supported SSI via a .htaccess
configuration.
Technically PHP was around at that point, but I don’t think it
became popular until PHP 3; certainly my hosting provider didn’t
support it until then.
rzzzt wrote 1 day ago:
PHP's initial release announcement mentions includes as a feature
that can be used even if the server does not have SSI support:
[1]: https://groups.google.com/g/comp.infosystems.www.authori...
1718627440 wrote 1 day ago:
Does it, other than by using PHP? To me it sounds like the
feature to use instead of SSI is PHP itself.
rzzzt wrote 15 hours 14 min ago:
I meant the "include" statement of PHP which you can use even
if your HTTP server is not configured for processing SSI
directives.
1718627440 wrote 4 hours 26 min ago:
But the HTTP server needs to be configured for PHP, and we
are discussing the situation pre-PHP.
alehlopeh wrote 1 day ago:
HTML frames let you do this way back in the day
pimlottc wrote 1 day ago:
The article mentions that in the very next sentence
> You either copied and pasted your header into every single HTML
file (and god help you if you needed to change it), or you used
<iframe> to embed shared elements. Neither option was great.
alehlopeh wrote 1 day ago:
I’m talking about the frameset and frame tags, not iframes.
pimlottc wrote 21 hours 20 min ago:
Ah, okay, you’re right, it’s been a long while since I
used those tags…
1718627440 wrote 3 days ago:
> For that, you needed CGI scripts, which meant learning Perl or C. I
tried learning C to write CGI scripts. It was too hard. Hundreds of
lines just to grab a query parameter from a URL. The barrier to dynamic
content was brutal.
That's folk wisdom, but is it actually true? "Hundreds of lines just
to grab a query parameter from a URL."
  /*@null@*/ /*@only@*/
  char *
  get_param (const char * param)
  {
      const char * query = getenv ("QUERY_STRING");
      if (NULL == query) return NULL;
      char * begin = strstr (query, param);
      if ((NULL == begin) || (begin[strlen (param)] != '=')) return NULL;
      begin += strlen (param) + 1;
      char * end = strchr (begin, '&');
      if (NULL == end) return strdup (begin);
      return strndup (begin, end - begin);
  }
In practice you would probably parse all parameters at once and maybe
use a library.
I recently wrote a survey website in pure C. I considered Python
first, but due to having written an HTML generation library earlier,
it was quite a cakewalk in C. I also used the CGI library of my OS,
which, granted, was some of the worst code I have ever refactored, but
afterwards it was quite nice. Also, SQLite is awesome. In the end I
statically linked it, so I got a single binary to upload anywhere. I
don't even need to set up a database file; that is done by the program
itself. It could also be tested without a webserver, because the CGI
library supports passing variables over stdin. Then my program outputs
the webpage on stdout.
So my conclusion is: CRUD websites in C are easy and actually a breeze.
Maybe that also has my previous conclusion as a prerequisite: HTML
represents a tree and string interpolation is the wrong tool to
generate a tree description.
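To make the tree-versus-string-interpolation point concrete, here is a minimal sketch of the idea (an illustration only, not the commenter's actual library): nodes are built first and serialized afterwards, so closing tags are generated rather than remembered, and escaping happens in exactly one place.

  #include <stdio.h>
  #include <stdlib.h>

  typedef struct node {
      const char * tag;        /* NULL for text nodes */
      const char * text;       /* used only when tag is NULL */
      struct node * child[8];  /* fixed fan-out, unchecked; sketch only */
      size_t nchild;
  } node;

  static node * elem (const char * tag)
  { node * n = calloc (1, sizeof *n); n->tag = tag; return n; }

  static node * text (const char * s)
  { node * n = calloc (1, sizeof *n); n->text = s; return n; }

  static node * append (node * parent, node * ch)
  { parent->child[parent->nchild++] = ch; return parent; }

  static void render (const node * n, FILE * out)
  {
      if (NULL == n->tag) {
          for (const char * p = n->text; *p; p++)  /* escape once, here */
              switch (*p) {
              case '<': fputs ("&lt;", out); break;
              case '>': fputs ("&gt;", out); break;
              case '&': fputs ("&amp;", out); break;
              default:  fputc (*p, out);
              }
          return;
      }
      fprintf (out, "<%s>", n->tag);
      for (size_t i = 0; i < n->nchild; i++)
          render (n->child[i], out);
      fprintf (out, "</%s>", n->tag);  /* never unbalanced */
  }

  int main (void)
  {
      /* prints <body><p>1 &lt; 2 &amp;&amp; 3 &gt; 2</p></body> */
      render (append (elem ("body"),
                      append (elem ("p"), text ("1 < 2 && 3 > 2"))),
              stdout);
      return 0;
  }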
lelanthran wrote 18 hours 59 min ago:
> That's folk wisdom, but is it actually true? "Hundreds of lines
just to grab a query parameter from a URL."
No, because...
> In practice you would probably parse all parameters at once and
maybe use a library.
In the 90s I wrote CGI applications in C; a single function, on
startup, parsed the request params into an array (today I'd use a
hashmap, but I was very young then and didn't know any better) of
`struct {char *name; char *value}`. It was paired with a `get(const
char *name)` function that returned the `const char *` value for the
specified name.
TBH, a lot of the "common folk wisdom" about C has more "common" in
it than "wisdom". I wonder what a C library would look like today,
for handling HTTP requests.
Maybe hashmap for request params, union for the `body` depending on
content-type parsing, tree library for JSON parsing/generation, arena
allocator for each request, a thread-pool, etc.
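A rough reconstruction of that startup-parse pattern (illustrative, not the original code; percent-decoding omitted for brevity):

  #include <stdlib.h>
  #include <string.h>

  struct param { char * name; char * value; };

  static struct param params[64];
  static size_t nparams;

  /* Called once on startup: split our own copy of QUERY_STRING into
   * name/value pairs, in place. The copy is kept alive by the stored
   * pointers. */
  static void parse_query (void)
  {
      const char * env = getenv ("QUERY_STRING");
      if (NULL == env) return;
      char * query = strdup (env);
      for (char * pair = strtok (query, "&");
           pair != NULL && nparams < 64;
           pair = strtok (NULL, "&")) {
          char * eq = strchr (pair, '=');
          if (NULL == eq) continue;
          *eq = '\0';  /* split "name=value" where the '=' was */
          params[nparams].name = pair;
          params[nparams].value = eq + 1;
          nparams++;
      }
  }

  /* Every later lookup is a plain exact-match scan. */
  static const char * get (const char * name)
  {
      for (size_t i = 0; i < nparams; i++)
          if (strcmp (params[i].name, name) == 0)
              return params[i].value;
      return NULL;
  }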
Akronymus wrote 13 hours 17 min ago:
Just FYI, the 's got swallowed by the HN formatting and made the
stuff in between italic.
  struct { char *name; char *value }
You can bypass the formatting by prepending 2 spaces.
1718627440 wrote 4 hours 28 min ago:
> Just FYI the 's got swallowed
Ironically enough your *'s as well.
bryanlarsen wrote 1 day ago:
> HTML represents a tree and string interpolation is the wrong tool
to generate a tree description.
Yet 30 years later it feels like string interpolation is the most
common tool. It probably isn't, but still surprisingly common.
toast0 wrote 1 day ago:
The thing is, the browser needs the tree, but the server doesn't
really need the whole tree.
Building the tree on the server is usually wasted work. There aren't
a lot of tree-oriented output-as-you-make-it libraries.
1718627440 wrote 1 day ago:
My point is that treating it as the tree it is, is the only way
to really make it impossible to produce invalid HTML. You could
also actually validate not just syntax, but also semantics.
> There aren't a lot of tree-oriented output-as-you-make-it libraries.
That was actually the point of my library, although I must admit,
I haven't implemented actually streaming the HTML output out,
before having composed the whole tree. It isn't actually that
complicated, what I would need to implement would be to make part
of the tree immutable, so that the HTML for it can already be
generated.
nextaccountic wrote 15 hours 48 min ago:
There was a system with dependent types that ruled out invalid
html at compile time, even dynamically generated html (rather
than a runtime error, you would get a compile error if your
code did something wrong) [1] [2] Needless to say it wasn't
very practical. But there was one commercial site written in it
[3] (the site still exists but not sure if it's still written
in ur/web)
[1]: https://github.com/urweb/urweb
[2]: http://www.impredicative.com/ur/
[3]: https://github.com/bazqux/bazqux-urweb
vshabanov wrote 3 hours 43 min ago:
It's still written in Ur/Web. And the type-safety of Ur/Web
is the reason I started writing it -- I couldn't imagine
myself using untyped JavaScript.
Ur/Web is not very practical for reasons other than type
safety: the lack of libraries and slow compilation when the
project gets big. The language itself is good, though.
Nowadays, I would probably choose OCaml. It doesn't have
Ur/Web's high-level features, but it's typed and compiles
quickly.
1718627440 wrote 1 day ago:
Which is really sad. This is the actual reason why I preferred C
over Python[*] for that project, so I could use my own library for
HTML generation, which does exactly that. It also ameliorates the
`goto cleanup;` thing, since now you can just tell the library to
throw subtrees away. And the best thing is that you can MOVE and
COPY them, which means you can generate code once and then fill it
with the data and still later modify it. This means you can also
refer to earlier generated values to generate something else,
without needing to store everything twice or reparse your own
output.
[*] I mean yeah, I could have written a wrapper, but that would
have taken far more time.
flanfly wrote 2 days ago:
Good showcase. Your code will match the first parameter that has it as
a suffix, not necessarily exactly (with username=blag&name=blub,
get_param("name") will return blag). It also doesn't handle any
percent-encoding.
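For comparison, a sketch of an exact-match variant (same deliberate omissions as the snippet above, still no percent-decoding; the function name is made up): walk the query pair by pair instead of using strstr, so "username" can no longer satisfy a lookup for "name".

  #include <stdlib.h>
  #include <string.h>

  char * get_param_exact (const char * param)
  {
      const char * query = getenv ("QUERY_STRING");
      size_t len = strlen (param);
      while (query != NULL && *query != '\0') {
          const char * end = strchr (query, '&');
          size_t pair_len = end ? (size_t)(end - query) : strlen (query);
          /* accept only a full "param=" prefix of this pair */
          if (pair_len > len && strncmp (query, param, len) == 0
              && query[len] == '=')
              return strndup (query + len + 1, pair_len - len - 1);
          query = end ? end + 1 : NULL;
      }
      return NULL;
  }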
stouset wrote 1 day ago:
Further, when retrieving multiple parameters, you have a
Shlemiel-the-painter algorithm.
[1]: https://www.joelonsoftware.com/2001/12/11/back-to-basics/
1718627440 wrote 1 day ago:
Thanks, he's a good author; I also like reading him. Honestly, not
parsing the whole query string at once feels kind of dumb. To
quote myself:
> In practice you would probably parse all parameters at once and
maybe use a library.
1718627440 wrote 2 days ago:
> Your code will match the first parameter that has it as a suffix,
not necessarily exactly
Depending on your requirements, that might be a feature.
> It also doesn't handle any percent encoding.
This does literal matches, so yes, you would need to pass the param
already percent-encoded. This is a trade-off I made, not for that
case, but for similar issues. I don't like non-ASCII in my source
code, so I would want to encode this in some way anyway.
But you are right, you shouldn't put this into a generic library.
Whether it suffices for your project or not, depends on your
requirements.
recursive wrote 1 day ago:
Ampersands are ASCII, but also need to be encoded to be in a
parameter value.
1718627440 wrote 1 day ago:
Yeah, but you can totally choose to not allow that in your
software.
recursive wrote 1 day ago:
That's true. Your argument about how short parameter
extraction can be gets a little weaker, though, if you only
solve it for the easy cases. Code can be shorter if it solves a
simplified version of the problem statement.
stouset wrote 1 day ago:
This exact mindset is why so much software is irreparably broken
and riddled with CVEs.
Written standard be damned; I’ll just bang out something that
vaguely looks like it handles the main cases I can remember off
the top of my head. What could go wrong?
1718627440 wrote 1 day ago:
Most commenters seem to miss that this is throwaway code
for HN, with a maximum allocated time of five minutes. I
wouldn't commit it like this. The final code did cope with
percent-encoding, even though the project didn't take any
user-generated values at all. And I did read the RFCs, which
honestly most developers I meet don't care to do. I also made
sure the percent-decoding function did not rely on the ASCII
ordering (it only relies on A-Z being contiguous), because of
portability (EBCDIC) and because I have some professional honor.
bruce343434 wrote 1 day ago:
I get that, but your initial comment implied you were about
to showcase a counter to "Hundreds of lines just to grab a
query parameter from a URL", but instead you showed "Poorly
and incompletely parsing a single parameter can be done in
less than 100 lines".
You said you allocated 5 minutes max to this snippet; well, in
PHP this would be 5 seconds and 1 line. And it would be a
proper solution.
$name = $_GET['name'] ?? SOME_DEFAULT;
1718627440 wrote 1 day ago:
And in the C code it looks like this, which is also a proper
solution. (I did not measure the time it took me to write
it.)
name = cgiGetValue (cgi, "name");
if (!name) name = SOME_DEFAULT;
If you allow for GCC extensions, it looks like this:
name = cgiGetValue (cgi, "name") ?: SOME_DEFAULT;
shakna wrote 22 hours 24 min ago:
That would fail on a user supplying multiple values where you
don't expect them.
> If multiple fields are used (i.e. a variable that may
contain several values) the value returned contains all
these values concatenated together with a newline
character as separator.
stouset wrote 4 hours 9 min ago:
In GP’s defense, there is no standard behavior in the
spec for handling repeated GET query parameters.
Therefore any implementation-defined behavior is
reasonable, including: keeping only the first, keeping
only the last, keeping one at random, allowing access
to all of them, concatenating them all with a
separator, discarding the entire thing, etc.
1718627440 wrote 16 hours 38 min ago:
Why? The actual implementation of cgiGetValue I am
talking about does exactly that:
> concatenated together with a newline character
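To illustrate that concatenating option (a sketch of the semantics the quoted docs describe, not the actual cgiGetValue source; the function name is made up, and error handling is elided):

  #include <stdlib.h>
  #include <string.h>

  /* Returns all values of `param` in `query`, separated by '\n',
   * or NULL if the parameter never appears. Caller frees. */
  char * get_param_all (const char * query, const char * param)
  {
      size_t len = strlen (param);
      char * out = NULL;
      size_t outlen = 0;
      while (query != NULL && *query != '\0') {
          const char * end = strchr (query, '&');
          size_t pair_len = end ? (size_t)(end - query) : strlen (query);
          if (pair_len > len && strncmp (query, param, len) == 0
              && query[len] == '=') {
              size_t vlen = pair_len - len - 1;
              out = realloc (out, outlen + vlen + 2);  /* '\n' + NUL */
              if (outlen > 0) out[outlen++] = '\n';    /* separator */
              memcpy (out + outlen, query + len + 1, vlen);
              outlen += vlen;
              out[outlen] = '\0';
          }
          query = end ? end + 1 : NULL;
      }
      return out;  /* e.g. "a=1&b=2&a=3" with "a" yields "1\n3" */
  }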
jaimie wrote 3 days ago:
This was a very well written retrospective on web development. Thank
you for sharing!
martinky24 wrote 3 days ago:
I really enjoyed reading this, especially as someone who hasn’t done
much web front end work!
ksec wrote 3 days ago:
This is such a great read for those of us who lived through it and it
really should be Front Page on HN.
I really wish it had added a few things.
>The concept of a web developer as a profession was just starting to
form.
Webmaster. That was what we were called. And somehow people were amazed
at what we did when 99% of us, as said in the article, really had very
very little idea about the web. (But it was fun.)
>The LAMP Stack & Web 2.0
This completely skipped the part about Perl. And Perl was really big;
I bet at one point in time most web sites were running on Perl.
cPanel, Slashdot, etc. The design of Slashdot is still pretty much the
same today as most of the Perl CMSes of that era. Soon after everyone
knew C wouldn't be part of the Web's CGI-BIN, Perl took over. We had
Perl scripts all over the web for people to copy and paste, FTP-upload,
and chmod before PHP arrived. Many forums at the time were also Perl
scripts.
Speaking of Slashdot, after that was Digg. That was all before Reddit
and HN. I think there used to be something about HN like Fight Club:
"The first rule of fight club is you do not talk about fight club."
HN in the late 00s and early 10s was simply referred to as the orange
site by journalists / web reporters.
And then we could probably talk about Digg v4, and about not
redesigning something that is working perfectly.
>WordPress, if you wanted a website, you either learned to code or you
paid someone who did.
There was a CMS war, or a blogging-platform war before it was even
called a blog. There were many contenders, including those using
Perl / CGI-BIN; I believe it came down to Movable Type vs WordPress.
It also missed forums: Ikonboard, based on Perl, then Invision (PHP)
vs vBulletin. Just like with CMS / blog software, there used to be
some Perl vs PHP forum software as well. And of course we all know
PHP ultimately won.
>Twitter arrived in 2006 with its 140-character limit and deceptively
simple premise. Facebook opened to the public the same year.
Oh, I wish they had mentioned MySpace and Friendster, the social
networks before Twitter and Facebook. I believe I had my original
@ksec Twitter handle registered and lost access to it. It has been
sitting there for years. If anyone knows how to get it back, please
ping me. Edit: And I just realised my HN Proton email address hasn't
been logged in to for months, for some strange reason.
>JavaScript was still painful, though. Browser inconsistencies were
maddening — code that worked in Firefox would break in Internet
Explorer 6, and vice versa.
Oh, it really missed the most important piece of the web era: Firefox
vs IE. Together we pushed Firefox beyond 30%, and in some cases 40%,
of browser market share. That is insanely impressive considering most
of that usage was not from work, because enterprise and business PCs
were still on IE6.
And then Chrome came, and I witnessed and realised how fast things
can change. It was so fast that, without any of the fanfare around
Mozilla, people willingly downloaded and installed Google Chrome. To
this day I have never used Chrome as my main browser, although it has
been a secondary browser since the day it was launched.
>Version control before Git was painful
There was Hg / Mercurial. If anything was going to take over from
SVN, it should have been Hg. For whatever reason I have always been
on the wrong side of history, or of the mainstream, although that is
mostly personal preference: Pascal over C and later Delphi over
Visual C++, Perl over PHP, FreeBSD over Linux, Hg over Git.
>Virtual private servers changed this. You could spin up a server in
minutes, resize it on demand, and throw it away when you were done.
DigitalOcean launched in 2011 with its simple $5 droplets and friendly
interface.
Oh, VPSes were a thing long before DO. DO was mostly copying Linode
from the start, and that is not a bad thing, considering Linode at
the time was the most developer-friendly VPS provider. It took the
crown from, I believe, Rackspace? Or Rackspace acquired one of those
VPS providers before Linode became popular. I can't quite remember.
>Node.js .....Ryan Dahl built it on Chrome's V8 JavaScript engine, and
the pitch was simple: JavaScript on the server.
I still think Node.js and JavaScript on the server were a great idea
with the wrong execution, especially Node.js's NPM. One could argue
there is no way we would have known without first trying it, and that
is certainly true. And it was insanely overhyped in the post-Rails
era around 2012-2014, because of Twitter's fail whales and the idea
that Rails couldn't scale. I think the true spiritual successor is
Bun, integrating everything together very neatly. I just wish I could
use something other than JavaScript. (On the wrong side of history
again: I really liked CoffeeScript.)
>The NoSQL movement was also picking up steam. MongoDB
Oh, I remember the overhyped train of NoSQL MongoDB on HN and the
internet. CouchDB as well. In reality today, SQLite, PlanetScale
Postgres / Vitess MySQL, or ClickHouse is enough for 99% of use
cases. (Or maybe I don't know enough NoSQL to judge its usefulness.)
>How we worked was changing too. Agile and Scrum had been around since
the early 2000s,
Oh, the worst part of Agile and Scrum isn't what they did to the tech
industry; it is what they did to companies outside of it. I don't
think most people realise that by the mid-2010s tech was dominating
mainstream media, words like Agile were floating around in many other
industries, and they all needed to be Agile. Especially American
companies: finance companies that were not tech decided to use these
terms, because it was hip or cool, as part of their KPIs, and along
with consulting firms like McKinsey the Agile movement took over a
lot of industries like a plague.
This reply is getting too long. But I want to go back to the premise
and conclusion of the post,
>I'm incredibly optimistic about the state of web development in
2025....... We also have so many more tools and platforms that make
everything easier.
I don't know, and I don't think I agree. AI certainly makes many
steps we do now easier. But conceptually speaking, everything is
still a bag of hurts; nobody is asking why we need those extra steps
in the first place. Dragging something over via FTP is still easier.
Editing in WYSIWYG Dreamweaver is way more fun, just like I think
desktop programming should be more Delphi-like. In many ways I think
WebObjects is still ahead of many web frameworks today. Even Vagrant
is still easier than what we have today. The only good thing is that
Bun, Rails, HTMX, and even HTML / browsers are finally swinging back
in another (my preferred) direction. Safari 26.2 is finally somewhat
close to Firefox and Chrome in compatibility.
The final battle left is JPEG XL, or maybe AV2 AVIF will prove good
enough. The web is finally moving in the right direction.
squimmy26 wrote 3 days ago:
Brilliant article; I haven't read an industry retrospective of this
quality in a while.
dansjots wrote 3 days ago:
What an incredible article. More than its impressive documented scope
and detail, I love it foremost for conveying what the zeitgeist felt like at
each point in history. This human element is something usually only
passed on by oral tradition and very difficult to capture in cold,
academic settings.
It’s fashionable to dunk on “how did all this cloud cruft become
the norm”, but seeing a continuous line through history of how
circumstances developed upon one another, where each link is
individually the most rational decision in its given context, makes
them an understandable misfortune of human history.