A Web Developer from 2001 Wouldn’t Even Recognize this World

I work with people much younger than I am, but the reality I’m describing in this article is really just 12 years ago. It feels like another era entirely. This is especially true if you develop anything that touches the web.

When I started my career it was all about web applications that involved full round-trips to a server. You had a browser (or a WAP phone), your browser made a request for a web page, waited a few seconds (a few seconds!), and you got a fully assembled HTML page in return. It didn’t matter, because the Web was still so full of novelty that we were happy enough just to be able to do things like read the news online. Maybe your local newspaper had a website; most likely it didn’t. There was no YouTube. Web pages weren’t really connected together in the way they are now. Back then it wasn’t like loading TheStreet.com required a bunch of asynchronous calls out to social networks to populate Like buttons – there were no social networks. It was just HTML and images, and it took forever. It was fine.

My first two jobs were at Kesmai, an online gaming company in Charlottesville, where I built an in-house cross-promotional tool starting in 1997, and then at TheStreet.com in New York, which I joined in 1999. Web “applications” at that time were just an inch beyond putting some scripts in cgi-bin. At Kesmai it was Perl-based CGI scripts. Between Kesmai and TheStreet I was working on systems that used a proprietary Netscape server product. At TheStreet.com we were using Dynamo behind Apache, so we had JHTML and Droplets, and that was my first encounter with a site that had to scale. We had a TV show on Fox, and maybe something like 600-700 people could use the site at the same time. (Again, that was huge back then. How times have changed.) Everything was template-based. Servlets were around, maybe, but I don’t really remember diving into the Servlet API and JSPs until Struts came along, maybe in 2001.

Back then, companies like Forbes.com, which I moved to after TheStreet.com, invested a crazy amount of money in hardware infrastructure. There was still a lot of proprietary software involved in the core of a web site – expensive CMS systems, etc. Open source was around, yes – we ran Apache – but it wasn’t like it is now. You likely paid a hefty sum of money for a large portion of your production stack. Around 2001 and 2002, a small group of people were starting to focus on speed, and the way you achieved speed at scale back then? Drop a few million on a couple of big Sun servers. It worked. It seems old-fashioned now, but as a developer I’d work with the operations team (then as now, the operations team didn’t know much about Java), and you’d help them size heaps and figure out how to make the best use of 64 GB of RAM on an E450. You’d run multiple JVMs on the thing, and someone might install something like Squid to cache things at the web layer.

Back then, you could touch the servers. They were shipped to your offices. Companies like Sun and SGI invested a lot of money to make servers look all fancy. These things were purple and had blue LEDs (remember, high-brightness blue LEDs were, at one point, really new to us all). I remember seeing advertisements for servers in programming magazines. Now if you look back at those, it’s as strange as seeing an advertisement for a station wagon in a National Geographic from 1985. These days, I don’t even know who makes the hardware that runs the sites I work on, and with the exception of the database server, I don’t even care whether it’s a physical machine. Back then, everybody got all excited about the E4500 in the foosball room.

There was no memcached, there was no AJAX, there was no AngularJS, there was no REST. SOAP was new and you probably didn’t use it yet. There was no Google Analytics – remember, Google was still a tiny startup at the time. I remember having a discussion about Ruby in 2001 with a colleague who was excited by it, but Rails didn’t exist yet. Perl and PHP were around – they’ve been around forever – but you really weren’t confronted with systems that spanned several languages. JavaScript was around, but you probably weren’t going to dare use it, because it wasn’t like there were any standards between browsers. HTML5, huh? This was back when people still used the blink tag. Need to crunch a huge amount of data? Well, first of all, huge is relative, and you didn’t have things like Hadoop. It just didn’t exist yet. Big Data? Yeah, back then if you had 5-10 GB you were running a huge database. Huge. XML was still a really good idea. Flash was about to really take off.

If we could travel back in time, snatch a web developer from 2001, and drop them into 2013, they’d flip out. They’d look at your production network and wonder what happened. We’d have to tell them things like, “You’ve missed a bunch of things: this kid at Harvard created a system in PHP to keep track of friends, and that changed everything. Google now runs the world. Also, the site ‘Dancing Hamster’ isn’t what it used to be.”

I look at people who started working in 2007 or 2008 and think about how strange it is that none of this is new to them – because I’m still living in 1993, still amazed at the functionality of Gopher.

And you can thank Mark Smith for this YouTube video…

Getting Off the Analytics Treadmill

Years ago, I had the idea that I should put Google Analytics on my own web site. You know, why not track the readership, find out why people show up, track top referrers, maybe even define a couple of conversion goals. At the time, maybe it made more sense than it does today. I had this open source book that I had made available; it got a lot of traffic, and I was thinking about trying to convert readers into newsletter signups. Whatever. My plans were nebulous, and, predictably, those plans were put aside for paying client work, a couple of kids, and life in general.

These days, the idea of tracking my organic vs. direct vs. referral traffic and locating the top metropolitan areas of my blog’s audience just seems silly, so I got rid of it. I turned off analytics, and now I’m realizing that there is one less dial to check. One less meaningless number to pay attention to. One less game to play every day when I’m looking for ways to waste time. Here’s what freedom looks like.

[Image: freedom-from-analytics]

After a week of this, I’m finding it easier to write. I’m not tempted to go dive into these meaningless traffic patterns like some distracted data scientist, asking myself: “What is it about Parisians that attracts them to my complaints about Maven?” No, I’ll write what I write, and if that garners an audience, great. If it doesn’t, great. In some ways, who cares? This new idea is writing for writing’s sake – I’m not selling advertising, I’m not paying myself to write this blog – but then it occurs to me…

…why not do the same for the businesses I help with blogging? Why not turn off analytics for a month (or maybe two)? Take a radical approach of just doing interesting things. Produce content and don’t focus on bounce rate or returning vs. new browsers; just do it. If someone asks about lead generation form conversions, laugh at them and say, “not my job.” Here’s the thing: I’ve worked for companies that have had amazing growth in traffic. (That Maven book had millions of unique viewers.) I’ve been responsible for that growth over 24-36 months, and it didn’t correlate with us doing interesting things or even generating revenue. You can make traffic go up, you can make people like you, and you can get high off of your Analytics graphs, but it’s really so worthless. And there are so many graphs to look at that you will always find one that is going up. I’m starting to wonder if analytics is just a silly distraction.

What I’m wondering, after this personal experiment of turning off statistics, is whether analytics, marketing automation, conversion tracking, AdWords… whether all of it is just detracting from what should be the Prime Directive for a technology startup (or any startup): connect with some customers, do what they want you to do efficiently, and iterate on what works. The only statistic that really matters is revenue, so what I’m contemplating is turning off (or, more accurately, not paying attention to) analytics not only for a personal blog but for a business blog as well.

At the end of the day, if you need some silly HTML counter to tell you whether your ideas are working, you are not going to succeed. If you are deciding what to do based on a focus group or a poll, you should quit now.

(One disclaimer: there’s still a bit of statistics gathering on WordPress. WordPress tells me how many readers I get, but it isn’t something I give more than a cursory glance. In fact, I wonder if there’s an option to turn it off. I’m tempted.)

Sir, bad news, your backups have mutated into evil robots…

Clearly, I think this DNA storage thing is made of awesome. TL;DR – researchers are starting to write “bitstreams” to DNA, which has a theoretical maximum capacity of “all the world’s data” in about 3 grams of single-stranded DNA. I also think that Google is going to be the one that takes this technology and runs with it, but there’s a darker side…

When I told my wife about this yesterday, her first reaction was, “You know this is *exactly* how we mistakenly create a half-human army of evil robots.” And she may have a point: when I eventually archive my photo library to DNA storage in 2018, I will be juggling the ingredients of life. But it’s a big step from that sequence to a sequence for an organism. Although, if we can do this, it’ll be interesting to see what (bio-)hackers of the future come up with.

Man, is this industry calling out to be regulated by an overzealous Congress yesterday, or what?

Also, the other thing Dr. Church talks about is synthetic biology – in other words, eventually getting to the point where you can write a bitstream for a real organism (an oversimplification, I know). That’s *exactly* the sort of thing that will make all the neo-vegan organic maniacs in my life go crazy about Science overstepping its bounds. But I say bring on the evil robots; I like Science.

Misgivings about Personal Genomics

At SciFoo, I sat in on a session about a handheld genetic sequencing device (hypothetical, but not improbable). The session was hosted by Dr. Fire and attended by some of the most important voices in genetic research and sequencing. I’m still ruminating on some of the ideas expressed, but my general sense is that once we get sequencing down to the range of 50 cents a pop, we’re entering an age where one’s genetic data becomes instantly accessible. Couple a handheld genetic sequencing device with a wireless connection to the internet and a service capable of correlating a given sequence with aggregate data from other users… well, that’s starting to look probable. Most of us look at this and see the potential for clinicians to have instant access to a medical history (and propensities) far more accurate than anything we have today.

I tend to view such a device as a mixed bag. While a handheld genetic sequencing device could be used by a clinician, it is also very likely to find its way into the hands of a tyrant, government, or agency focused on using genetic information to purify populations and “mitigate” health costs. We’re entering an age where everyone is going to know what diseases they are prone to and what they should be watching out for. Fast forward a few hundred years, and I worry that we’ll live in a world that uses this information to impose a sort of selection, a bias. I worry about a world in which this data is out of our hands and as influential on our lives as our credit score. Call me a tin-foil-hat conspiracy theorist, but I’m a pessimist when it comes to personal genomics.

Blah.

In other news, too.blogspot.com will likely become the most subscribed blog in the history of blogging as Sergey Brin starts to discuss his life outside of work.