Web 9.75

“The precision of naming takes away from the uniqueness of seeing.” – Pierre Bonnard

Nick Carr comments on Google’s Web 3.0, pointing out that Web 3.0 was supposed to be about the Semantic Web or, as he puts it, the first step in the Machine’s Grand Plan to take over.

For all the numbers we flash about, there really are only so many variations of data, data annotation, data access, and data persistence, and every version of “web” features the same concepts, rearranged. Perhaps instead of numbers, we should use descriptive terminology when naming each successive generation of the web, starting with the architectures themselves.

Application Architectures

thin client

This type of application is old. Older than dirt. A thin client is nothing more than an access point to a server, typically managing protocols but neither storing data nor installing applications locally. All the terminal traditionally does is capture keystrokes and pass them along to the server-based application. The old mainframe applications were, and many still are, thin clients.
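The pattern above can be sketched in a few lines. This is a minimal, illustrative model, not a real terminal protocol: the “terminal” holds no state and no logic, it only forwards input and echoes the server’s reply. The names `MainframeApp` and `Terminal` are hypothetical.

```python
class MainframeApp:
    """All state and all application logic live server-side."""
    def __init__(self):
        self.ledger = []

    def handle(self, keystrokes: str) -> str:
        command, _, arg = keystrokes.partition(" ")
        if command == "ADD":
            self.ledger.append(arg)
            return f"STORED {arg}"
        if command == "LIST":
            return ", ".join(self.ledger) or "(empty)"
        return "UNKNOWN COMMAND"

class Terminal:
    """The thin client: capture keystrokes, pass them along, display the reply."""
    def __init__(self, server: MainframeApp):
        self.server = server  # in reality, a network connection

    def send(self, keystrokes: str) -> str:
        return self.server.handle(keystrokes)

server = MainframeApp()
term = Terminal(server)
print(term.send("ADD invoice-42"))   # STORED invoice-42
print(term.send("LIST"))             # invoice-42
```

Replace the in-process call with a socket and you have the mainframe/terminal arrangement the paragraph describes.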

There was a variation of the thin client a while back, when the web was really getting hot: the network computer. Oracle did not live up to its name when it invested in this technology long ago. The network computer was a machine created solely to access the internet and serve up pages. In a way, it’s very similar to what we have with the iPhone and other handheld devices: there is no way to add third-party functionality to the interface device, and any functionality at all comes in through the network.

Is a web application a thin client? Well, yes and no. For something like an iPhone or Apple TV, I would say yes, it is a thin client. For most uses, though, web applications require browsers and plug-ins and extensions, all of which do something unique and require storage, the ability to add third-party applications, and processing capability on the client. I would say that a web application where most of the processing is done on the server, and little or none in the browser, is a thin client. Beyond that, though, the web application would be…

client/server

A client/server application typically has one server or group of servers managed as one, and many clients. The client could be a ‘thin’ client, but when we talk about client/server, we usually mean that there is an application, perhaps even a large application, on the client.

In a client/server application, the data is traditionally stored and managed on the server, while much of the business processing, as well as the user interface, is managed on the client. This isn’t a hard and fast separation, as data can be cached on the client temporarily in order to increase performance or to work offline. Updates, though, typically have to be made back to the server at some point.
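That cache-then-sync behavior can be sketched like so. This is a toy model under my own naming (`Server`, `Client` are illustrative): reads fill a local cache, writes apply locally and queue up, and a later sync pushes the queued updates back to the server.

```python
class Server:
    """The authoritative store: data lives here."""
    def __init__(self):
        self.records = {}

class Client:
    """Caches reads and queues offline writes for a later sync."""
    def __init__(self, server: Server):
        self.server = server
        self.cache = {}
        self.pending = []          # updates made while working locally

    def read(self, key):
        if key not in self.cache:  # cache miss: fetch from the server once
            self.cache[key] = self.server.records.get(key)
        return self.cache[key]

    def write(self, key, value):
        self.cache[key] = value            # apply locally first
        self.pending.append((key, value))  # queue for the server

    def sync(self):
        for key, value in self.pending:    # push queued updates back
            self.server.records[key] = value
        self.pending.clear()

server = Server()
server.records["price"] = 100
client = Client(server)
print(client.read("price"))      # 100, fetched from the server, now cached
client.write("price", 120)       # local only, for the moment
print(server.records["price"])   # still 100: not yet synced
client.sync()
print(server.records["price"])   # 120
```

The gap between the local write and the sync is exactly where real client/server applications earn their complexity (conflicts, retries, partial failures).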

The newest incarnation of web applications, Rich Internet Applications (RIA), are, in my opinion, a variation of client/server applications. The only difference between these and applications built with something like Visual Basic is that we’re using the same technologies we use to build more traditional web applications. We may or may not move the application out of the browser, but the infrastructure is still the same: client/server.

However, where RIA applications may differ from more traditional web applications is that RIA apps could be a further variation of client/server: a three-tier client/server application…

n-tier

In a three-tier, or more properly n-tier, client/server application, there is separation between the user interface and the business logic, and between the business logic and the data, creating three levels of control rather than two. The reasoning is that changes in the interface between the business layer and the data don’t necessarily impact the UI, and vice versa. To match the architecture, the UI can be on one machine, the business logic on a second, and the data on a third, though the latter isn’t a requirement.
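The layering can be shown as three classes, each knowing only about the tier directly beneath it. A minimal sketch with invented names (`DataTier`, `BusinessTier`, `UITier`), just to show the separation of concerns, not any particular framework:

```python
class DataTier:
    """Persistence only; knows nothing about rules or presentation."""
    def __init__(self):
        self._orders = {}
    def save(self, order_id, amount):
        self._orders[order_id] = amount
    def load(self, order_id):
        return self._orders[order_id]

class BusinessTier:
    """Rules live here; the only layer that touches the data tier."""
    def __init__(self, data: DataTier):
        self.data = data
    def place_order(self, order_id, amount):
        if amount <= 0:                      # a business rule the UI never sees
            raise ValueError("amount must be positive")
        self.data.save(order_id, amount)
    def order_total(self, order_id):
        return self.data.load(order_id)

class UITier:
    """Presentation only; talks to the business tier, never the data tier."""
    def __init__(self, logic: BusinessTier):
        self.logic = logic
    def show_order(self, order_id):
        return f"Order {order_id}: ${self.logic.order_total(order_id):.2f}"

app = UITier(BusinessTier(DataTier()))
app.logic.place_order("A1", 19.99)
print(app.show_order("A1"))   # Order A1: $19.99
```

Swap `DataTier` for a database or `UITier` for a browser front end and the other tiers stay untouched, which is the whole point of the architecture.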

Some RIA applications can fit this model, because many do incorporate a middleware component. As an example, the newer Flex infrastructure can be built as a three-tier application with the addition of a Flex server.

Some web applications, whether RIA or not, can also make use of another variation of client/server…

distributed client/server

Traditional client/server: many clients working against one set of business logic mapped to a database server, running serially. It’s the easiest type of application to create, but the one least likely to scale, and from this arises the concept of a distributed client/server, or distributed computing, architecture.

The ‘distributed’ in this title comes from the fact that the application functionality can be split into multiple objects, each operating on possibly different machines at the same time. It’s the parallel nature of the application that tends to set this type of architecture apart, and which allows it to more easily scale.

J2EE applications fit the distributed computing environment, as does anything running CORBA, the older COM, or the newer .NET. It is not a trivial architecture, and it needs the support of infrastructure components such as WebLogic or JBoss.

This ‘distributed parallel’ functionality sounds much like today’s widget-bound sidebars, wherein a web page can have many widgets, each performing a small amount of functionality on a specific piece of data at the same time (or as parallel as can be, considering that the page is probably not running in a true multi-threaded space).

Remember, though, that widgets tend to operate as separate and individual applications, each to their own API (Application Programming Interface) and data. Now, if all the widgets were front ends to backend processes running in parallel, and working together to solve a problem, then the distributed architecture shoe fits.
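The “front ends to backend processes running in parallel, working together to solve a problem” idea can be sketched with a thread pool: each hypothetical “widget” works on its own slice of data concurrently, and the results are combined at the end. The feed names here are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Each "widget" is a small, independent unit of work over its own data.
def widget(name, numbers):
    return name, sum(numbers)

feeds = {"sales": [10, 20, 30], "returns": [1, 2], "shipping": [5, 5]}

# Run the widgets in parallel and gather their answers into one result.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(widget, name, data) for name, data in feeds.items()]
    results = dict(f.result() for f in futures)

print(results)   # {'sales': 60, 'returns': 3, 'shipping': 10}
```

In a real distributed system the workers would live on different machines behind an object broker or message queue, but the shape of the problem, independent parallel units cooperating on a shared answer, is the same.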

There’s a variation of distributed computing–well, sort of–which is…

Service Oriented Applications

Service Oriented Applications (SOA), better known as ‘web services’. These are the APIs, the RESTful service requests, and the other services that run the web we seem to become more dependent on every day. Web services are created completely independently of the clients, supporting a specific protocol and interface that make the web services accessible regardless of the characteristics of the client.

The client then invokes these services, sending data and getting data back, and does so without having any idea of how the web services were developed or what language they were developed in, knowing nothing other than the prototype and the service.
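That “knows only the prototype” contract can be sketched with an interface and an interchangeable backend. The service and its names (`GeocodeService`, `InMemoryGeocode`) are invented for illustration; a real deployment would put HTTP and JSON between the two halves, but the client code would look the same.

```python
class GeocodeService:
    """The published contract: an address in, a (lat, lon) pair out.
    This prototype is ALL the client is allowed to know."""
    def lookup(self, address: str) -> tuple:
        raise NotImplementedError

class InMemoryGeocode(GeocodeService):
    """One possible backend; the client neither knows nor cares which."""
    PLACES = {"St. Louis, MO": (38.63, -90.20)}
    def lookup(self, address):
        return self.PLACES[address]

def client(service: GeocodeService, address: str) -> str:
    lat, lon = service.lookup(address)   # invoke by prototype only
    return f"{address} is at {lat}, {lon}"

print(client(InMemoryGeocode(), "St. Louis, MO"))
```

Swap `InMemoryGeocode` for a class that makes a REST call and the client function is untouched, which is the decoupling the paragraph describes.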

Clean and elegant, and increasingly running today’s web. The interesting thing about web services is that they can range from almost trivially easy to tortuously complex to implement. And no, I didn’t specifically mention the WS-* stack.

Of course, all things being equal, there is no simpler architecture than…

A stand alone application

A stand alone application is one where no external service is necessary for accessing data or processes. Think of something like Photoshop, and you get a stand alone application.

The application may have internet capabilities, but typically these are incidental. In addition, the data may not always be on the same machine, but it doesn’t matter. For instance, I run Photoshop on one Mac, but many of my images are on another Mac that I’ve connected through networking. However, though I may be accessing the data over the ‘net, the application treats the data as if it were local.

The key characteristic of a stand alone application is that you can’t split the application up across machines: it’s all or nothing. It’s also the only architecture that can’t ‘speak’ web, so we can’t look for Web 3.0 among the stand alones.

Alone again, naturally…

No joy in being alone; what we need is a little help from our friends.

P2P

P2P, or peer-to-peer, applications are built in such a way that, once multiple peers have discovered each other through some intermediary, they communicate directly, sharing processes, data, or both. A client can become a server, and a server can become a client.

Joost is an example of a P2P application, as is BitTorrent. There is no centralized place for data, and the same piece of data is typically duplicated across a network. Using a P2P application, I may get data from one site, which is then stored locally on my machine. Another person logging on to the P2P network can then get that same piece of data from me.

The power of this environment is that it can really scale. No one machine is burdened with all data requests, and a resource can be downloaded from many sources rather than just one. It is not a trivial architecture, though, and requires careful management to ensure that any one participant’s machine isn’t made overly vulnerable to hacking, that downloads are complete, that data doesn’t get released into the wild, and so on. Communication and network management are critical aspects of a P2P application.
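The “get data from one site, store it locally, then serve it to the next person” mechanics described above can be sketched in miniature. This is a toy chunk-swarm under invented names, not the BitTorrent protocol: a file is split into chunks, a peer pulls missing chunks from whoever has them, and the moment it has a chunk it becomes a source for it.

```python
class Peer:
    def __init__(self, name):
        self.name = name
        self.chunks = {}                 # chunk index -> bytes

    def fetch_from(self, swarm, total):
        """Pull each missing chunk from any peer in the swarm that has it."""
        for i in range(total):
            if i in self.chunks:
                continue
            for other in swarm:
                if other is not self and i in other.chunks:
                    self.chunks[i] = other.chunks[i]  # now we serve it too
                    break

    def assemble(self, total):
        return b"".join(self.chunks[i] for i in range(total))

data = b"peer to peer"
chunks = [data[i:i + 4] for i in range(0, len(data), 4)]

seed = Peer("seed")                      # the original holder of the data
seed.chunks = dict(enumerate(chunks))
leech = Peer("leech")                    # starts with nothing
swarm = [seed, leech]

leech.fetch_from(swarm, len(chunks))
print(leech.assemble(len(chunks)))       # b'peer to peer'
```

A third peer joining this swarm could now fetch from `leech` as easily as from `seed`, which is where the scaling comes from; real systems add the hard parts (chunk verification, peer selection, incomplete downloads) that the paragraph warns about.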

These are the architectures, at least, the ones I can think of off the top of my head. Which, then, becomes the ‘next’ Web, the Web 3.0 we seem to be reaching for?

Web 3.0

Ew Ew Ew! The next generation of the web must be Google’s cloud thing, right? So that makes Web 3.0 a P2P application, and we call it “Google’s P2P Web” or “MyData2MyData”?

Ah, no.

The concept of ‘cloud’ comes from P2P*. It is a lyrical description of how data appears to a P2P application…coming from a cloud. When we make a request for a specific file, we don’t know exactly where the file is pulled from; chances are, it’s coming from multiple machines. We don’t see any of this, though, hence the term ‘cloud’. Personally, I prefer void, but that’s just semantics.

The term cloud has been adopted for other uses. Clouds are used with ‘tags’ to describe keyword searches, the size of the word denoting the number of requests. I once read a writer who called the entire internet a cloud, which seems too generic to be useful. Dare Obasanjo wrote recently on the discussions surrounding OS clouds, which, frankly, don’t make any sense at all; methinks they’re using cloud in the poetic sense: artful rather than factual.

The use of ‘cloud’ also occurs with SOA, which probably explains Google’s use of the term. And Microsoft’s. And Apple’s, if they wanted, but they didn’t–being Apple (Stickers on our machines? We don’t need no stinking stickers!). Is the next web then called “BigCo SOA P2P Web”?

Let’s return to Google CEO Schmidt’s use of the cloud, as copied from Carr’s post, mentioned earlier:

My prediction would be that Web 3.0 would ultimately be seen as applications that are pieced together [and that share] a number of characteristics: the applications are relatively small; the data is in the cloud; the applications can run on any device – PC or mobile phone; the applications are very fast and they’re very customizable; and furthermore the applications are distributed essentially virally, literally by social networks, by email. You won’t go to the store and purchase them. … That’s a very different application model than we’ve ever seen in computing … and likely to be very, very large. There’s low barriers to entry. The new generation of tools being announced today by Google and other companies make it relatively easy to do. [It] solves a lot of problems, and it works everywhere.

With today’s announcement of Google shared space, we can assume that Google thinks of third-party storage as ‘cloud’, similar to Microsoft with its Live SkyDrive or Apple with its .Mac. It’s the concept of putting either data or processes out on third-party systems so that we don’t have to store them on our local machines or lease server space to manage them on our own.

In Google’s view, Web 3.0 is more than ‘just’ the architecture: it’s small, fast applications built on an existing infrastructure (think Mozilla, Silverlight, Flex, etc.) that can run locally or remotely; on phones, handhelds, and/or desktop or laptop computers; that store data locally and remotely; built on web services run on one or many machines, created by one or more companies. I guess we could call Google’s web the Small, Fast, Device Independent, Remote Storage, SOA P2P Web, which I will admit would not fit easily on a button, nor look all that great with ‘beta’ stuck to its ass.

Not to mention that it doesn’t incorporate all that neat ‘social viral’ stuff. (I knew I forgot something.)

The social viral stuff

Whatever makes people think that Facebook or MySpace or any of the like is ‘new’? Since the very first days of the internet we’ve had sites that enabled social gathering of one form or another. The only thing the newer technology provides is a place where one can hang one’s hat without having to have one’s own server or domain. That’s not ‘social’–that’s positional.

Google mentions how we won’t be buying software at the store. I had to check the date on the talk, because we’ve been ‘spreading’ software through social contact for years. Look in the Usenet groups and you’ll see recommendations for software or links to download applications. Outside of an operating system and a couple of major applications, I imagine most of us download our software now.

What Google’s Schmidt is talking about isn’t downloaded software so much as software that has a small installation footprint or doesn’t even need to be installed at all. Like, um, just like the software it provides. (Question: What is Web 3.0? Answer: What we’re selling.)

Anyone who has ported applications is aware of what a pain this is, but the idea of a ‘platformless’ application has been around as long as Java has, which is longer than Google. It’s an attractive concept, but the problem is that you’re more or less tied into the company, and that tends to wear the shininess off ‘this’ version of the web–not to mention all that ‘not knowing exactly what Google is recording about us as we use the applications’ thing that keeps coming up in the minds of us paranoid few.

Is the next web then, the Small, Fast, Device Independent, Remote Storage, SOA P2P, Proprietary Web? God, I hope not.

Though Schmidt’s bits and cloudy pieces are a newer arrangement of technology, the underlying technology and the architectures have been around for some time: the only thing that really differs is the business model, not the tech. In this case, then, ‘cloud’ is more marketing than making. Though the data could end up on multiple sites, hosted through many companies, the Google cloud lacks both the flexibility and the freedom of the P2P cloud, because at the heart of the cloud is…Google. I’ve said it before and will repeat it: you can’t really have a cloud with a solid iron core.

Though ‘cloud’ is used about as frequently as lipstick at a prom, I don’t see the next generation of the web being based on either Google’s cloud, or Microsoft. Or Adobe’s or Mozilla’s or Amazon’s or any single organization.

If Google’s Web 3.0, or, more properly, Small, Fast, Device Independent, Remote Storage, SOA P2P, Proprietary, Web with an Iron Butt, is a bust, does this mean, then, that the Semantic Web is the true Web 3.0 after all?

Semantic Web Clouds…and stuff

Trying on for size: a Semantic Client/Server Web. Nope. Nope, nope, nope. Doesn’t work. There is no such thing as a semantic client/server. Or a semantic thin client, or even distributed semantics, or SOA RDF, though this last comes closest, while managing to sound like something that belongs on a Boy Scout badge.

Semantics on the web is basically about metadata: data about data. Our semantic efforts are focused on how metadata is recorded and made accessible. Metadata can be recorded or provided as RDF, embedded in a web page as a microformat, or even found within the blank spaces of an image.
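The RDF model underneath all of this reduces metadata to (subject, predicate, object) triples. A toy triple store in plain Python, just to show the shape; the example subjects and Dublin Core-style predicate names are illustrative, and real work would use an RDF library rather than this sketch.

```python
# A triple store is just a set of (subject, predicate, object) statements.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the terms that were supplied."""
    return {t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)}

add("photo.jpg", "dc:creator", "Shelley")
add("photo.jpg", "dc:title", "Clouds over St. Louis")
add("post.html", "dc:creator", "Shelley")

# Everything with a given creator, found through the metadata alone.
print(sorted(t[0] for t in query(predicate="dc:creator", obj="Shelley")))
# ['photo.jpg', 'post.html']
```

Smarter searches and better categorization fall out of exactly this kind of query: you ask about the data *about* the data, not the data itself.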

We all like metadata. Metadata makes for smarter searches, more effective categorization, better applications, findability. If data is one dimension of the web, then metadata is another, equally important.

The semantic web means many things, but “semantic web” is not an application architecture, or a profoundly new way of doing business. Saying Web 3.0 is the Semantic Web implies that we’ve never been interested in metadata in the past, or that we’ve been waiting for some kind of solar congruence to bring together the technology needed.

We’ve been working with metadata since day one. We’ve always been interested in getting more information about the stuff we find online. The only difference now from the good old days of web 1.0 is that we have more opportunities and more approaches, more people are interested, and we’re getting better at collecting and using the metadata. Then again, we’re also getting better with just the plain data, too.

Web 3.0 isn’t Google’s cloud, and it isn’t the Semantic Web, and it certainly isn’t the Small, Fast, Device Independent, Remote Storage, Viral, SOA, P2P, Proprietary, Smart Web with an Iron Butt. Heck, even Web 3.0 isn’t Web 3.0. So what is the next great Web, and what the devil are we supposed to call it?

Web 9.75

It is a proprietary thing, this insistence on naming things. “From antiquity, people have recognized the connection between naming and power”, Casey Miller and Kate Swift wrote.

We can talk about Web 1.0, or 2.0, or 3.0, but my favorite is Web 9.75, or Web Nine and Three-Quarters. It reminds me of the train platform in the Harry Potter books, which could only be found by wizards. In other words, only found by the people who need it, while the rest of the world thinks it’s rubbish.

There are as many webs as there are possible combinations of all the technologies. Then again, there are as many webs as there are people who access them, because we all have our own view of what we want the web to be. Thinking of the web this way keeps it a marvelously fluid and ever-changing platform from which to leap, unknowing and unseeing.

When we name the web, however, give it numbers and constrain it with rigid descriptions and manufactured requirements, then we really are putting the iron into the cloud: clipping our wings, forcing our feet down paths of others’ making. That’s not the way to open doors to innovation; that’s just the way to sell more seats at a conference.

Instead, when someone asks you what the next Web is going to be, answer Web 9.75. Then, when we hear it, we’ll all nudge each other, wink, and giggle, because we know it’s nonsense, but no more nonsense than Web 1.0, Web 2.0, Web 3.0, or even Google’s Web-That-Must-Not-Be-Named.

*As reminded in comments, network folks initially used ‘cloud’ to refer to that section of the network labeled “…and then a miracle happens…”


12 Responses to Web 9.75

  1. Arthur says:

    (that pretty much sums it up from a developer’s perspective. Great job, Shelly)

  2. Eric Norman says:

Thin client: a teletype that can draw pictures.

  3. Pingback: On Web Versions | iface thoughts

  4. Julian Bond says:

    I’m getting along just fine with Web 2.01 SP2. I see no reason to upgrade.

    Whatever happened to “Decentralisation”? Not just that we’d have lots of P2P Client-Client apps, but that when we needed the Client-Server pattern, we’d all run our own servers. And lots of people would run aggregator services that added value to our P2P applications. Oh, right. GYAM bought our favourite aggregator and turned it into a dinosaur.

  5. Kevin Marks says:

    I think you’re being a bit harsh on Eric; the cloud has been the symbol for IP networking as long as I can remember. Network engineers use it to mean that routing has a Someone Else’s Problem field around it. This isn’t just a dead white male thing, I know lots of women who agree about it.
    My generation draws the Internet as a cloud that connects everyone; the younger generation experiences it as oxygen that supports their digital lives. The old generation sees this as a poisonous gas that has leaked out of their pipes, and they want to seal it up again.
    Cue up The Orb’s Little Fluffy Clouds…

  6. trevor says:

    You may have competition.

    http://www.mkbergman.com//?p=248

  7. the semantic web still needs the usability, customer orientation, and problem-solving ability of web 2.0 applications, especially we need people that get dragged from science into vc to make money out of grey matter. The web 2.0 needs innovative ideas, because at the moment, all gray matter there is pointed towards “how to make the nth desktop application into 2mio lines of undebuggable javascript code”, which is sad. Even worse, the API problem: once a Nth flickr alternative is out there, how to guarantee they all got the same API? Ontologies and updateable SPARQL together with some basic web stuff (webdav & RDF come to mind) are the way to go, but who will walk this way?

    btw: The real term for this is Semantic Web 2.0 :-)
    my old rantings: http://leobard.twoday.net/stories/3520709/

  8. Karl says:

    This post aptly shows why you should do the conference circuit here and there. I hear and see an awesome presentation in this.

    An aside, like Kevin, I tend to use the word cloud to describe the Internet as a whole that connects everyone.

  9. Sign me up. Web 9.75 is technology so far ahead of everybody else that the marketers haven’t arrived yet. The payoff is so far into the future that we don’t even know if people will still be using money then.

  10. Aruni says:

    I’m still trying to figure out how Web 2.0 is different than the ASP model espoused so many years ago (before the bubble burst!). At the first company I founded, we used the ASP model to deliver our web apps to customers…now in my second we call it Web 2.0.

    A rose by any other name is still the same….

  11. Aruni, the main difference is that the ASP approach is essentially “poof! instant silo!”. This is now largely known as ‘Software As A Service’. Salesforce.com comes to mind as the most prominent exemplar.

    Web 2.0 is essentially “hey, jump into the pool, the water’s fine! You can add your own water to everybody else’s to make the pool deeper! Why no, I hadn’t noticed that it was kind of yellow… what’s your point?”

    OK, I kid. But the basic approach of piling all users’ data together by default (modulo optional access and privacy controls) like, say the exemplars Flickr.com and del.icio.us, is rather antithetical for most ASPs, many of which need to go so far as to actually provision separate hardware to ensure one customer’s data is completely isolated from another’s for any of a variety of reasons (ranging from regulatory to paranoia).

    There is some crossover with ASP provisioning of private-label Web 2.0 apps, but I haven’t heard of any huge successes with that approach so far.

  12. John Dowdell says:

    “Though ‘cloud’ is used about as frequently as lipstick at a prom, I don’t see the next generation of the web being based on either Google’s cloud, or Microsoft. Or Adobe’s or Mozilla’s or Amazon’s or any single organization.”

    For what it’s worth, Adobe network storage tends to be about specific Adobe-oriented things (internal like serialization; external like Kuler colors), rather than about building a general personalization database for advertising revenues a la Google, Microsoft, Yahoo.

    Mozilla seems to be in a similar type of restrained storage position. Amazon is a unique case.

    jd/adobe