A couple of days ago David Gelernter – a well-known computer science visionary who famously survived an attack by the Unabomber – wrote a piece on Wired called ‘The End of the Web, Search, and Computer as We Know It’. In it, he summarized his prediction that the web is moving from a static, document-oriented web to a network of streams. Nova Spivack, my Co-founder and CEO at Bottlenose, also wrote about this in more depth in his blog series about The Stream.
I’ve been interested in the work of David Gelernter for quite some time and thought this might be a good time to revisit some of his previous predictions. In 1999 he wrote a piece on Edge called ‘The Second Coming – A Manifesto’. While there are many pie-in-the-sky ideas in there, I found some key takeaways that are highly relevant today:
18. But the Net will change radically before it dies. When you deal with a remote web site, you largely bypass the power of your desktop in favor of the far-off power of a web server. Using your powerful desktop computer as a mere channel to reach web sites, reaching through and beyond it instead of using it, is like renting a Hyundai and keeping your Porsche in the garage. Like executing programs out of disk storage instead of main memory and cache. The Web makes the desktop impotent.
19. The power of desktop machines is a magnet that will reverse today’s “everything onto the Web!” trend. Desktop power will inevitably drag information out of remote servers onto desktops.
20. If a million people use a Web site simultaneously, doesn’t that mean that we must have a heavy-duty remote server to keep them all happy? No; we could move the site onto a million desktops and use the internet for coordination. The “site” is like a military unit in the field, the general moving with his troops (or like a hockey team in constant swarming motion). (We used essentially this technique to build the first tuple space implementations. They seemed to depend on a shared server, but the server was an illusion; there was no server, just a swarm of clients.) Could Amazon.com be an itinerant horde instead of a fixed Central Command Post? Yes.
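The tuple space idea Gelernter mentions here comes from his own Linda coordination language: processes share nothing but a bag of tuples, and cooperate through `put`, `read`, and `take` operations with wildcard matching. As a rough sketch of that idea (a minimal single-process, in-memory version; the class and method names are my own, not Linda's actual syntax):

```python
from threading import Condition

class TupleSpace:
    """Minimal in-memory sketch of a Linda-style tuple space.
    `None` in a template acts as a wildcard."""

    def __init__(self):
        self._tuples = []
        self._cond = Condition()

    def put(self, tup):
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def read(self, template):
        # Non-destructive: returns a matching tuple, blocking until one exists.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def take(self, template):
        # Destructive: removes and returns a matching tuple.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

space = TupleSpace()
space.put(("task", 42))
print(space.take(("task", None)))  # -> ('task', 42)
```

The point of the illusion Gelernter describes is that nothing in this interface says where the tuples live: the same `put`/`take` calls could be backed by a swarm of cooperating clients rather than a central server.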
In order to make software and apps more intelligent, there is a vast amount of computation that needs to be done. Moving things into the Cloud will only get you linear improvements to computational scale. Meanwhile, there is a huge untapped potential sitting on the devices of the people interacting with the system. Why do Cloud Computing when you can do Crowd Computing?
At my company we send raw social media messages down to the browser; the browser then performs natural language processing and semantic analysis, and finally runs our StreamSense algorithms to discover trends in the stream. Once that’s done, the browser submits the results back to our servers, where the analysis is stored in our central analytics index. All of this happens seamlessly for the user, of course.
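The round trip described above can be sketched as a three-step loop: the server batches raw messages, a client analyzes a batch on its own CPU, and the server folds the partial result into a central index. This is only an illustrative sketch, not Bottlenose's actual pipeline; the function names are hypothetical, and a trivial keyword count stands in for the real NLP and StreamSense analysis:

```python
from collections import Counter

def server_make_work_units(messages, batch_size=2):
    """Server side: split the raw stream into batches for clients."""
    for i in range(0, len(messages), batch_size):
        yield messages[i:i + batch_size]

def client_analyze(batch):
    """Client side: stand-in for NLP/trend analysis -- here, just a
    keyword frequency count, computed on the client's own hardware."""
    counts = Counter()
    for message in batch:
        counts.update(word.lower().strip("#!.,") for word in message.split())
    return counts

def server_merge(index, partial):
    """Server side: fold a client's partial result into the central index."""
    index.update(partial)
    return index

messages = ["Crowd computing rocks", "crowd computing scales", "The Stream"]
index = Counter()
for batch in server_make_work_units(messages):
    index = server_merge(index, client_analyze(batch))
print(index.most_common(2))  # the emerging "trends"
```

Because the per-batch analysis is independent, the computation scales with the number of connected clients rather than with the size of the server farm, which is the whole point of Crowd Computing.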
The move to Crowd Computing is a very natural progression of the web, driven by the scarcity of bandwidth relative to the available computational and storage power. This global network of Apps and Cloud servers has formed the backbone of today’s global digital infrastructure. The natural next step towards Crowd Computing is an improved ability for Apps to utilize their underlying hardware, and increased connectivity between them (by means of P2P or through mediator nodes in the Cloud).
28. Metaphors have a profound effect on computing: the file-cabinet metaphor traps us in a “passive” instead of “active” view of information management that is fundamentally wrong for computers.
30. If you have three pet dogs, give them names. If you have 10,000 head of cattle, don’t bother. Nowadays the idea of giving a name to every file on your computer is ridiculous.
Not only is this true for the ‘file’ metaphor, it is even more true for the ‘page’ metaphor. Ultimately, all these metaphors are based on physical objects; the problem is that in the digital world these objects behave in the opposite way: physical objects decay when you touch them, while digital objects multiply when touched.
Some of today’s metaphors however do a much better job at reflecting the abundant and infinite nature of the web: Streams, Filters, Attention, Channels, Contexts, Connections, etc.
The Synaptic Web
41. You manage a lifestream using two basic controls, put and focus, which correspond roughly to acquiring a new memory and remembering an old one.
43. A substream (for example the “Fifth Avenue” substream) is like a conventional directory — except that it builds itself, automatically; it traps new documents as they arrive; one document can be in many substreams; and a substream has the same structure as the main stream — a past, present and future; steady flow.
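The put/focus model and the self-building substream Gelernter describes map naturally onto a tiny data structure: a single time-ordered stream plus stored predicates. This is my own illustrative sketch (the class and method names are hypothetical, not Gelernter's specification); a substream is just a live view, so one document can sit in many substreams and new documents are "trapped" automatically:

```python
import time

class Lifestream:
    """Hypothetical sketch of a lifestream: `put` adds a document to one
    time-ordered stream; `focus` creates a substream that builds itself."""

    def __init__(self):
        self._docs = []  # (timestamp, doc) pairs, oldest first

    def put(self, doc, timestamp=None):
        """Acquire a new 'memory'."""
        self._docs.append((timestamp or time.time(), doc))

    def focus(self, predicate):
        """Return a substream: a stored predicate, evaluated lazily.
        It is a view, not a copy, so documents put later are trapped too,
        and the substream keeps the main stream's time ordering."""
        return lambda: [d for t, d in self._docs if predicate(d)]

stream = Lifestream()
fifth_avenue = stream.focus(lambda d: "Fifth Avenue" in d)
stream.put("Lunch on Fifth Avenue")
stream.put("Quarterly report")
stream.put("Fifth Avenue parade photos")
print(fifth_avenue())  # both matching documents, in stream order
```

Note that the substream was created before any documents arrived, yet it still catches them, which is exactly the "it traps new documents as they arrive" behavior in point 43.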
The act of inserting a small piece of content into the stream – for example a tweet or a like – can be compared to a neuron firing in the brain. The signal is then broadcast to roughly 1,000 other neurons through synapses (connections). These neurons may then fire to their own connected neurons, and so on. This is similar to the way messages travel through social media networks. And of course, after your friend has posted LOLCAT number 300, you might decide to re-arrange some of your synaptic connections.
Querying the stream is like recalling a memory, although our ability to query the stream is quite primitive. Querying ‘videos of cats playing the piano’ is probably far from higher-level reasoning.
Neurons in the Human Brain (image credit: Riken Institute)
The human brain holds about 100 billion neurons, each with roughly 1,000 synapses. Each neuron can fire between 0 and 200 times a second, which makes the total potential throughput of the brain mind-boggling. Counting each synaptic event as one operation, the ‘clock speed’ of this massively parallel CPU would be about 20 million GHz, millions of times faster than the CPU in my laptop. This total capacity is however never utilized all at once (that would in fact kill you); instead there are periods of high-intensity bursts (in which neurons seem to behave according to mob mentality) and periods of low activity, for example during sleep.
The web is no longer about documents and pages. The Semantic Web as envisioned by many over the last two decades is inherently flawed. The web is not about knowledge, facts or data. The web is about people, activity and connections. This emerging Synaptic Web is the beginning of a planetary intelligence. The Synaptic Web is our collective stream of consciousness, although the global brain itself is far from conscious yet. It cannot reason or think yet. The global brain is still dormant, like a baby in a womb, slowly developing its neocortex and silently observing us…
Until it wakes up…