History

A brief overview of some of the aspects of the histories of information, words, computers, hypertext, the internet & the web and blogging

INFORMATION

This section traces information from the beginning through to how we have transformed it and given it new dimensions, first through language, then through writing, early computers, networked computers and beyond. It is by no means intended to be anything but an overview, highlighting what I feel are the salient periods.

the first information

You could say the first information was binary. Before anything else could be determined or described about the universe, before there was a potential for abstraction or even interaction, there was only a universe which could proclaim: "I am." Before the big bang, there was nothing, nothing in terms of our current universe anyway. Immediately as the big bang happened, there was something: Off. Then on. Without any further elaboration.
Since then the universe has blossomed from a soup of elementary particles to unimaginably many rich worlds with living creatures - us, endowed with the capability to invent new worlds at will in the dimension of our minds, culture and tools.
Since then information has grown in scale and complexity, much like the physical world which embodies and represents information has grown from physics to chemistry to biology and so on. The stages are not from reality to abstraction, but to different levels of reality depending on scale. So has information moved, from elementary interactions at quantum scales to the macroscopic scales we inhabit to the worlds our minds and imaginations give rise to.
Information and the physical world are often treated as mind and brain, where information is an abstraction of the physical world and exists in a world separate from the physical world it represents. It is the tired argument of a mind outside the brain, Descartes' dualism.
This way of looking at information I will dismiss simply as a matter of scale.
The very same principles of scale which separate physics from chemistry, and a single car from city road planning, are the ones which give rise to mind from matter and information from interactions.

information's natural state

Information's natural state used to be one of motion, of activity. Information is generated by interactions, information is interaction, as without comparison, without a context, without interaction, there is nothing.
There is no temperature with only hot. There is no darkness without light. In quantum physics a particle has no identity until it meets another and its superposed wave function collapses.
It is like that with all information. Information cannot exist without context.
Information is not an abstraction, it is nothing separate from the physical world as it is often discussed as being.
And for those snoozing through this paper thinking it is way too esoteric and over-analytical for its own good, it is important to point out that information has to be useful to exist. Not that it should be useful, but that it has to be useful. Useful in a specific context. If it is not, it either does not exist, or it becomes negatively useful. As in noise.
The first information was incidental. A cratered moon records the history of the impacts but it does so by accident, as a by-product of the events, not for the events.

life

Life appeared and information could be stored in useful form. Life is different from non-life in that it stores information about how to replicate something, how to replicate itself. Information now served a purpose, where before it had only existed as a record, an imprint. Not to say that these imprints, these records, didn't have any effect.
Life by itself is pretty much a steam train chugging along expanding where conditions are ripe and faltering where they are not. Simple life just goes on and on as best it can.

consciousness & language

Consciousness & language are hard to separate. Language is dynamic. "Language: A system of conventional spoken or written symbols by means of which human beings, as members of a social group and participants in its culture, communicate." (Britannica.com 2003) Writing is not dynamic, which is pretty much the whole point of writing, locking information down and counting on it staying that way.

writing

Along came one of the greatest inventions of mankind: writing, the freezing of information.
Sumerian bookkeepers needed a way to keep track of agricultural goods. They moved from tokens used for simple bookkeeping purposes to written tablets on which graphs of the script stand for morphemes of spoken Sumerian. The tokens are thought to date back to as early as 8000 BC, about the time that hunter-gatherer societies were giving way to an agricultural way of life.
That was all well and good for many uses, but many syllables were left out. The first known writing system consistently based on the sound structure of a language was Linear B, a Mycenaean Greek orthography developed around 1400 BC. Writing was becoming more like regular speech.
The final stage in the evolution of writing systems was the discovery of the alphabetic principle, the system of building the syllables from their consonantal and vowel sounds. According to the British linguist Geoffrey Sampson: "Most, and probably all, 'alphabetic' scripts derive from a single ancestor: the Semitic alphabet, created sometime in the 2nd millennium (BC)." (Britannica.com 2003) Even closer to speech.

But not quite there. Writing: "Form of human communication by means of a set of visible marks that are related, by convention, to some particular structural level of language. This definition highlights the fact that writing is in principle the representation of language rather than a direct representation of thought..."
Britannica.com (2003)

Since writing doesn't have the precision or accuracy of objective representation, or even a verbatim representation of human perspective, written language is not fully accurate even at the time of writing, and it only gets worse as time goes by and language moves on. The context slowly gets stretched away from the information until, at one point, it breaks down entirely and the writing can no longer be understood.
So information could be frozen, but not in a perfect, complete form, as that would require freezing and including all of the information's context, which would have to be all information everywhere; it would have to be everything, and if that were done the information would of course cease to exist.
Information could survive, with writing, in its incomplete form pretty much permanently, degraded only by its physical media and the outside world's ability to access it, altered through changes in language, customs and culture as well as physical accessibility. From the human perspective though, written information becomes, for most practical purposes, solid.

"In the Phaedrus, Plato argued that the new arrival of writing would revolutionise culture for the worse. He suggested that it would substitute reminiscence for thought and mechanical learning for the true dialect of the living quest for truth by discourse and conversation."
McLuhan, M. 1954, in McLuhan E. & Zingrone F. (1997 )

The only time it would thaw was when someone would read it and reintroduce it into the dynamic environment of their minds.

WORDS

the spoken word

"Isocrates was a great speech teacher who believed that it is language which separates us from animals. He believes that there are three essentials for learning, natural ability, training and practice. This is where it gets interesting, he maintained that "learning to speak properly was tantamount to learning to think properly" “
McLuhan, M, 1957., in McLuhan E. & Zingrone F. (1997 )

"The spoken word was the first technology by which man was able to let go of his environment in order to grasp it in a new way."
McLuhan, M. 1995, in McLuhan, E. & Zingrone, F. (1997)

the written word

"The alphabet was one thing when applied to clay or stone, and quite another when set down on light papyrus."
McLuhan, M. in McLuhan, E. & Zingrone, F. (1997)

the electronic word

"The new media are not bridges between man and nature; they are nature."
McLuhan, M. 1969, in McLuhan, E. & Zingrone, F. (1997)

"The news automatically becomes the real world for the TV user and is not a substitute for reality, but is itself an immediate reality."
McLuhan, M. 1978, in McLuhan, E. & Zingrone, F. (1997)

"Today we are beginning to notice that the new media are not just mechanical gimmicks for creating worlds of illusion, but new languages with new and unique powers of expression."
McLuhan, M. 1957, in McLuhan, E. & Zingrone, F. (1997)

"New media may at first appear as mere codes of transmission for older achievement and established patterns of thought. But nobody could make the mistake of supposing that phonetic writing merely made it possible for the Greeks to set down in visual order what they had though and known before writing. In the same way printing made literature possible. It did not merely encode literature."
McLuhan in 1960, in McLuhan E. & Zingrone F. (1997 )

I get the picture.

But wow, things are really moving along, as outlined in The Information Explosion. We are seeing the beginning of a snowball effect based on a technology which becomes 68 billion times more powerful in a single human lifetime: the microchip. And then it doubles again, only a year and a half later, according to Moore's law, powering an Internet which is due to become more extensive than the telephone network in 2002 if not earlier, doubling every 100 days (Interactive Week) and adding users quicker than the world's population is growing (7 new users a second, whereas the world's population increases by 3 people a second, according to The Herald Tribune 2000).

What kind of world is developing here?

COMPUTERS

This thawing process changed again when the frozen information could be manipulated in chunks, with the advent of computers. Information got translated from the smooth, continuous, analogue world into chunky bits (binary digits, on/off). Complicated language with complicated meanings, formerly stored and expressed in writing and speech, also got chunked, with nothing more than zeros and ones to embody it.
But these lifeless bits were machine-manipulable, and this is where the magic lies.
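To make this "chunking" concrete, here is a minimal sketch in Python (my own illustration, not part of the historical record): a short word reduced to nothing but zeros and ones via its ASCII byte values.

    # A tiny sketch of "chunking": a word reduced to nothing but zeros and
    # ones, here via its ASCII byte values. The word chosen is arbitrary.
    text = "liquid"
    bits = " ".join(format(byte, "08b") for byte in text.encode("ascii"))
    print(bits)  # 01101100 01101001 01110001 01110101 01101001 01100100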
With George Boole's "algebra of logic", 'boolean' logic, all mathematics could be reduced to, and expressed in terms of, sets with the notation x and y.
The concept Boole used to connect the two heretofore different thinking tools of logic and calculation was the idea of a mathematical system in which there were only two quantities, which he called "the Universe" and "Nothing" and denoted by the signs 1 and 0. Although he didn't know it at the time, Boole had invented a two-state (binary) system for quantifying logic that also happened to be a perfect method for analysing the logic of two-state physical devices like electrical relays or vacuum tubes.
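As a rough illustration of that two-valued algebra (a sketch of the idea only, not Boole's own notation), the basic operations can be written out in a few lines of Python:

    # Boole's two quantities: 1 for "the Universe", 0 for "Nothing".
    # AND behaves like multiplication, OR like addition capped at 1.
    def AND(x, y): return x * y
    def OR(x, y): return min(x + y, 1)
    def NOT(x): return 1 - x

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, "AND:", AND(x, y), "OR:", OR(x, y), "NOT x:", NOT(x))

The same table describes any two-state device, which is exactly why relays and vacuum tubes could later embody it.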
The century ticked over from the 19th to the 20th.
And along came Alan Turing who, at the age of twenty-four, when confronted with the problem of formally stating what is computable (Hilbert's Entscheidungsproblem), created the theoretical basis for computation. He invented the concept of the general computer, also called the Turing Machine, which worked on a single continuous stream of binary, on/off, pieces of data. The machine would only be aware of one symbol at a time as it entered, the machine's current state, and a set of rules or algorithms which had previously been supplied to it.
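A minimal sketch of that idea in Python may help: a tape of symbols, a current state and a table of rules. The rule table below (flip each bit until a blank is reached) is invented purely for illustration; it is not one of Turing's own examples.

    # A toy Turing machine: read the symbol under the head, look up
    # (state, symbol) in the rule table, write, move and change state.
    def run(tape, rules, state="start", blank=" "):
        pos = 0
        while state != "halt":
            symbol = tape[pos] if pos < len(tape) else blank
            write, move, state = rules[(state, symbol)]
            if pos < len(tape):
                tape[pos] = write
            else:
                tape.append(write)
            pos += 1 if move == "R" else -1
        return tape

    # Rules: in state "start", flip each bit and move right; halt on blank.
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", " "): (" ", "R", "halt"),
    }
    print(run(list("0110"), rules))  # ['1', '0', '0', '1', ' ']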
Assembly line manipulation became possible. Databases could be organised and reorganised, yielding new information in their relationships at every turn. Programs and procedures could be devised with the confidence that the machines would tirelessly follow them to the letter. Impossibly boring manipulations with potentially exciting results became first promised, then practical, then routine. The rush of building the logical machines had started. First big powerhouses, then 'personal computers' and now networked computers and other digital devices which compute and communicate with computers.
The computer had been set on a trajectory into the future, becoming steadily more capable and powerful. The issue started to become: what could we get out of this? How could they help us learn, communicate and make decisions? How could they augment our intellect? Enter Douglas Engelbart who, as a radar operator during World War II, had stared at a screen or two. He promptly set out to invent the mouse, windows, hypermedia (hypertext) and most of the rest of the human-computer interfaces we still use today, putting more of the man in the machine and more of the power of the machine into the man. First demoed in 1968. Still today we see only a trickle of what he invented.
The primitive PCs of the '80s and early '90s empowered the individual. They enabled us to do, well, more. More of what was previously segregated into specialist fields. We could publish magazines from home! We could solve amazing equations, do our own complex financial planning! Design like there was no tomorrow. Write print-quality letters. We became, in effect, our own secretaries.

"As technology advances, it reverses the characteristics of every situation again and again. The age of automation is going to be the age of "do it yourself"."
McLuhan, M. 1957, in McLuhan, E. & Zingrone, F. (1997)


Computation

Speed matters and computers double in processing capacity every 18 months. That's Moore's Law, named after Gordon Moore (cofounder of Intel in 1968):

"In 1965, Gordon Moore was preparing a speech and made a memorable observation. When he started to graph data about the growth in memory chip performance, he realised there was a striking trend. Each new chip contained roughly twice as much capacity as its predecessor, and each chip was released within 18-24 months of the previous chip. If this trend continued, he reasoned, computing power would rise exponentially over relatively brief periods of time. Moore's observation, now known as Moore's Law, described a trend that has continued and is still remarkably accurate. It is the basis for many planners' performance forecasts. In 26 years the number of transistors on a chip has increased more than 3,200 times, from 2,300 on the 4004 in 1971 to 7.5 million on the Pentium II processor."
www.intel.com/intel/museum/25anniv/hof/moore.htm (2000)

Ray Kurzweil takes it further, stating that it has been this way for a lot longer than we first thought: the trend did not start when it was noticed, in the sixties.
To understand the implications of this exponential growth, let's go back to 1984, the birth of the Macintosh, a year dear to me, a year computers, and us, got liberated from the text-based interface. Let's say we put one dollar in the bank back then. And let's say this bank gave interest in line with the speed of computer evolution. We are in 1999 (at time of writing anyway), that was 1984, so that's 15 years, right? OK, that's seven and a half times our dollar has doubled in value. It started at $1 and in 24 months it was worth $2, then 4, 8, 16, 32, 64, 128, finally to be worth $256 at next count ($192 now).
So guess what kind of money we are looking at for the next couple of years? Current new machines costing a thousand US dollars or so have the processing power of an insect brain. In ten years we will be able to spend the same amount of money and get the processing power of a mouse brain (with a bank balance then at 8,192 dollars). We will be able to buy the equivalent processing power of a human brain in 2023 (with over half a million in the bank). In 2060 we will be able to get a machine with the processing power of all human brains. Our bank balance would be 137,216 million dollars at that point! When will it end? Every decade someone predicts the slowdown of Moore's Law, but so far it's just kept on going.
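The arithmetic behind the bank-account analogy is easy to check. A small Python sketch, assuming the balance doubles every 24 months starting from one dollar in 1984:

    # Print the balance at each doubling; by 2010 it passes $8,192 and by
    # the early 2020s it passes half a million, matching the figures above.
    balance, year = 1.0, 1984
    while year <= 2023:
        print(year, f"${balance:,.0f}")
        balance *= 2
        year += 2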
This is important as we cannot afford to look at the last decades as a fluke and think that the evolution of computers has stabilised or that it soon will.

Seth Lloyd, a physicist based at MIT, has studied how far Moore's Law has left to go within the limits of our current understanding of science:

"People have been claiming the law is about to break down every decade since it was formulated," says Seth Lloyd. "But they've all been wrong. I thought, let's see where Moore's law has to stop and can go no further." He wasn't interested in the workings of the computer, just whether it would be theoretically possible. He realised that the speed of a computer depends on the energy available. "The argument for this is rather subtle. A computer performs a logical operation by flipping a '0' to a '1' or vice versa. But there is a limit to how fast this can be done because of the need to change a physical state representing a '0' to a state representing a '1'."
Size matters too; small is definitely better: "In the quantum world any object, including a computer, is simply a packet of waves of various frequencies all superimposed. Frequency is linked to energy by Planck's constant, so if the wave packet has a wide range of energies, it is made up of a large range of different frequencies. As these waves interfere with one another, the overall amplitude can change very fast. On the other hand, a small energy spread means a narrow range of frequencies, and much slower change in state." ... "In 1998, Norman Margolus and Lev Levitin of MIT calculated that the minimum time for a bit to flip is Planck's constant divided by four times the energy." "Lloyd had built on Margolus's work by considering a hypothetical 1-kilogram laptop. Then the maximum energy available is a quantity famously given by the formula E=mc^2. If this mass-energy were turned into a form such as radiant energy, you'd have 10^17 joules in photons, says Lloyd."
As for memory, the article jumps straight to Boltzmann's constant. "What limits memory? The short answer is entropy." "Entropy is intimately connected to information, because information needs disorder: a smooth, ordered system has almost no information content." "Entropy is linked to the number of distinguishable states a system can have by the equation inscribed on Boltzmann's headstone, S = k ln W. Entropy (S) is the natural logarithm of the number of states (W) multiplied by Boltzmann's constant (k). Equally, to store a lot of information you need a lot of distinguishable states." "To register one bit of information you need two states, one representing 'on,' the other 'off.' Similarly, 2 bits require 4 states and so on. In short, both the entropy of a system and its information content are proportional to the logarithm of the number of states."
(New Scientist 2000)

So we have a very fast, very dense computer with a lot of energy. About a billion degrees hot. Estimated time to build the ultimate computer within the realm of our current understanding of the laws of physics? Moore's Law has about 200 years left, according to our current understanding of science.
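The headline figure behind that ultimate laptop can be reproduced in a few lines. This is only a back-of-the-envelope check of the numbers quoted from the article, assuming a 1-kilogram computer with all of its mass-energy available for computation:

    # Minimum time to flip a bit is h / (4E), so the maximum rate is 4E / h.
    h = 6.626e-34            # Planck's constant, joule-seconds
    c = 2.998e8              # speed of light, metres per second
    mass = 1.0               # kilograms

    energy = mass * c ** 2   # E = mc^2, about 9e16 J (the article rounds to 10^17)
    ops_per_second = 4 * energy / h

    print(f"energy: {energy:.2e} J")
    print(f"maximum operations per second: {ops_per_second:.2e}")  # roughly 5e50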

HYPERTEXT

A few important dates in the history of electronic hypertext:

• 1945 - Vannevar Bush proposes the Memex in 'As We May Think', as well as alluding to it earlier.
• 1965 - Ted Nelson coins the word "hypertext".
• 1967 - Andy van Dam works on The Hypertext Editing System and FRESS, at Brown University.
• 1968 - Doug Engelbart demo of NLS system - the first demonstrated, working hypertext system. And I was born, two months after the demo.
• 1978 - Aspen Movie Map premieres, the first hypermedia videodisk, built by Andy Lippman at the MIT Architecture Machine Group (now Media Lab).
• 1984 - Apple introduces the Mac and all is well and elegant. A new category of users can have access to the power of computers. Which was great, but ease-of-use took precedence over 'augmentation', which provided us with easy buttons to click in Microsoft 'Word', but not much power to manipulate our work in a more fluid and dynamic manner, as we could with Doug Engelbart's NLS/Augment for example. But I digress.
• 1987 - Apple introduces 'HyperCard', written by Bill Atkinson. Hypertext for the masses. A simple and incomplete implementation, but hugely successful.

Back to the beginning:

memex

Vannevar Bush wrote of a hypothetical information access device. He was the chief scientist in the US during the second world war and had thus been in charge of organizing thousands of scientists towards goals to end the war, including the Manhattan Project.

In 1945 he published an article titled 'As We May Think', in the Atlantic Monthly (Bush 1945), where he first asks: "What are the scientists to do next?" He goes on to discuss how the record of human knowledge is increasing rapidly, however; "Thus far we seem to be worse off than before - for we can enormously extend the record; yet even in its present bulk we can hardly consult it."

Are better libraries the answer? No, he doesn't think it's that simple: "The real heart of the matter of selection, however, goes deeper than a lag in the adoption of mechanisms by libraries, or a lack of development of devices for their use. Our ineptitude in getting at the record is largely caused by the artificiality of systems of indexing." Indexing he feels, is not a natural human way to refer to information; "When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path."

The human mind by contrast; "does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain."

Does he stop there, just complaining? No, Bush offers a way forward: "Selection by association, rather than by indexing, may yet be mechanized. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage. Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, "memex" will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory."

He describes what it would look like: "It consists of a desk, and while it can presumably be operated from a distance, it is primarily the piece of furniture at which he works. On the top are slanting translucent screens, on which material can be projected for convenient reading. There is a keyboard, and sets of buttons and levers. Otherwise it looks like an ordinary desk."

As for how it would function, it would use microfilm: "In one end is the stored material. The matter of bulk is well taken care of by improved microfilm. Only a small part of the interior of the memex is devoted to storage, the rest to mechanism. Yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so he can be profligate and enter material freely." Bush continues:

Most of the memex contents are purchased on microfilm ready for insertion. Books of all sorts, pictures, current periodicals, newspapers, are thus obtained and dropped into place. Business correspondence takes the same path. And there is provision for direct entry. On the top of the memex is a transparent plate. On this are placed longhand notes, photographs, memoranda, all sort of things. When one is in place, the depression of a lever causes it to be photographed onto the next blank space in a section of the memex film, dry photography being employed.

Ceding some usefulness to indexing for some functions, Bush provides for it and describes how the memex is to be operated: "There is, of course, provision for consultation of the record by the usual scheme of indexing. If the user wishes to consult a certain book, he taps its code on the keyboard, and the title page of the book promptly appears before him, projected onto one of his viewing positions. Frequently-used codes are mnemonic, so that he seldom consults his code book; but when he does, a single tap of a key projects it for his use. Moreover, he has supplemental levers. On deflecting one of these levers to the right he runs through the book before him, each page in turn being projected at a speed which just allows a recognizing glance at each. If he deflects it further to the right, he steps through the book 10 pages at a time; still further at 100 pages at a time. Deflection to the left gives him the same control backwards."

A special button transfers him immediately to the first page of the index. Any given book of his library can thus be called up and consulted with far greater facility than if it were taken from a shelf. As he has several projection positions, he can leave one item in position while he calls up another. He can add marginal notes and comments, taking advantage of one possible type of dry photography, and it could even be arranged so that he can do this by a stylus scheme, such as is now employed in the telautograph seen in railroad waiting rooms, just as though he had the physical page before him.

A central theme of the memex is the building of trails (links) and of being able to share them, annotate them and refer back to them: "All this is conventional, except for the projection forward of present-day mechanisms and gadgetry. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing."

When the user is building a trail, he names it, inserts the name in his code book, and taps it out on his keyboard. Before him are the two items to be joined, projected onto adjacent viewing positions. At the bottom of each there are a number of blank code spaces, and a pointer is set to indicate one of these on each item. The user taps a single key, and the items are permanently joined. In each code space appears the code word. Out of view, but also in the code space, is inserted a set of dots for photocell viewing; and on each item these dots by their positions designate the index number of the other item.

Thereafter, at any time, when one of these items is in view, the other can be instantly recalled merely by tapping a button below the corresponding code space. Moreover, when numerous items have been thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting a lever like that used for turning the pages of a book. It is exactly as though the physical items had been gathered together to form a new book. It is more than this, for any item can be joined into numerous trails.


ted nelson

Hi Frode--
The computer is merely the way we carry it out. Could do with holograms, file cards on rubber bands, etc. My original definition was "non-sequential writing," but that included programmed instruction, where users had no explicit choice, so I added "free user movement." That also rules out probabilistic and random stuff, which isn't writing.
Allbest,T
Nelson, T. email (29/09/03)

In 1960, in my second year of graduate school (studying sociology), I had a chance to take a course called 'Computers for the Social Sciences.' It was a good course, which I found thrilling, and as soon as I found out what computers really were-- All-Purpose Machines, as von Neumann had called them (but the press did not catch on to that term), I desperately wanted one (although no individual in the world owned a computer at that time).
The explosive moment came when I saw that you could hook graphical displays to computers. At once-- over a few weeks-- I saw that this would be the future of humanity: working at screens, able to read and write and publish from ever-expanding electronic repositories. This meant whole new forms of writing, extending documents far beyond the walls of paper.
Nelson, T. (2002)

doug engelbart

Doug Engelbart's original system Augment (earlier referred to as NLS) takes advantage of two important principles of interactivity referred to as 'Jumping' and 'Addressing':

The jump command lets you tell the computer where you want to move to. A jump can be: 'jump up a level', just like you would do when you go up a folder on your computer. It can be 'jump to glossary entry for this word' or it can be 'jump to the first phone number in the document'. Or 'jump to this prewritten address' (like we do when we click on a web link today). You can jump to the top of a document. You can jump to the first occurrence of an indicated acronym (which is useful as acronyms are often only explained when they first appear). In other words you are not restricted to following predefined links. And you are not restricted to only linking to the documents themselves; you can link to anything you like, at whatever level of detail you like.

Addressing is about what you can refer to when issuing the jump command. You can refer to a document, a paragraph, a word, a URL and so on. Augment featured high resolution addressing: the ability to point to a paragraph was built in, not something the author had to tack on manually like we have to do with HTML today.

Combine jumping with high resolution addressability, the ability to address paragraphs and words and anything you like, and it all gets pretty powerful. Augment can point to any arbitrary object within the document, not just to the whole document, making discussions and references so much easier. Any object, whether on a page or not, can be linked to. High resolution referencing is designed for easy retrieval of anything anyone might want to reference or comment on. I can email you and point to a sentence in a document I want to comment on, not just the whole document.

There is also implicit linking. Every word is implicitly linked to its definition in a dictionary (for example); every special term is implicitly linked to its definition in that discipline's glossary. Doug has plenty of these kinds of links; this was just a taster.
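To make the jumping and addressing ideas concrete, here is a toy sketch in Python. It is purely illustrative, using an invented document and invented address forms; it does not reproduce Augment's actual command syntax.

    # High resolution addressing: a link can target a paragraph, or the first
    # paragraph containing a given term, not just the document as a whole.
    document = {
        "title": "Augmenting Human Intellect",
        "paragraphs": [
            "Paragraph one introduces the framework ...",
            "Paragraph two mentions NLS for the first time ...",
            "Paragraph three goes into detail ...",
        ],
    }

    def jump(doc, address):
        """Resolve an address like ('paragraph', 2) or ('first', 'NLS')."""
        kind, target = address
        if kind == "paragraph":
            return doc["paragraphs"][target - 1]
        if kind == "first":
            for paragraph in doc["paragraphs"]:
                if target in paragraph:
                    return paragraph
        return None

    print(jump(document, ("paragraph", 2)))
    print(jump(document, ("first", "NLS")))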

THE INTERNET & THE WEB

An abridged and arbitrary history of connected computers, with focus on how we got the Internet we have today connecting computers, connecting people.

• 1940-45 - Claude Elwood Shannon lays the theoretical foundations for digital circuits and information theory.*
• 1960-65 - Donald Davies at the National Physical Laboratory (NPL) in Middlesex, England and Paul Baran at the RAND Corporation independently invent packet-switching.
• 1957 - The USSR launched the first Sputnik satellite into orbit around the earth. In response the US Department of Defense formed the Advanced Research Projects Agency (ARPA).
• 1961 - Leonard Kleinrock publishes his theories on small packet-switching, the underlying theoretical foundations for a distributed, digital network.
• 1962 - J.C.R. Licklider of MIT writes a series of memos in August discussing his "Galactic Network" concept.
• 1965 - Paul Baran writes a paper on "Distributed Communications on Networks".
• 1965 - ARPA funds a fact finding paper on "Network of time-sharing computers".
• 1967 - ARPA Principal Investigators semi-annual meeting at the University of Michigan, where networking features heavily on the agenda.
• 1968 - After Roberts and the DARPA-funded community had refined the overall structure and specifications for the ARPANET, an RFQ (Request for Quotation) was released by DARPA. The contract to formulate and build the network hardware and programmes was won by BBN.
• 1969 - The University of California Los Angeles went live, connecting to Doug Engelbart's group at SRI in Menlo Park, California. The first life pulses through the ARPANET, the precursor of the Internet. Doug doesn't even remember exactly when it was; it was seen as pretty routine at the time, a new piece of networking equipment, new capabilities, exciting, but not more so than the rest of the work they were doing.
• 1972 - The Inter Networking Working Group was formed to help establish protocols.
• 1974 - Vint Cerf and Bob Kahn publish a paper outlining TCP (Transmission Control Program) allowing for an Internet; a network of networks.
• 1989 - Tim Berners-Lee proposes HTTP (HyperText Transfer Protocol), the protocol that will become the basis of the World Wide Web. Hypertext on connected computers for the masses.
• 1990 - ARPANET name ceases to exist.
• 1990 - The first Web browser is developed by Tim Berners-Lee at CERN in Switzerland.
• 1993 - Mosaic, the revolutionary graphical web browser, appears, written by Marc Andreessen, later to become Netscape, copied and eventually eclipsed by Microsoft's Internet Explorer. Hypertext with a clean-shaven face. The original browser, developed by Tim Berners-Lee, allowed the end user to create and edit web pages. Marc Andreessen's browser allowed for easier access and navigation, but no creation. This was a major shift.
• 1993 - Hypermedia encyclopaedias sell more copies than print encyclopaedias.
• 1995 - Netscape gains market value of almost $3B on first day of stock.
• 1998 - AOL buys Netscape for $4B.
• 1999 - There are 3.6 million sites. (David Lake at The Standard)
• 2000 - The .com bubble burst and reality sets in.
• Today - Ted Nelson’s ZigZag system works as a prototype.
• Today - AOL Time Warner is dropping the AOL part of its name. Hypertext, networked computers: it has become a part of our daily lives. The special titles to refer to 'online' (as in AOL: America Online) are no longer needed, no longer sexy, just as 'horseless' was dropped from the term 'horseless carriage' the previous time the centuries ticked over.

* A few more notes on Shannon's work are pertinent. While at MIT he worked with Vannevar Bush, helping to set up differential equations on Bush's differential analyser. His master's thesis, "A Symbolic Analysis of Relay and Switching Circuits" (1940), used Boolean algebra to establish the theoretical underpinnings of digital circuits. Because digital circuits are fundamental to the operation of modern computers and telecommunications equipment, this thesis has been called one of the most significant master's theses ever. An important step taken by Shannon was to separate the technical problem of delivering a message from the problem of understanding what a message means. This step permitted engineers to focus on the message delivery system. Shannon concentrated on two key questions in his 1948 paper "A Mathematical Theory of Communication": determining the most efficient encoding of a message using a given alphabet in a noiseless environment, and understanding what additional steps need to be taken in the presence of noise. He solved these problems successfully for a very abstract (hence widely applicable) model of a communications system that includes both discrete (digital) and continuous (analogue) systems. In particular, he developed a measure of the efficiency of a communications system, called the entropy (analogous to the thermodynamic concept of entropy, which measures the amount of disorder in physical systems), that is computed on the basis of the statistical properties of the message source. (Some references from Encyclopaedia Britannica.com)
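As a small illustration of Shannon's entropy measure mentioned above (a sketch using the standard formula, not anything specific to his paper), the average information per symbol of a source can be computed directly from the symbol probabilities:

    import math

    # Shannon entropy in bits per symbol: H = -sum(p * log2(p)).
    def entropy(probabilities):
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy([0.5, 0.5]))   # a fair coin source: 1 bit per symbol
    print(entropy([0.9, 0.1]))   # a biased source carries less information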
Bandwidth growth. Modems are beginning to be replaced by fast, cheap and always-on Internet connections such as DSL, cable modem and satellite. As of the close of 2000, 3.1 million people are connected to the Internet by DSL in the US. As McLuhan noted, speed changes things: speeding up still images makes movies. Fast, cheap and always-on Internet connections change the medium in a similarly profound way. No longer just a research and communications tool, the Internet becomes an active assistant and notifier, the conveyor of streamed interactive entertainment and, as Sun has been preaching for years, the network becomes the computer.
Everybody-to-everybody communication. When it is as cheap to communicate with someone on the other side of the world as with someone in the same building, and when it costs as much to send one message as 500, the nature of communication and information changes.
Everybody becomes an interactive publisher. When any individual with an Internet connection can publish on the World Wide Web and have the published site available to anyone else with an Internet connection, everyone becomes a publisher and broadcaster. But being found by someone actively looking is not the same as publicising or marketing the information. The multipoint nature of the communication is between the information and the access points, not between the people.


resulting in: the information explosion

Vannevar Bush raised the alarm in The Atlantic Monthly way back in 1945: "Thus far we seem to be worse off than before - for we can enormously extend the record, yet even in its present bulk we can hardly consult it." The problem is that we are living in an increasingly complex world. There is more information we have to deal with, and it is getting worse: the rate of increase of the information we have to deal with is itself increasing rapidly, exponentially.

A 2001 edition laptop computer (such as the Apple Macintosh PowerBook Titanium G4) is capable of completing a calculation faster than light takes to travel from its monitor to your eyeballs. Remember when digital desk calculators seemed impressive? We are seeing the beginning of a snowball effect based on a technology which becomes 274,432,000,000 (274 billion!) times more powerful in a single human lifetime (which is 77.5 years in the US as of 2002): the microchip. And then it doubles again, only a year and a half later, according to Moore's law.
Let's put the speed increase in the perspective of one man's career. In 1951 Doug Engelbart had his great epiphany of using computers to augment our capabilities. In 1968 he demonstrated the mouse, windows, hypertext, teleconferencing and most of the other ways we interact with information. From the time Doug started on his quest, in 1950/1, to the time of the demo in 1968, 18 years, computers were already over 4,000 times faster, having doubled in processing capacity 12 times. Today, computers are over 67 million times faster than they were when Doug had his epiphany. In another year and a half, they'll be 140 million times faster. And so on.

"The movie, by sheer speeding up of the mechanical, carried us from the world of sequence and connections into the world of creative configurations and structure."
McLuhan, M. in McLuhan, E. & Zingrone, F. (1997)

More information has been produced since Doug's demo than in the previous 5,000 years. About 1,000 books are published internationally every day, and the total of all printed knowledge doubles every eight years, according to Peter Large in Information Anxiety. "The world produces between 1 and 2 exabytes of unique information per year, which is roughly 250 megabytes for every man, woman, and child on earth. An exabyte is a billion gigabytes. Printed documents of all kinds comprise only .003% of the total. Digital storage is by far the largest medium for storing information and is the most rapidly growing, with shipped hard drive capacity doubling every year. Magnetic storage is rapidly becoming the universal medium for information storage" (Berkeley).
By the turn of the century, there were 569 million e-mail accounts worldwide (Messaging Online 2000). At least 40 percent of Americans were already then using email. Internet traffic is doubling every 100 days (Interactive Week). Dealing with it is taking its toll: stress costs US industry $200-300 billion annually (Aaron Fischer, "Is your career killing you?", Data Communications 1998). The US National Mental Health Association reports that 75%-90% of all visits to physicians are stress related. Will the solution simply be a great new technology? The workers don't seem to think so: in the UK, 40% want training to deal with the messages (Mitel 2000).
Do you really feel a couple of million times better equipped to deal with the information?
Does it matter?
The promise of computers augmenting our capability to deal with the world's problems has simply not been realised. Instead of a new world of dynamic assistance and powerful tools, we are being overwhelmed more than helped by the very systems which promise to free us.
We need to ramp up our capability to use this computation more efficiently to deal with the huge volume of data. We cannot naively think we can set our future on automatic.
The NSA, America's National Security Agency, listens to our phone calls and monitors our emails. They have amazing information, but they couldn't sift through it intelligently enough, they couldn't communicate internally and externally well enough, to prevent the 9/11 attacks. If the NSA were considered a corporation in terms of dollars spent, floor space occupied, and personnel employed, it would rank in the top 10 percent of the Fortune 500 companies. They still messed up.
You can have search algorithms, like Google uses. But you have to know what to look for. You can access information, but can you check its validity, its timeliness, its relevance? You can develop complex, advanced Artificial Intelligence (AI) routines to simulate some of your own thinking.
It's easy to add to the volume of data. It's hard to listen. It's hard to get the right 'stuff' in our heads. It's hard to know what the right 'stuff' is and what it relates to. If we only fill each other's hard drives, but don't connect to each other's minds and work collectively (as in, truly connect, not sending reports which are not read back and forth - and collectively in different groups, of different kinds of people and specialities) to solve these problems, we are well and truly screwed. This is still the crucial issue facing us.

“By "augmenting human intellect" we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.
This is important, simply because man's problem-solving capability represents possibly the most important resource possessed by a society. The other contenders for first importance are all critically dependent for their development and use upon this resource.”
Engelbart, D. (1962)

Trying to stay afloat is not the answer, you need to start swimming in the information.

the next step, liquid information: information has changed.

Originally, information could only exist through direct interaction. Later it could be frozen, or stored, through writing. Then it could be thawed in chunks, processed by early computers. With even a regular desktop computer being capable of 3 billion calculations a second and storing tens of billions of bits, 'chunks' hardly seems the correct term anymore.
When the information then gets melted, when it gets digitised, it doesn't revert to its earlier state. It becomes liquid: it doesn't behave in any previously inherent ways, it gathers a new, relative identity.
Previously its identity was in relation to everything else in the physical world. A book is a book.
In cyberspace there are new relationships, relationships with less physical constraints. Depending on the forces it interacts with, it can go anywhere, be processed in any way and change into anything: With computers, you can make music from a picture, paint a picture from sound. You can treat any information in any way to interact with any other information.
Digital information has characteristics far beyond its name, the ones and zeros, just as you have characteristics beyond your atoms, even your genes.

history of literature relating to blogging

Blogging is a pretty recent phenomenon, but much has been written about it, online and off, by Rebecca Blood and Ben Hammersley, amongst others. Much of the knowledge of blogs is of course in the blogs themselves, especially in the blogs of pioneers like Dave Winer and in the first celebrity blog, that of the Baghdad blogger. Conversations with those in the blogging community, especially David Mery of Symbian, have been more useful than many traditional books. Wider histories of electronic literature have come from discussions with Doug Engelbart and Ted Nelson, continually widening the perspective of what blogs can be.

BLOGGING

The writing of journals has been around for almost as long as there has been writing.

journals

The writing of journals has a long tradition, from the ponderous musings of so many modern blogs, back in time to "Do the heaven and earth then contain Thee, since Thou fillest them? or dost Thou fill them and yet overflow, since they do not contain Thee? And whither, when the heaven and the earth are filled, pourest Thou forth the remainder of Thyself?" (http://ccat.sas.upenn.edu/jod/augustine/Pusey/book01 2003), written by Augustine about 1,500 years ago, to Geoffrey Chaucer's Canterbury Tales: "Of England they to Canterbury wend, The holy blessed martyr there to seek, Who helped them when they lay so ill and weak. Befell that, in that season, on a day, In Southwark, at the Tabard, as I lay, Ready to start upon my pilgrimage, To Canterbury, full of devout homage." (www.canterburytales.org/canterbury_tales.html 2003)

autobiographical cartoons

Harvey Pekar's 'American Splendour' is a self-published, autobiographical comic. For sixteen issues, from 1976 to 1991, Harvey Pekar "was documenting the trials, tribulations and trivia of being a filing clerk and part-time journalist in Cleveland." (Plowright, F. 2003)
This is of interest to the HyperBlog as it is completely illustrated, whereas the current version of the HyperBlog is text only with the ability to add thumbnail images.
Another issue which comes screaming out from the movie version of 'American Splendour' is the question of who would be interested in reading about 'true, daily life', where by definition nothing happens. Harvey Pekar says in the movie trailer: "Ordinary life's pretty complex stuff." (americansplendormovie.com) So does it work? Roger Ebert of the Chicago Sun-Times gives the movie 4 stars (suntimes.com/output/ebert1/wkp-news-american22f.html 2003).

rss

But then the web came along, and with it the notion of a 'home-page', which has never really been properly defined. Is it the user's page, with links and such to go places? Is it the main page of a site? Only little gnomes inside the net know, and they're too busy writing 404 error messages. Netscape, however, took it to mean a little of both. But that only came about after a slightly convoluted route of meta-information, and dreams of a more dynamic web. Ben Hammersley starts the story in "Content Syndication with RSS":

“The deepest, darkest origins of the current versions of RSS began in 1995 with the work of Ramanathan V. Guha. Known to most simply by his surname, Guha developed a system called the Meta Content Framework (MCF). Rooted in the work of knowledge-representation systems such as CycL, KRL and KIF, MCF’s aim was to describe objects, their attributes, and the relationships between them.
MCF was an experimental research project funded by Apple, so it was pleasing for management that a great application came out of it: ProjectX, later renamed HotSauce. By late 1996, a few-hundred sites were creating MCF files that described themselves, and HotSauce allowed users to browse around these MCF representations in 3D.
It was popular, but experimental, and when Steve Jobs’ return to Apple’s management in 1997 heralded the end of much of Apple’s research activity, Guha left for Netscape.
There he met Tim Bray, one of the original XML pioneers, and started moving MCF over to an XML-based format. (XML itself was new at the time.) This project later became the Resource Description Framework (RDF). RDF is, as the World Wide Web Consortium (W3C) RDF Primer says, "a general-purpose language for representing information on the World Wide Web." It is specifically designed for the representation of metadata and the relationships between things. In its fullest form, it is the basis for the concept known as the Semantic Web, the W3C's version of a web of information that computers can understand.
This was in 1997, remember. XML, as a standard way to create data formats, was still in its infancy, and much of the Internet’s attention was taken up by the increasingly frantic war between Microsoft and Netscape.
Microsoft had not ignored the HotSauce experience. With others, principally a company called Pointcast, they further developed MCF for the description of web sites and created the Channel Definition Format (CDF).
CDF is XML-based and can describe content ratings, scheduling, logos, and metadata about a site. It was introduced in Microsoft’s Internet Explorer 4.0 and later into the Windows desktop itself, where it provided the backbone for what was then called Active Desktop.
By 1999, MCF was well and truly steeped in XML and becoming RDF, and the Microsoft/Netscape bickering was about to start again. Both companies were due to launch new versions of their browsers, and Netscape was being circled for a possible take-over by AOL.
So, Netscape’s move was to launch a portal service, called “My Netscape Network”, and with it RSS.
Short for RDF Site Summary, RSS allowed the portal to display headlines and URLs from other sites, all within the same page. A user could personalise their My Netscape page to contain the headlines from any site that interested them and had an RSS file available. It was basically, a web page-based version of everything HotSauce and CDF had become. It was a great success.”
Hammersley, B. (2003)

I have included a post by Dan Libby (who represents the next step in the history, finally, at Netscape) on the early history to illustrate how new this all is, how it’s changed and how we still have an exciting and open future ahead of us. It also has a few cautionary design tales:

“The original My Netscape Network Vision:
We would create a platform and an RDF vocabulary for syndicating metadata about websites and aggregating them on My Netscape and ultimately in the web browser. Because we only retrieved metadata, the website authors would still receive users' click-throughs to view the full site, thus benefiting both the aggregator and the publisher. My Netscape would run an RDF database that stored all the content. Preferences, akin to mail filters, would allow the user to filter only the data in which they are interested onto the page, from the entire pool of data." ... "Tools would be made available to simplify the process of creating these files, and to validate them, and life would be good.
What Actually Happened: 1) A decision was made that for the first implementation, we did not actually need a "real" RDF database, which did not even really exist at the time. Instead we could put the data in our existing store, and instead display data, one "channel" at a time. This made publishers happier anyway, because they would get their own window and logo. We could always do the "full" implementation later.
2) The original RDF/RSS spec was deemed "too complex" for the ‘average user’.
3) We shipped the first implementation, sans tools. Basically, there was a spec for RSS 0.9, some samples, and a web-based validation tool. No further support was given for a while, and I was kept busy working on other projects.
4) At some point, it was decided that we needed to rev the RSS spec to allow things like per item descriptions, i18n support, ratings, and image widths and height. Due to artificial (in my view) time constraints, it was again decided to continue with the current storage solution, and I realised that we were *never* going to get around to the rest of the project as originally conceived. At the time, the primary users of RSS (Dave Winer the most vocal among them) were asking why it needed to be so complex and why it didn't have support for various features, e.g. update frequencies. We really had no good answer, given that we weren't using RDF for any useful purpose. Further, because RDF can be expressed in XML in multiple ways, I was uncomfortable publishing a DTD for RSS 0.9, since the DTD would claim that technically valid RDF/RSS data conforming to the RDF graph model was not valid RSS. Anyway, it didn't feel "clean". The compromise was to produce RSS 0.91, which could be validated with any validating XML parser, and which incorporated much of userland's vocabulary, thus removing most (I think) of Dave's major objections. I felt slightly bad about this, but given actual usage at the time, I felt it better suited the needs of its users: simplicity, correctness, and a larger vocabulary, without RDF baggage.
5) We shipped the thing in a very short time, meeting the time constraints, then spent a month or two fixing it all. :-) It was apparently not deemed "strategic", and thus was never given more than maintenance attention.
6) People on the net began creating all sorts of tools on their own, and publishing how-to articles, and all sorts of things, and using it in ways not envisioned by, err, some. And now we are here, debating it all over again. Fortunately, this time it is in an open forum.
groups.yahoo.com/group/syndication/message/586 (2003)

Simplicity won, richness lost.
Dave Winer, who runs the Scripting News weblog and who is a central, vocal proponent of simple RSS, comments:

"Weblogs are often-updated sites that point to articles elsewhere on the web, often with comments, and to on-site articles. A weblog is kind of a continual tour, with a human guide who you get to know. There are many guides to choose from, each develops an audience, and there's also camaraderie and politics between the people who run weblogs, they point to each other, in all kinds of structures, graphs, loops, etc. Today, there are hundreds of thousands of weblog sites, and the market for tools for managing such sites is growing quickly. My company, UserLand, makes two products for weblogs, Manila, which is a centralised server-based content management system; and Radio UserLand which provides easy and powerful weblogging from the desktop. The first weblog was the first website, http://info.cern.ch/, the site built by Tim Berners-Lee at CERN. From this page TBL pointed to all the new sites as they came online. Luckily, the content of this site has been archived at the World Wide Web Consortium. (Thanks to Karl Dubost for the link.) NCSA's What's New page took the cursor for a while, then Netscape's What's New page was the big blog in the sky in 1993-96. Then all hell broke loose. The Web exploded, and the weblog idea grew along with it. I did my first weblog in February 1996, as part of the 24 Hours of Democracy website. It helped glue the community together, along with a mail list that was hosted by AOL. In April 1996 I started a news page for Frontier users, which became Scripting News on 4/1/97. Other early weblogs include Robot Wisdom, Tomalak's Realm and CamWorld."
newhome.weblogs.com/historyOfWeblogs (2003)

But isn't it ironic: all this work has gone on, and all I care about is the fact that the current RSS standards (all of them) are pretty simple. It's a joy to be able to work on creating good systems and good software for something which can be useful for many, without the huge technical overhead you would need to even begin to add value if you had to deal with picture files or video files, 3D or even Microsoft Word files. Even email is not simple and pure any more. POP and SMTP, the two main protocols for receiving and sending email, have been subverted by AOL & Microsoft (Hotmail), whose services are only accessible through their own, secret protocols.
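To show just how simple the format is, here is a minimal sketch of an RSS 0.91-style feed built with Python's standard library. The channel and item values are invented for illustration, not taken from any real weblog:

    import xml.etree.ElementTree as ET

    # Build a channel with one item: a title, a link and a short description,
    # which is essentially all an aggregator needs to display a headline.
    rss = ET.Element("rss", version="0.91")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example Weblog"
    ET.SubElement(channel, "link").text = "http://example.org/"
    ET.SubElement(channel, "description").text = "Headlines from an example weblog."

    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "A new post"
    ET.SubElement(item, "link").text = "http://example.org/2003/a-new-post"
    ET.SubElement(item, "description").text = "A short summary for aggregators to display."

    print(ET.tostring(rss, encoding="unicode"))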

did I blog?

My own venture into the Internet started in 1994. A few years later I registered the liquid.org domain, which has sadly become extinct (I now use liquidinformation.org). I would write articles on the site, but instead of organising them by date, I would organise them by topic, though clearly indicating which articles were new. A blog? That's not a word I used at the time of course, but since the HTML authoring tools were so ridiculously easy to use (Adobe PageMill, which I still use, is easier than Microsoft Word for example) I couldn't understand why people with web sites didn't constantly update them and add to them.

The site’s main page, with the link to the ‘Articles’ section in the menu on the bottom:

[screenshot]

Design-wise it's similar to a few of my other sites; I fell in love with frames (frowned upon by many, I know) to produce a border around the pages. When you click on the 'Articles' link you get this page:

[screenshot]




Here you can see that one category is already open. Click on an article and you get:

[screenshot]


‘real’ blogs

Rebecca Blood, blog pioneer and author, tells of how web pages started to be updated on a regular basis in an organised manner:

"In 1998 there were just a handful of sites of the type that are now identified as weblogs (so named by Jorn Barger in December 1997). Jesse James Garrett, editor of Infosift, began compiling a list of "other sites like his" as he found them in his travels around the web. In November of that year, he sent that list to Cameron Barrett. Cameron published the list on Camworld, and others maintaining similar sites began sending their URLs to him for inclusion on the list. Jesse's 'page of only weblogs' lists the 23 known to be in existence at the beginning of 1999.
Suddenly a community sprang up. It was easy to read all of the weblogs on Cameron's list, and most interested people did. Peter Merholz announced in early 1999 that he was going to pronounce it 'wee-blog' and inevitably this was shortened to 'blog' with the weblog editor referred to as a 'blogger.'
At this point, the bandwagon jumping began. More and more people began publishing their own weblogs.”
www.rebeccablood.net/essays/weblog_history.html (2003)