[ Back to Index of Web Works ] part of George's Web Site

Related Web Works


http://www.bcs.org.uk/news/timbl.htm ------------------------------ BCS Main Menu | Consult Contents | News Stand ----------------------------------------------------------------------------

The World Wide Web - past, present and future

Tim Berners-Lee

[Tim] Press release

Tim Berners-Lee was awarded a Distinguished Fellowship of the British Computer Society on July 17, 1996 at the new British Library in London. The following is a transcript of his presentation:

It is a great honour to be distinguished by such a Fellowship and I should immediately say two things:

* one is that, of course, the Web was developed by a whole lot of people across the Internet, who discovered about it on Internet User Groups and went away with the ideas and started playing and encouraging each other, and developing a little grass-roots community. The Web owes an incredible amount to those across the Internet, and also to the "bosses who didn't say no" who are now all wearing halos across the planet, and who really enabled this sort of thing to grow to the point where they didn't have the option of saying no.

* the other thing I would say is that the ideas existed before putting it together in the World Wide Web, which is basically trivial.

So what's special about it? What I think we are celebrating then is the fact that dreams can come true. So many times it would be nice for things to be this way but they don't come out for one reason or another. The fact that it did work is just so nice; that dreams can come true. That's what I've taken away from it, and I hope that it applies to lots of other things in the future.

I'll go back now over a little bit about the origins, a bit about the present, and just a little bit about the future, very much in overview.

The Past

The original intent of the Web was that it should be let's start with a definition the 'universe of network accessible information'. The point about it being a universe is that there is one space. The most important thing about the Web is this URL space, this nasty thing which starts with HTTP. The point of a URL is that you can put anything in there, so the power of a hypertext link is that it can point to absolutely anything.

That is why, whereas hypertext had been very exciting beforehand, and there had been a little community that had been happily going on for several years making Hypertext systems that worked across a disc or across the file system, when the Web allowed those hypertext links to point to anything and it suddenly became a critical mass, it became really exciting. Maybe that will happen to some other things as well.

In fact the thing that drove me to do it (which is one of the frequently asked questions I get from the press or whoever) was partly that I needed something to organise myself. I needed to be able to keep track of things, and nothing out there, none of the computer programs that you could get, the spreadsheets and the databases, would really let you make this random association between absolutely anything and absolutely anything, you are always constrained.

For example, if you have a person, they have several properties, and you could link them to a room of their office, and you could link them to a list of documents they have written, but that's it. You can't link them to the car database when you find out what car they own without taking two databases and joining them together and going into a lot of work. So I needed something like that.

I also felt that in an exciting place like CERN, which was a great environment to be in, and to start this. You have so many people coming in with great ideas, doing some work, and leaving with no trace of what it is they've done and why they did it the whole organisation really needed this. It needed some place to be able to cement, to put down its organisational knowledge.

And that idea, of a team being able to work together, rather than by sequence of grabbing somebody at coffee hour and bringing somebody else into the conversation, and having a one time conversation that would be forgotten, and a sequence of messages from one person to another.

Being able to work together on a common vision of what it is that we believe that we are doing, and why we think we are doing it, with places to put all the funny little 'this is why on Tuesday we decided not to do that'. I thought that would be really exciting, I thought that would be a really interesting way of running a team, maybe we could work towards that goal: that dream of the 'self-managing team'.

So, that was why these were the original goals. Universal access means that you put it on the Web and you can access it from anywhere; it doesn't matter what computer system you are running, is independent of where you are, what platform you are running, or what operating system you've bought and to have this unconstrained topology, which because hypertext is unconstrained it means you can map any existing structures, whether you happen to have trees of information or whatever.

As people have found, it is very easy to make a service which will put information onto the Web which has already got some structure to it, which comes from some big database which you don't want to change, because Hypertext is flexible, you can map that structure into it.

In the early days, talking over tea with somebody, I was comparing it to a Bobsled: There was a time before it was rushing downhill that there was quite a lot of 'pushing' to be done. For the first two years, there was a lot of going around explaining to people why it was a really good idea, and listening to some of the things that people outside the hypertext community come back with.

The hypertext community, of course, knew that Hypertext was cool, and why doesn't everybody like it? Why doesn't everybody use it? People felt that Hypertext was too confusing - the 'oh, we'll be lost in it won't we' syndrome. Also I was proposing to use an SGML type syntax. SGML at the time was mainly used in a mode whereby you would write an SGML file and you would put it in for batch processing perhaps overnight on an IBM mainframe and with a bit of luck you would find in the morning a laser printed document.

But the idea of doing this SGML parsing and generation of something that could be read in real time was thought to be ridiculous. People also felt that HTML was too complex because 'you have to put all those angle brackets in'. If you're trying to organise information get real you're not going to have people organising it. You 'can't ask somebody to write all those angle brackets just because they want to make something available on an information system, this is much too complex'.

Then there was also a strong feeling, and a very reasonable feeling at CERN, that 'we do high-energy physics here'. If you want some special information technology, somebody has bound to have done that already, why don't you go and find it. So that took me with a colleague, the first convert, Robert Cailliau, to the European Conference on Hypertext at Versailles where we did the rounds of trying to persuade them all these people who had great software and great interfaces and had done all the hard bits to do the easy bit and put it all on-line.

But having, perhaps due to lack of persuasive power, not succeeded in that, it was a question of going home and taking out the NeXT box.

Using NeXT was, I think, both a good and a bad step. The NeXT box is a great development environment, and allowed me to write the WorldWideWeb Program. (At that time it was spelled without any spaces. Now there are spaces, but for those of you who are interested in that sort of thing, there are no hyphens).

So the WorldWideWeb was a program I wrote at the end of 1990 on the NeXT. It was a browser editor and a full client. You could make links, you could browse around it was a demonstration which was fine but, of course, very few people had NeXT and so very few people saw it.

At CERN, there was a certain amount of raised eyebrows and it was clear that we wanted it on MAC, PC and Unix platforms but there wasn't the manpower to do it. So it we went around conferences and said 'hey, look at this. If you have a student, please suggest they go away and implement this in a flashier way and on one of the platforms please'. There was a couple of years of that.

There was also the Line Mode Browser which was the first real proof of universality. The Line Mode Browser was a very simple Web browser that runs on a hard-copy terminal. All you need is the ASCII character set, carriage line feed, and you can browse and print a node out, with little numbers by all the links at the bottom and you can choose a number (I mention these things just because sometimes its worth remembering that the path through A to B is sometimes through C,D and E F and G in totally different places).

It was necessary to put the Line Mode Browser out to get other people who didn't have NeXT able to access the Web, so that nobody has the excuse not to be able to access it. The next thing I see is a newspaper article saying that the WorldWideWeb is a system for accessing information 'using numbers'

There is a snowball effect here. It is very difficult when you produce a new information system. You go to someone and say 'hey, look in here' and they say 'What? What have you got?' 'Its all about the World Wide Web', they say 'big deal'. So you say 'why don't you put some more information in here' and they say 'who's looking in it?' and you have to say 'well, nobody yet because you haven't put any information in yet'. So you've got to get the snowball going. Now that's happened and you can see the results.

Initially the first thing we put on the Web was the CERN phone book which was already running on the mainframe. We did a little gateway which made the phone book appear in Hypertext with a search facility and so on. For the people at CERN there was a time when WWW was a rather strange phone book program - with a really weird interface! During that time gopher was expanding at an exponential rate and there was a strong feeling that gopher was much easier, because with gopher you didn't have to write these angle-brackets.

But the Web was taking off with distributed enthusiasm; it was the system administrators who were working through the night when it got to 6 o'clock in the morning and they decided that 'hey, why bother going home' and they started to read 'alt.hypertext' (yes, hypertext an alternative news group, one of those alternative sciences). Alt.hypertext is where you had to discuss this sort of thing. These systems administrators were the people who would only read the alternative news groups and they would pick up the software and play with it. Then by 8 o'clock in the morning you'd have another Web Server running with some new interesting facet and these things would start to be linked together.

There were some twists and turns along the winding road, there was my attempt to explain to people what a good idea URLs were they were called UDIs at the time, Universal Document Identifiers and then they were called Universal Resource Identifiers, then they were called Uniform Resource Locators (in an attempt to get it through the IETF, I consented that they could be called whatever they liked).

I made the mistake of not explaining much about the Web concepts, so there was a 2-year discussion about what one could use identifiers names and addresses for. It is pretty good if you're into computer science, you know you can talk for any length of time about that kind of thing without necessarily coming to any conclusion.

It's worth saying that I feel a little embarrassed accepting a fellowship when there are people like Pei Wei a very quiet individual who took up the challenge. He read about the World Wide Web on a newsgroup somewhere and had some interesting software of his own; an interpreted language which could be moved across the NET and could talk to a screen.

Basically, he had something very like Java, and as he went ahead and wrote something very much like Hot Java, the language was called 'Viola' and the browser was called 'ViolaWWW'. It didn't take off very quickly because you had to first install 'Viola' nobody understood why you should install an interpreter, and then this 'WWW' in a Viola library area. You had to be system administrator to do all that stuff, it wasn't obvious. But in fact what he did was really ahead of his time. He actually had Applets running. He had World Wide Web pages with little things doing somersaults and what have you.

Then there was a serious turning point when someone at NCSA brought up a copy of 'Viola' and Mark Andreeson and company saw it and thought 'Hm, we can do that'. Mark Andreeson worked the next 14 nights, or something, and had Mosaic.

One other thing he did was put in images, and after that the rest is more or less history. Nothing had really changed from the Line Mode Browser in that Viola was just a browser, it was not an editor. And the same for Erwise which had preceded it. In fact there is another one called Cello which had been written for the PC which preceded Mosaic, and in each case they wrote a World Wide Web client which is a piece of software which can browse around the Web, but unlike the original program you couldn't actually edit or make links very easily.

I think this was partly because NeXTStep software has got some neat software for making editable text, which is difficult to do a WYSIWYG Word processor from the ground up. But it is also because when you get to the Browser, you get all excited about it, you get people mailing you and end up having to support it and answer questions about it.

Mark Andreeson found himself deluged by excitement in Mosaic, and still we didn't have anything which could allow people to really write/create links easily with a couple of key strokes, until NaviPress - who's heard of NaviPress - a little Company bought by AOL and now called AOL Press. They are still there, and a number of other editors which actually allow you to go around and make links although still not as intuitively as I would have liked.

So those are some of the steps, there are lots of other ones and many anecdotes, but this was the result as seen from CERN [refers to Figure showing straight line growth of use of WWW on CERN server, with vertical axis on an logarithmic scale]. This shows the load on the first WWW server. By current terms its not a very big hit rate. Across the bottom is from July '91 to July '94 and there is a logarithmic scale up the side of total hits per day.

The crosses are week days and the circles are weekends and you can see what happened I call that a straight line you can see that every month, when I looked at the log file it was 10 times the length of the log file for the same month the previous year. There are a couple of dips in August and there are a couple of places where we lost the log information when the server crashed and things.

People say 'When did you realise that the Web was going to explode like this?' and 'when did it explode?'. In fact if you look, there was the time when the geek community realised that this was interesting, and then there was the time when the more established computer science and high energy physics community realised that this was interesting, and then there is when Time and Newsweek realised it was interesting. If you put this on a linear scale, you can pick your scale and look for a date on which you can say it exploded, but in fact there wasn't one. It was a slow bang and it is still going on. It's at the bottom of an 'S' Curve and we are not sure where the top is.

The Present

And then after the bang we are left with the post-conceptions (the reverse of pre-conceptions). One of those was that because the first server served up Unix files, there was an assumption that those things after the HTTP: had to be Unix file names. A lot of people felt locked into that, and it was only when Steve Putts put up an interesting server where URLs were really strange but would generate a map of anywhere on the planet to any scale you wanted and with little links to take you to different places and change the scale.

After a few other really interesting servers which had a different sort of information space, the message got through that this is an opaque string and you could do with it what you like. This is a real flexibility point, and it's still the battle to be fought. People try to put into the protocols that a semi-colon here in the URL will have a certain significance, and there was a big battle with the people who wrote 'Browser' that looked at the '.html' and concluded things about what was inside it wrong! URL is not a file name, it is an opaque string and I hope it will represent all kinds of things.

People kept complaining about URLs changing - well, that was a strange one because URLs don't change, people change them. The reasons people change them are very complex and social (and that gets you back into the whole naming and addressing loop) but there was a feeling for a while that there should be a very simple, quick cure to making a name space, in which you would just be able to name a document and anybody would be able to find it.

After a lot of discussions in the ITF and various fora, it became clear that there was a lot of social questions here, about exactly who would maintain that, and what the significance of it was, and how you would scale it. In fact there is no free lunch, and it is basically impossible.

There was the assumption that, because links were transmitted within the HTML that they had to be stored within HTML files, until people demonstrated that you could generate them, on the fly, from interesting programs. And from the assumption that clients must be browsers it seemed to follow that they can't be editors - for some reason although everybody has got used to WYSIWYG in every other field, they would not put up with WYSIWYG in the Web.

But people had to write HTML you have to write all those angle brackets. It was one of the greatest surprises to me that the community of people putting information on line was prepared to go and write those angle brackets. It still blows my mind away, I'm not prepared to do it, it drives me crazy.

But now we hear back from these people who got so into writing the angle brackets, that HTML is far too simple; we need so many exciting new things to put in it because we need to be able to put footnotes, and frames, and diagonal flashing text that rotates and things. Didn't things change over those few years?

And where are we now? Well, what you actually see when you look at the Web is pretty much a corporate broadcast medium. The largest use of the Web is the Corporation making a broadcast message to the consumer. I'd imagined initially that there would be other uses and I talked a bit about group work business, but clearly once you've put something up, if there is any incentive - whether it is psychological or monetary or whatever, because your audience is very large, it is very easy for you to push it up the scale, it pays you very much to go for that global audience. You can afford to put in so much more effort if you have got a global audience for your advertising, for your message, subtle or not. So that is what is seen. And there is some cool stuff.

There is VRML; 3-D sites where you wander through 3-Dimensional space, maybe that will become really interesting (actually I think it will happen because to do 3-D on a machine you need a fast processor but you don't need a fast phone line and I think the fast processors are coming a lot faster than the fast phone lines). So 3-D is something which may happen a long time before video.

There are style sheets coming out which will allow you to do that flashing orange diagonal rotating text. You can redo all your company wide documents with the flick of a button just by changing the style sheet without having to change all that HTML.

There's Java, which is really exciting. At last the Web has given the World an excuse to write in a decent programming language instead of C or Fortran. Begging your pardon, there have been object oriented programming languages before now, but if a real programmer programmed in one, typically the boss would come round and say 'sorry, that's fine, son, but we don't program like that in this organisation', and you have to go away and re-write it all in C. Just the fact that the Web has been there to enable a new language to become acceptable is something.

What's the situation with the Web itself as an information space? From the time when there was more than one browser, there was tension over fragmentation. Whenever one browser had a feature, an adaptation of the protocol, and the other one didn't, there was a possibility that the other one would adapt, would create that feature but use a slightly different syntax, or a very different syntax, or a deliberately different syntax.

You get places where you find a little message which says, 'this page has been written to work with Mosaic 5.6 or NetScape 3.0 or Internet Explorer 2.8 or whatever it is, and its best for you to use that browser'.

And now? Do you remember what happened before The Web? Do you remember this business when you wanted to get some information from another computer: you had to go and ask somebody how to use this telnet program, you had to take it to someplace and FTP files back onto a floppy disc, you picked the floppy disc up and went down the corridor, and it wouldn't even fit in your computer!

When you got yourself a disc-drive that would take it, then the disc format was wrong, so you got yourself some software, and with someone's help you could read the format on it, and what you got was a nice binary WordStar document and there was no way you could get it into Word Perfect or Word Plus 2.3 - remember that? Do you remember how much time you spent doing all that?

Well, the people who put these little things at the bottom of their web pages saying this is best viewed using 'Fubar' browser are yearning, yearning to get back to exactly that situation. You'll have 17 Web browsers on your page and you'll get to little places which say 'now please switch to this' and 'now please switch to that' and suddenly there is not one World Wide Web, but there a whole lot of World Wide Webs.

So if any of you have got Web Masters out there put those little buttons on there saying this is best used using a particular browser, suggest they put 'this is best used using a browser which works to the following specifications: HTML 3.2', or something like that. You can go back this evening; email them from your homes, and tell them that I just mentioned it.

So there is a tension of fragmentation, what are we going to do about it? In 1992 people came into my office, un-announced, from large companies, sometimes more than one company at a time. I remember one in particular when four people came and sat down around a table and banged it and said 'Hey, this Web is very nice but do you realise that we are orienting our entire business model around this. We are re-orienting the company completely, putting enormous numbers of dollars into this, and we understand the specifications are sitting on a disc you have somewhere here. Now what's the story, how do we know it is still going to be there in 10 years, and how do we put our input into it?'

I asked, of course, what they felt would be a good solution to that, and I did a certain amount of touring around and speaking to various institutes, and the result was I felt there was a very strong push for a neutral body. Somewhere where all the technology providers, the content providers, and the users can come together and talk about what they want; where there would be some facilitation to arrive at a common specification for doing things. Otherwise we would be back to the Tower of Babel.

So hence the World Wide Web Consortium. The Consortium has 2 hosts; INRIA in France for Europe, and MIT for North America. We are also looking at setting up various things in the Far East. We have 145 members at the last count (maybe it's 150 now it seems that the differential between my counting and my talking about it is 5). We are a neutral forum - we facilitate, we let people come together.

We actually have people on the staff who have been editing Web specs, are aware of the architecture, are basically very good. They can sit in on a meeting and edit a document, know when people are saying silly things, and produce a certain amount of advice. We have to move fast.

We are not a standards organisation, I'm sorry. We do not have meetings from every one of our 150 or whatever it is, countries in the world sitting round, and we do not have 6 month timescales. Sometimes we have to move extremely rapidly when there is a need for something in the marketplace and the community wants to have a common way of doing it. So we don't call what we do 'standards' we call them 'specifications' .

We have just introduced a new policy by which we can simply ask the members whether they think something is a good idea, and if they do then we call it a 'recommendation' as opposed to a 'standard'. In fact what happens is that when we get together, the engineers who know what they are talking about from the major players (the are primary experts in the field), write a little piece of specification, put their names on it, and its all over bar the shouting. Everybody takes that 'spec' and runs with it, de-facto 'standards' arrive in most cases.

But every area is different and so we have to be very flexible. Some areas we have to consult, we have to be more open, there are more people who want to be involved. In some areas we have to just move extremely rapidly because of political pressure.

At the same time we like to keep an eye on the long-term goals, because although the pressures are fairly short-term there is a long-term architecture. There are some rules in the World Wide Web; like the fact that URLs are opaque; like the fact that you don't have to have HTTP at the beginning of a URL but you can move onto something else; like the fact that HTTP and URLs are independent specifications and HTML is independent of HTTP, you can use it to transport all kinds of things.

If, originally, the specs had fixed that the World Wide Web uses HTTP and HTML we wouldn't have Java applications or other things being transported across the Web. We wouldn't be able to think about new protocols.

The Future

Its worth saying a word about the long-term goals. There is still a lot of work before this can be an industrial strength system, so that when you click on the link you know you are going to get something.

There are a lot of things that have got to change, such as redundancy which has got to be able to happen, just fixing everything 'under the hood' so that you can just forget about the infrastructure. Something which is very complicated, involves some pretty difficult problems in computer science, and its important.

More on the evident side; I have a horizontal scale between the individual human interaction at the end, through to the corporation talking to the masses? I'd originally imagined that the point about the Web was that you would also be able to have personal diaries, and in that personal diary you'd be able to make a note, and you'd be able to put a pointer to the family photograph album, and your brother's photograph album, which are just accessible to the family, or the extended family.

You would be able to put a pointer to a meeting you've got to go to at work, but the meeting agenda would be just visible to and used by a little group of people working together, and that in turn would be linked to things in the organisation of the town you are living in, such as the school. Imagine that you have a range of things going up through what is called the Intranet (the World Wide Web scaled down for corporate use), to the whole global media thing, and that this would all be one smooth continuum.

I thought it was simple, we just needed to get browser/editors which were good and then we would be able to play. To a certain extent that's true.

When we do have browser editors we'll be able to do a lot more, but there is a lot more that you need. You need to have trust; you need to be able to make sure that other people don't see those photograph albums and what have you. There is a lot of infrastructure that has still to be put together, but I am very interested in the Web being used across that scope.

I'm also interested in these machines that we all have on our desks being actually used to help us. What they are doing at the moment is delivering things for us to read, decisions for us to make and information for us to process. For us to process! Hey, what about these computer things? I thought the idea was they were supposed to do some of the work. At the moment they can't.

They could, in fact, do it when its a database but they haven't a chance on the Web because everything on the Web is written in bright shining pink and green for your average human reader, who can read English (who can read pretty bad English at times), so if you and I have difficulty parsing it, going out and asking a machine to solve the problem is pretty difficult at the moment.

Let's suppose there is a house for sale and you want to buy it. You would like to know that person really owns it. Suppose you don't have a Land Registry, so you go and you find the Title Deeds, which are on the Web, as are all the transfers of ownership going way back. They are there, but it's a lot of work to go back through all of them unless they are put in a form that is actually a semantic statement, some knowledge representation language statement.

Knowledge representation is another thing that people have played with, but it really hasn't taken off in a tremendous way on a local scale. Maybe it is something that, if we can get the architecture right globally, then that would take off too. Then you would be able to simply ask your computer to go out and find an interesting house, the sort of house you like, within your price range, and see if it is really owned by the person who is selling it (or whether in fact they sold off half of the back garden 10 years ago but they hadn't told you). It would be able to go and make all the assumptions, it would be able to figure out in fact whether the documents it reads it ought to believe, by tracing through the digital signatures and what have you.

Those are some long-term goals. They are not things that the press, the consortium, the members, the newsgroups, talk about all the time, but they are things we are trying to keep in the back of our minds.

I'll go through very rapidly the areas that W3C is actually developing or could develop. There are basically 3 areas of work:

The User Interface and Data Formats, the parts of the architecture and protocol which are affected by, and specifically affect, the sort of society that we can build on the Web. The sort of things that are in the user interface area are the continual enhancement of HTML for more and more features, putting different sorts of SGML documents onto the Web, solving the internationalisation problem (or at least trying to take those pieces of the internationalisation problem, type-setting conventions, such as type-setting in different directions and character sets) and trying to take those solutions which exist and show how you can use them in a consistent way on the Web. Style sheets, graphics in 3-Dimensions. The PNG format, for example, is a new graphics format to replace the graphics interchange format because its bigger and better, which we have been encouraging (although not doing ourselves). Most of the user interface and data formats work is done in Europe.

There is the whole area of Web protocols in Society, security and payment and the question of how parents can prevent their children from seeing material which they don't want them to be viewing until they are old enough. It is this pressure to protect a child, until the age of digital majority, particularly in the United States but also in Germany and various other countries, that has produced the Platform for Internet Content Selection, or PICS system. This is an initiative which has produced specifications which should, I hope, be in software and usable by the end of 1996.

There are other exciting things on the horizon. Such as protocols to actually transfer semantic information about international property rights. Can you imagine taking the licence information on the back of a floppy disc, one of those in such small type that if you blew it up to a readable size it would probably be poster size, and actually trying to code that up into some sort of semantic language I can't, but maybe we can work in that direction. Questions of how to find the demographics of who is looking at your site without infringing the privacy of any individual person.

The third area of Web architecture is looking at the efficiency and integrity of the Web. How do you prevent the problems of dangling links; find out when you have linked to a document which no longer exists; rapidly and painlessly.

How do you get copies of heavily used documents out to as many places as you can, all over the planet, and having done that, how does the person in the arbitrary place find out where the nearest one is? These are part of the unsolvable naming problem.

In general, we are aiming to bring the thing up to industrial strength. We had a workshop about this recently. There is the question of whether these things we find on the Web are really objects and what does that mean? Does this mean that the distributed object world should somehow merge, there should be some mapping between Web objects and distributed objects. What does this mean?

And that raises the question of mobile code objects, which actually move the classes around. There are lots of exciting things going on, not that the average user would notice apart from the fact that they get little gismos turning corners of the tops of their Web pages when Java applications come over.

There is just one more thing that I want to emphasise. I initially talked about the Web, and said that I wanted it to be interactive. I meant this business about everybody playing at the level where you have more than one person involved but not the whole Universe. Perhaps you've got a protected space where you can play.

I feel that people ought to be able to make annotations, make links and so get to the point where they are really sharing their knowledge. I talked about interactivity. I found people coming back to me and saying 'Isn't it great that the Web is interactive' and I'd say 'Huh?'. 'Well you know you can click on these buttons on forms and it sends data right straight into the machine'.

I felt that if that is what people meant by interactivity then maybe we need another word (I say this with total apology because I think people who make up new words are horrible) but lets just for the purpose of this slide talk about intercreativity, something where people are building things together, not just interacting with the computer, you are interacting with people and being part of a whole milieu, a mass which is bound together by information.

Hopefully with the computers playing a part in that too. To do that we need to integrate people with real-time video that you hear so much about. Why isn't it better integrated with the Web? Why can't I when I go to the library the virtual library that is find people's faces and actually start talking to them? Why don't I meet somebody in the library?

The nice thing about the virtual library is that you are allowed to talk in it, except that talking protocols haven't been hooked into the Web protocols yet, so we just need to do a little hooking together (Ha! 'a little bit of hooking together there' sounds like 3 years work of solid standardisation meetings).

How about having objects that you can manipulate. I'd like to be able to hold a virtual meeting in a 3-Dimensional area where there is a table and where you can move the chairs around, and when I move the chairs you see it happen.

We could build graphs and models, mathematical models, real models, engineering models, little models of new libraries, to see if we can make them look nice sitting next to St.Pancras Station or something. I'd like to be able to see all that happen in the Web so that means building into the infrastructure objects; objects which know how to be interacted by many people at once and being able to update their various many instances and copies throughout the World.

The military folks use 3-D digital simulation technology for playing tank battles and maybe there will be some good stuff coming out of that, I don't know. But a very simple thing would be to notify somebody that something has changed. It's great having this model of global information - you write something, I go in and I change it, and put an important little yellow post-it sticker on it, but if you don't find out that I've done it then it's not very much use, so we need to have ways of notifying both people and machines that things have changed.

I would like to see people get more involved in this; at the moment, it doesn't look like one great big television channel, but lots and lots and lots and lots of very shallow television channels and basically the mouse is just a big television clicker. There must be more to life.

So, let me conclude with a few challenges that as a community we have. One is making the most of this flexibility, we have got to, we need to keep it flexible. We need to be able to think our way past the Web as a set of linked Hypertext documents.

Hopefully pretty soon the Web infrastructure, the information space, will be just a given, like we assume IP now (we don't worry about IP, although we should, because its running out of address space and all kinds of stuff and nobody is funding the transatlantic links). We just kind of assume that Internet protocol is there, the Internet is there and we are worrying about how we build the Web on top of it.

We've got to make sure that there is somebody there having the next bright idea and can use that flexibility to make something which has got a totally different topology which is used to solve a totally different problem. To do that, we have got to make sure that we are not, in our designs, constraining the future evolution, we're not putting in those silly little links between specifications.

Let me give you just one example: It is possible with some browsers to put a piece of HTML on the Web. The server delivers it to the browser and inside one of the tags is an attribute, and the attribute value is quoted, and inside the attribute value is a quoted string. It's normally used to be able to write 10 for something like a point size or a width or a whatever but now you can put a little piece of Javascript in there and some browsers, if they don't see 10 but something in curly brackets, they will just send it off to the Javascript interpreter.

Now if you've actually got a Javascript interpreter this is dead easy. You can do that in 2 lines of code; just take the curly brackets off and call Javascript, but just think what's happened. In ten years time, to figure out what it meant, not only do you have to look up the old historical HTML space, but you have also got to find Javascript. Javascript is going to be changing and so you thought you had a nice, well defined language, but it's just one line's reference from that specification to the other specification.

In fact, you've got a whole big language specification except that in one part of it it's got angle brackets, and the other part of it has curly brackets and semi-colons; and they are totally different, one thing is totally incomplete and the other is self modifying. And so, by not saying 'by the way this document is in HTML and Javascript 2.0 that one little trap then its the sort of thing which could trip us up later.

The third thing which is really important is that we have to realise that when we define these protocols and the data formats, we are defining things like the topology of the information. We are defining things like who can get access to what information. We are defining things about privacy; about identity; how many identities you can have; whether it is possible to be anonymous; whether it is possible for some central body to do anything at all; whether it is possible for a central body to do lots of things like find out the identity of anonymous people.

Whether there is a right for two people to have a private conversation, which we rather assume at the moment because they can go into the middle of a big field, but does that right hold in Cyberspace? If it does, does this mean that the world will fall apart because terrorism will be so easy? Do all these questions about society come back to the protocols we define, which define the topology in the properties of Cyberspace.

So if you think you're a computer programmer. If you think you're a language designer. If you think you're a techie; and one of the nice things about being a technie is that you can forget all that ethics stuff because everybody else is doing that and thank goodness you didn't have to take those courses you are wrong.

Because when you design those protocols you are designing the space in which society will evolve. You are designing, constraining the society which can exist for the next 10-20 years.

I'll leave you with that thought.

Tim Berners-Lee 17 July 1996

Press release

---------------------------------------------------------------------------- BCS Main Menu | Consult Contents | News Stand ---------------------------- http://www.w3.org/pub/WWW/People/Berners-Lee-Bio.html/9602affi.html ----------------------------------------------------------------------------

Declaration presented by Tim Berners-Lee

I, Timothy J Berners-Lee, depose and state as follows:

1. I am the inventor of the World Wide Web and the Director of the World Wide Web Consortium (W3C). The W3C is a consortium of over 120 computer and communications companies who have come together to maintain and develop the technical standards that are at the heart of the World Wide Web. The W3C is operated within the Laboratory for Computer Science at the Massachusetts Institute of Technology.

BACKGROUND OF THE WORLD WIDE WEB

2. Purpose. I created the World Wide Web (W3) to serve as the platform for a global, online store of knowledge, containing information from a diversity of sources, and accessible to Internet users around the world. Though information on the Web is contained in individual computers, the fact that each of these computers is connected to the Internet through W3 protocols allows all of the information to become part of a single body of knowledge. It is currently the most advanced information system developed on the Internet, and embraces within its data model most information in previous networked information systems such as ftp, gopher, wais, and Usenet.

3. History. W3 was originally developed at CERN, the European Particle Physics Laboratory, and initially used to allow information sharing within internationally dispersed teams of researchers and engineers. Originally aimed at the High Energy Physics community, it has spread to other areas and attracted much interest in user support, resource recovery, and numerous other areas which depend on collaborative and information sharing. The Web has extended beyond the scientific and academic community to include business-to-business communication, political organizing and activism, community development, library collection management, art display and archiving, alternative dissemination mechanisms for a variety of popular music, and access to local, state and federal government information.

4. Basic Operation. The World Wide Web is a series of documents stored in different computers all over the Internet. Documents contain information stored in a variety of formats, including text, still images, sounds, and video. An essential element of the web in that any document has an address (rather like a telephone number). Most web documents contain "links". These are short sections of text or image which refer to another document. Typically the linked text is blue or underlined when displayed, and when selected by the user, the referenced document is automatically displayed, wherever in the world it actually is stored.

Links for example are used to lead from overview documents to more detailed documents, from tables of contents to particular pages, but also as cross-references, footnotes, and new forms of information structure.

Many organizations now have "home pages". These are documents which provide a set of links designed represent the organisation, and through links from the home page, guide the user directly or indirectly to information about or relevant to that organisation.

As an example of the use of links, if this affidavit were to be put on a World Wide Web site, it's home page might contain links such as these:

* BACKGROUND OF THE WORLD WIDE WEB * PUBLISHING ON THE WORLD WIDE WEB * DESIGN AND ARCHITECTURE OF THE WORLD WIDE WEB * MEANS FOR PROTECTING CHILDREN FROM INAPPROPRIATE MATERIAL AND AVOIDING UNWANTED MATERIAL

Each of these links takes the user of the site from the beginning of the affidavit, to the appropriate section within the document. Links may also take the user from the original Web site to another Web site on another computer connected to the Internet. These links from one computer to another, from one document to another across the Internet, are what unify the Web into a single body of knowledge, and what make the Web unique.

PUBLISHING ON THE WORLD WIDE WEB

5. Publishing. The World Wide Web exists fundamentally as a platform through which individuals and organizations can communicate through shared information. When information is made available, it is said to be published on the web. Publishing on the Web simply requires that the "publisher" has a computer connected to the Internet and that the computer is running W3 server software. The computer can be a simple as a small personal computer costing less than $1500 dollars or as complex as a multi-million dollar mainframe computer. Many Web publishers chose instead to lease disk storage space from someone else who has the necessary computer facilities, eliminating the need for actually owning any equipment oneself.

6. The Web, as a universe of network accessible information, contains a variety of documents prepared with quite varying degrees of care, from the hastily typed idea, to the professionally executed corporate profile. The power of the web stems from the ability of a link to point to any document whatsoever its status or physical location. Like paper, the Web is a universal medium, with nothing built into its nature to constrain the organization or content when it is used.

7. Information to be published on the web must also be formatted according to the rules of the Web standards. These standardized formats assure that all Web users who want to read the material will be able to view it in. Web standards are sophisticated and flexible enough that they have grown to meet the publishing needs of many large corporations, banks, brokerage houses, newspapers and magazine which now publish "online" editions of their material, as well as government agencies, and even courts, which use the Web to disseminate important information to the public. At the same time, Web publishing is simple enough that thousands of individual users and small community organizations are using the Web to publish their own personal "home pages," the equivalent of individualized newsletters about that person or organization and available to everyone on the Web.

8. Web publishers have a choice to make their web sites open to the general pool of all Internet users, or close them, and thus make the information accessible only to those with advance authorization. Many publishers chose to keep their sites open to all in order to give their information the widest potential audience. In the event that the publishers chooses to maintain restrictions on access, this is generally accomplished by assigning specific user names and passwords as a prerequisite to access to the site. Or, in the case of Web sites maintained for internal use of one organization, access will only be allowed from other computers within that organization's local network. While these access restrictions are possible, there is no mechanism built into the World Wide Web which allows publishers to restrict access to adults alone, or to keep minors from accessing the publishers site.

9. Searching the Web. A variety of systems have developed which allow users of the Web to search particular information among all of the public sites that are part of the Web. Services such as Yahoo, Magellan, Altavista, Webcrawler, and Lycos are all services known as "search engines" which allow users to search for Web sites that contain certain categories of information, or to search for key words. For example, a Web user looking for the text of Supreme Court opinions would type the words "Supreme Court" into a search engine, and then be presented with a list of World Wide Web sites that contain Supreme Court information. This list would actually be a series of links to those sites. Having searched out a number of sites which might contain the desired information, the user would then follow each link, browsing through the information on each site, until the desired material is found. For many content providers on the Web, the ability to be found be these search engines is very important.

DESIGN AND ARCHITECTURE OF THE WORLD WIDE WEB

9. Common standards. The Web links together disparate information on an ever growing number of Internet-linked computers by setting common information storage formats (HTML) and a common language for the exchange of Web documents (HTTP). Though the information itself may be in many different formats, and stored on computers which are not otherwise compatible, the basic Web standards provide a basic set of standards which allow communication and exchange of information. Despite that fact that numerous types of computers are used on the web, and the fact that many of these machines are otherwise incompatible, those who "publish" information on the Web are able to communicate with those who seek to access information with little difficulty because of these basic technical standards.

10. A distributed system with no centralized control. Running on tens of thousands of individual computers on the Internet, the Web is what is known as a distributed system. The Web was designed so that organizations with computers containing information can become part of the Web simply by attaching their computers to the Internet and running appropriate World Wide Web software. No single organization controls any membership in the Web, nor is there any centralized point from which individual web sites or services can be blocked from the Web. From a user's perspective, it may appear to be a single, integrated system, but in reality it has no centralized control point.

11. Contrast to closed databases. The Web's open, distributed, decentralized nature stands in sharp contrast to most information systems that have come before it. Private information services such as Westlaw, Lexis/Nexis, Dialog, etc. have contained large storehouses of knowledge, and, can be accessed from the Internet with the appropriate passwords and access software. However, these databases are not linked together into a single whole, as is the World Wide Web.

12. Success of Web in research, education, and political activities. It is my observation that the World Wide Web has become so popular because of its open, distributed, and easy-to-use nature. Rather than requiring those who seek information to purchase new software or hardware, and to learn a new kind of system for each new database of information they seek to access, the Web environment makes it easy for users to jump from one set of information to another. By the same token, the open nature of the Web makes it easy for publishers to reach their intended audiences without having to know in advance what kind of computer each potential reader has, and what kind of software they will be using.

MEANS FOR PROTECTING CHILDREN FROM INAPPROPRIATE MATERIAL AND AVOIDING UNWANTED MATERIAL

13. World Wide Web community sees the need to enable parents to protect children. With the rapid growth of the Internet, the increasing popularity of the Web, and the existence of material online that may be inappropriate for children, the World Wide Web community saw the need to build systems that enable parents to control the material which comes into their homes and may be accessible to their children. The World Wide Web Consortium launched the PICS ("Platform for Internet Content Selection") program in order to develop technical standards that would support parents' ability to filter and screen material that their children see on the Web. Given the nature of the Web, PICS developers determined that the most effective point of control over the flow of content to children is at the user end of the information chain, rather than at the content provider end. User control, as implemented through the PICS standards, gives parents the means to select which content is appropriate for their own children, and which content should be blocked.

14. User control is more effective than information provider control because Web site operators who publish information have no ability to verify the age, or in many cases even the identity, of those who access the publisher's Web site.

15. No Age Verification Standards. At present, I am not aware of any methods in the technical standards that make up the World Wide Web which would enable a Web site operator or publisher to establish the age of a user attempting to access a Web site. Establishing age through credit card verification is burdensome for all Web site operators and not practical for those Web sites which do not otherwise have a commercial relationship with their users. I believe that non-commercial Web sites would be forced to shut down if required to check the ages of their users through credit card verification. Even commercial sites will face significant burden if credit card verification is required before all user access. The cost of each verification by a credit card clearinghouse is , I understand, between $1 and $2. Sites which have thousands or millions of 'hits' per day, will certainly face significant cost if such age verification is required.

16. The web was designed with a maximimum target time to follow a link of one tenth of a second. Response times greater than this have been shown to reduce human effectiveness in solving problems when using systems of linked information. Practical web document retrieval times are currently between that and a few seconds. When rating the usability of the web, users have in a recent survey indicated that speed of access is their foremost concern.

17. The Web is a international system: an example of one of its many uses is the provision of health information to developing countries. Any system of adult verification would need to work efficiently internationally.

I declare under penalty of perjury that the foregoing is true and correct.

Timothy J Berners-Lee

Date: 28 February 1996

----------------------------------------------------------------------------

http://www.w3.org/pub/WWW/People/Berners-Lee-Bio.html/FAQ.html ----------------------------------------------------------------------------

Press FAQ

I feel that after a while if I answer the same questions again, I will start answering rather mechanically, and will forget important steps, and after a while it won't make sense. So I have put a few answers from my outgoing mail in this list to save everyone time. But this list is (c) TBL so don't quote without permission. Thanks.

W3C and standards

Q: What role does the W3c play in setting standards?

A:W3C's mission is to relaize the full potential of the web, by bringing its members and others together in a neutral forum.

The W3C has to move rapidly (time measured in "web years" = 2.6 months) so it cannot afford to have a traditional Standards process. What has happened to date has been that W3C has, by providing a neutral forum and facilitation, and also with the help of its technically astute staff, got a consenses among the developers about a way to go. Then, this has been all that has been needed: once a common specification has been prepared and a general consensus among the experts is seen, companies have been running with that ball. The specifications have become de facto standards. This has happened with for example HTML TABLS, and PICS. Now

in fact we have decided to start using not a full standards process, but a process of formal review by the W3C membership, in order to draw attention to specifications, and to cement their status a little. After review by members, the specifcations will be known as W3

(See process of review)

Q:What do you make of the branding attempt of companies, by putting little icons on their home pages saying, "best when viewed with Microsoft Explorer, or Navigator?"

This comes from an anxiouness to use the latest proprietoy features which have not been agreed by all companies. It is done either by those who have an interest in pushing a particular company, or it is done by those who are anxious to take the community back to the dark ages of computing when a floppy from a PC wouldn't read on a Mac, and a Wordstar document wouldn't read in Word Perfect, or an EBCDIC file wouldn't read on an ASCII machine. It's fine for individuals whose work is going to be transient and who aren't woriied about being read by anyone.

However, corporate IT strategists should think very carefully about commiting to the use of features which will bind them into the control of any one company. The web has exploded because it is open. It has developed so rapidly because the creative forces of thousands of companies are building on the same platform. Binding oneself to one compoany means one is limiting one's future to the innovations that one company can provide.

Q: What role do standards play in today's hyper competitive, and fast-changing marketplace?

A: Common sppecifications are essential. This competition, which is a great force towward innovation, would not be happening if it were not building on a base of HTTP, URL and HTML standards. These forces are strong. They are the forces which, by their threat to tear the web apart into fragmented incopmpatible pieces, force companies toward common specifications.

Q:Is it overly ambitious to think standards can be set and adhered to? Are they a relic of a kinder, gentler era?

A: Do you think that incompatability, the imposisbility of transefering information between different machines, companies, operating systems, applications, was "kinder, gentler"? It was a harsh, frustrating era. The Web has brought a kindness and gentlness fro users, a confidence in technology which is a balm for IT depatments everywhere. It has bought new hope. As a result, graet things are happening very fast. So this is a faster, more exciting era.

Companies know that it only interesting to compete over one feature until everyone can do it. After that, that feature becomes part of the base, and everyone wants to do it in one, standard, way. The smart companies are competing on the implementations: the many other aspects such as functionality, speed, ease of use and support which differentiate products. June 96

Machinery

Q: What sort of computer do you use?

A: At work, an HP712/80 running NeXTStep with dual colour screens acting as one big screen. At home or on the road, I use a ThinkPad 760CD with 800x600 active matrix screen.

Everywhere, I use a PSION 3a for notes and agenda and phone numbers and if I could get a radio PPP connection on that I'd love to use it for email and web access.

Spelling of WWW

Q: How in fact do you spell World Wide Web?

A: It should be spelled as three seperate words, so that its acronym is three seperate "W"s. There are no hyphens. Yes, I know that it has in some places been spelled with a hypehn but the official way is without. Yes, I know that "worldwide" is a word in the dictionary, but World Wide Web is three words.

Often, WWW is written and read as W3, which is quicker to say. In particular, the World Wide Web consortium is W3C, neverWWWC.

On collaboration and automatabilty, Sept 95

The web today is a medium for communication between people, using computers as a largely invisible part of the infrastructure. One of the long-term goals of the consortium is "Automatability", the ability for computers to make some sense of the information and so help us in our task. It has been the goal of mankind for so long that machines should help us in more useful ways than they do at present, help us solve some of those human problems. Maybe this is one of the many ideas (like hypertext) which the web's great scale will allow to work where it did not achieve critical mass on a small scale before. So there are groups looking at a web of knowledge representation. It could be that some scientific field will be the first to be sufficiently disciplined to input its data not just as cool hypertext, but in a machine-readable form, allowing programs to wander the globe analysing and surmising.

The W3 Consortium started to address this goal with its recent workshop on Collaboration on the Web. The ability of machines to process data on the web for scientific purposes such as checking a scientist's private experimental data against public databases, require databases to be available not only in a raw machine-readable form, but also labelled in a machine readable way as to what they are.

The knowledge engineering field has to learn how to be global, and the web has to learn knowledge engineering, but in the end this might be a way in which again the scientific field leads the world into something very powerful, and a new paradigm shift.

March 95

Q: How did you come to arrive at the idea of WWW? I arrived at the web because the "Enquire" (E not I) program -- short for Enquire Within Upon Everything, named after a ?victorian? book of that name full of all sorts of useful advice about anything -- was something I found really useful for keeping track of all the random associations one comes across in Real Life and brains are supposed to be so good at remembering but sometimes mine wouldn't. It was very simple but could track those associations which would sometimes develop into structure as ideas became connected, and different projects become involved with each other.

I was using Enquire myself, and realised that (a) it would fulfill my obligation to the world to describe what I was doing if everyone else could get at the data, and (b) it would make it possible for me to check out the other projects in the lab which I could chose to use or not if only their designers had used Enquire and I had access.

Now, the first version of enquire allowed you to make links between files (on one file system) just as easily as between nodes within one file. (It stored many nodes in one database file). The second version, a port from NORD to PC then VMS, would not allow external links.

This proved to be a debilitating problem. To be constrained into database enclosures was too boring, not powerful enough. The whole point about hypertext was that (unlike most project management and documentation systems) it could model a changing morass of relationships which characterized most real environments I knew (and certainly CERN). Only allowing links within distinct boxes killed that. One had to be able to jump from software documentation to a list of people to a phone book to an organizational chart to whatever .. as you can with the web today. The test rule was that if I persuaded two other projects to use it, and they described their systems with it, and then later at any point a module,person etc in one project used something from another project, that you would be able to add the link and the two webs would become one with no global change -- no "flag day" involving the merging of two databases into one, no scaling problems as the number of connected things grew. Hence the W3 design.

The same lesson applies now to the webs of trust we will be building with linked certificates.

So the requirement was for "external" links to be just as easy to make as "internal" links. Which meant that links had to be one way.

(There was also a requirement that the web should be really easy to add links to, but though that was true in the prototype we are only now starting to see betas of good commercial web editors now.)

June 94

This was an interview in Internet world by Kris Herbst. His questions are his (c) of course. Slightly edited.

IW: What did you think of the first WWW'94 conference?

TBL: Great! It had a unique atmosphere, as there were people from all walks of life brought together by their excitement about the Web. As it was the first one, they hadn't met before, so it was a bit unique. It was very oversubscribed, as you know, so the next one will have to be a lot bigger.

IW: Can you tell us something about your early life, and how those experiences might have influenced you later as you developed WWW?

TBL: That's the first time I've been asked to trace WWW history back that far! I was born in London, England. My parents met while developing the Ferranti Mark I, the first computer sold commercially, and I grew up playing with five-hole paper tape and building computers out of cardboard boxes. Could that have been an influence? Later on I studied physics as a kind of compromise between mathematics and engineering. As it turned out, it wasn't that compromise, but it was something special in its own right. Nevertheless, afterward I went straight into the IT industry where more things seemed to be happening. So I can't really call myself a physicist. But physicists spend a lot of time trying to relate macroscopic behavior of systems to microscopic laws, and that is the essence of the design of scalable systems. So physics was probably an influence.

IW: What led you to conceive the WWW?

TBL: I dabbled with a number of programs representing information in a brain-like way. Some of the earlier programs were too abstract and led to hopelessly undebuggable tangles. One more practical program was a hypertext notebook I made for my own personal use when I arrived at CERN. I found I needed it just to keep track of the -- how shall I say -- flexible? creative? -- way new parts of the system, people and modules were added on and connected together. The project I'd worked on just before starting WWW was a real-time remote procedure call, so that gave me some networking background. Image Computer Systems did a lot of work with text processing and communications -- I was a director before coming to CERN.

IW: What elements in your background or character helped you to conceive WWW as a way to keep track of what was happening at CERN?

TBL: Elements of character?! Anyone who has lost track of time when using a computer knows the propensity to dream, the urge to make dreams come true, and the tendency to miss lunch. The former two probably helped. I think they are called Attention Deficiency Disorder now. ;-)

IW: Do you have some favorite Web sites for browsing?

TBL: (Sigh) I wish I did, but I hardly spend any time browsing. Historically, I appreciate the people who were first and showed others how things could be -- Franz Hoesel's Vatican Library, of course, Steve Putz's map server, lots more.

IW: How do you feel about the fact that WWW promises to generate large amounts of money for some persons?

TBL: If it's good, people will want to buy it, and money is they way they vote on what they want. I believe that system is the best one we have, so if it's right, sure people are going to make money. People will make money building software, selling information, and more importantly doing all kinds of "real" business, which happens to work much better because the Web is there to make their work easier. The web is like paper. It doesn't constrain what you use it for: you have to be able to use it for all of the information flow of normal life. My priority is to see it develop and evolve in a way which will hold us in good stead for a long future. If I, and CERN, hadn't had that attitude, there probably wouldn't be a web now.

Now, if someone tries to monopolize the Web, for example pushes proprietary variations on network protocols, then that would make me unhappy.

---------------------------------------------------------------------------- TimBL

---------------------------------------------------------------------------- http://www.webweek.com/95Aug/news/berners.html ---------------------------------------------------------------------------- NEWS

T O M O R R O W ' S W E B

Berners-Lee Speaks on Web's Future

Inadequacy of existing measures is prompting new companies to come forward with tracking ideas of their own

By Russell Shaw

In a special public appearance at the Internet Society's annual conference held in June in Honolulu, World-Wide Web founder Tim Berners-Lee endorsed an even more user-friendly paradigm for the entity he invented in 1989 at the European Particle Physics Laboratory in Switzerland.

Speaking before an enthralled crowd of more than 1,500 in the main ballroom of the Sheraton Waikiki, Berners-Lee outlined several goals for the Web. These include greater interoperability, automatability, extensibility, efficiency, scalability and security.

He cast the interoperability issue in terms of open standards available across different browser platforms.

"Most of us believe there's a divine right of the consumer to be able to choose where he buys his software from that runs on the PC. We want to keep that independent of the market, and we do that with interoperability, with open standards. At the same time we want to keep the capacity for change," he said.

Berners-Lee feels that the automatability and expandability issues are linked, and says that the Web should be enhanced with new protocols that will make it possible to harness the vast processing power of computers, rather than to just communicate through them.

"At the moment," he said, "the Web is very easy for people to use, but very difficult for computers to use. We would like to see the Web be something computers can help us with as well, but now, all computers can do is provide us information on a stream."

The scalability issue would come in by means of new protocols that will be able to simplify a transmitted image when it is accessed via a slower modem over which graphic file reception would take a long time.

"For the Web to grow, protocols will have to be enhanced, and new ones added," he said. "There are things we'll be able to do by introducing new data formats. The content types which are just the content rather than say 'this is animation in GIF with 256 colors.' This would allow people with slower lines to more carefully use the bandwidth."

To enrich the graphical possibilities of fonts in a way not possible today, Berners-Lee also backs the concepts of style languages for HTML. "Documents should be able to specify 36 point in blue background, for example. We need a separate language to define that," he noted.

This ties in with one of Berners-Lee's broader goals -- upgrading the Web to handle object-oriented computing. He'd like to see a language that will, for example, be able to describe the features on someone's face. "We would like to see documents become objects," he explained. "If you stand by and describe what kind of an object that is, there needs to be a language describing what properties it has."

The Web founder also would like to see some video whiteboard capability within the Web that would make it possible to change part of a Web page while it is running. He cited a typical exchange between genetics labs as one example. "If you have a piece of DNA on one end, and a DNA synthesizer on the other, you might want to synthesize the DNA on-screen. For that, you'd want to go to a very high level of functionality, and very specific content languages -- rather than negotiation and extension," he said.

Berners-Lee added that he would like to see a "Web of trust" climate, in which thorny issues like intellectual property rights and voluntary self-rating of sites for possible objectionable content, are a matter of course.

To Berners-Lee, solving these issues within the framework of increased computer processing power will be the key to the Web reaching its full potential. "As the Web continues to grow, you'll need all the power and ability of the Web to access and negotiate formats," he concluded.

Reprinted from Web Week, Volume 1, Issue 4, August 1995 © Mecklermedia Corp. All rights reserved. Keywords: standards Date: 19950801 html design

[http://www.iworld.com] ----------------------------------------------------------------------------


[ Back to Index of Web Works | Top ]