Built for Speed

TIMES STAFF WRITER

Coming soon to a computer near you: a souped-up Internet that can prioritize transmissions, pool the resources of supercomputers spread around the world and allow people in different locations to attend meetings in 3-D, computer-generated “rooms.”

These are just some of the capabilities researchers are creating for the next generation of the Internet. The advances will be available first in universities and government agencies, such as NASA. But within five or 10 years, the benefits are expected to filter into the global computer network used by everyday customers.

Nearly three-quarters of U.S. households haven’t connected to the Internet, but computer scientists at universities and government research labs around the country are already planning the sequel to today’s network. Some are focused on building the infrastructure and protocols that will be the foundation of the advanced network, while others are creating applications that will make the most of the network’s capabilities.

The nationwide efforts will get a boost in California later this month when a consortium of universities turns on the California Research and Education Network, or CalREN-2, which will carry data through the state more than 100 times faster than today’s Internet. Other networks in development will be even faster. Computer scientists hope to begin rolling out a patchwork of advanced networks around the turn of the century.

“The Internet was never designed to do all the things it’s doing now,” said Kay Howell, director of the National Coordination Office for Computing, Information and Communications, which oversees the Next Generation Internet Initiative, one of several federally funded programs. In addition to boosting the capacity of future networks, Howell said, “we’re conducting research and development in revolutionary applications for basic science, crisis management, education, the environment, health care and manufacturing.”

One of the most ambitious of the various research efforts is SuperNet, a project from the Defense Advanced Research Projects Agency, which was responsible for creating the precursor to today’s Internet three decades ago. With the ability to carry data 1,000 or even 10,000 times faster than the 1.5 megabits-per-second rate that is standard with today’s T1 lines, SuperNet will surely live up to its name, said Bert Hui, assistant director of DARPA’s information technology office.
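
In concrete terms, those multipliers translate into gigabit-class speeds. Here is the back-of-the-envelope arithmetic, using only the 1.5-megabit T1 baseline and the 1,000-to-10,000-fold figures cited above:

```python
# Back-of-the-envelope arithmetic for the speedups quoted above. The
# 1.5 Mbps T1 baseline and the 1,000x-10,000x multipliers come from the
# article; the rest is plain unit conversion.
T1_MBPS = 1.5  # megabits per second

for multiplier in (1_000, 10_000):
    gbps = T1_MBPS * multiplier / 1_000  # Mbps -> Gbps
    print(f"{multiplier:>6,}x a T1 line = {gbps:,.1f} Gbps")

# Output:
#  1,000x a T1 line = 1.5 Gbps
# 10,000x a T1 line = 15.0 Gbps
```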

But SuperNet won’t just be a fatter pipe. Much of its speed will come from routing packets of data to their destinations more efficiently than today’s Internet does. SuperNet will have fewer layers of protocols--the sets of rules that govern the way computers communicate and exchange data--to contend with, and that should help packets reach their destinations more quickly, Hui said. The SuperNet project is also working to improve the interfaces between backbone networks and the smaller networks that feed into them.

Other research efforts, like the Next Generation Internet Initiative for government agencies and its educational counterpart, Internet2, are also devoting resources to building higher-capacity infrastructure.

Members of the University Corp. for Advanced Internet Development are working on the Internet2 project by building their own high-capacity regional network hubs called “gigapops.” California’s CalREN-2 will include two gigapops--one in the Los Angeles basin and another in the Bay Area--that can transfer data fast enough to download an entire 30-volume encyclopedia in less than two seconds.
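
To get a sense of the throughput such a claim implies, consider a rough sketch in which the encyclopedia is assumed, purely for illustration, to occupy about one gigabyte; only the two-second figure comes from the description above:

```python
# Rough throughput implied by the encyclopedia claim above.
# ASSUMPTION: a 30-volume encyclopedia is treated here as roughly 1 GB of
# data; the "less than two seconds" figure is the one quoted in the article.
SIZE_GB = 1.0           # assumed size of the encyclopedia, in gigabytes
TRANSFER_SECONDS = 2.0  # transfer time cited in the article

bits = SIZE_GB * 8e9                           # gigabytes -> bits
required_gbps = bits / TRANSFER_SECONDS / 1e9  # sustained rate needed

print(f"Moving {SIZE_GB:.0f} GB in {TRANSFER_SECONDS:.0f} seconds takes "
      f"about {required_gbps:.0f} Gbps of sustained throughput")
# -> Moving 1 GB in 2 seconds takes about 4 Gbps of sustained throughput
```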

The gigapops--which number 24 so far--will connect to each other via the Abilene Network, an advanced backbone network being built by Qwest Communications International, Northern Telecom and Cisco Systems. Some universities will also use the National Science Foundation’s very-high-speed Backbone Network Service, or vBNS, for their connections.

One of the most important features of these advanced networks is their ability to classify data in different priority groups, with some applications taking precedence over others. A multitiered priority system would resemble the snail-mail world of the U.S. Postal Service, where customers can pay extra for overnight delivery or receive a discount if they’re willing to send their parcels fourth-class, said David Wasley, director of projects for the Corp. for Education Network Initiatives in California, or CENIC.

*

Today, all data traveling on the Internet receive the same priority. But some transmissions--such as e-mail dispatches--can tolerate a short delay, whereas others--like videoconferencing sessions--must arrive quickly and steadily in order to work properly. A tiered priority system would ensure that applications that demand lots of network resources would work smoothly, even if many people are using the Net at the same time.
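
The scheduling idea behind such a system can be sketched in a few lines. The traffic classes and priority values below are invented for illustration and are not drawn from any of the projects described:

```python
# Toy strict-priority packet scheduler: packets from higher-priority
# classes are always transmitted before lower-priority ones. Class names
# and priority values are illustrative only.
import heapq
from itertools import count

PRIORITY = {"videoconference": 0, "web": 1, "email": 2}  # 0 = highest
_arrival = count()  # preserves arrival order within a class

queue = []

def enqueue(traffic_class, packet):
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_arrival), packet))

def transmit_next():
    _, _, packet = heapq.heappop(queue)
    return packet

enqueue("email", "weekly newsletter")
enqueue("videoconference", "video frame 1")
enqueue("web", "page request")

print(transmit_next())  # -> video frame 1 (delay-sensitive traffic goes first)
```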

That will make possible a range of applications such as distance learning and videoconferencing. It could even be used for a futuristic version of virtual reality called tele-immersion, in which two or more people share a computer-generated space even if they are actually thousands of miles apart, said Greg Wood, director of communications for Internet2.

Tiered classes of service would also allow researchers to use highly specialized--often remote--scientific instruments from the comfort of their own offices. For example, scientists who want to use a cutting-edge electron microscope in Osaka, Japan, must now fly across the Pacific Ocean, test their samples, then fly home and analyze the data.

“What would obviously be more convenient is if you could FedEx your sample to Japan, have someone put it in the microscope, and then be sitting in your lab in front of a computer in a virtual reality space and get a very good picture almost instantaneously,” said Carl Kesselman, a project leader at USC’s Information Sciences Institute in Marina del Rey.

Kesselman and Ian Foster of Argonne National Laboratory in Illinois are trying to do that with Globus, a project to develop basic software infrastructure that can link computing and information resources at many locations.

“We want to do to computers what the power grid does for electricity,” said Kesselman, who is also a visiting associate in Caltech’s computer science department. “We want you to be able to sit down, plug into the network and have access to the computing resources that you need.”

That doesn’t involve laying any new wires or building new computers. But it does require a new software infrastructure so that existing computers can communicate with one another in such a way that they can share resources.
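
A drastically simplified sketch of that “plug in and compute” idea follows. It is not the Globus software itself; the site names, capacities and scheduling rule are invented, and real grid software also handles authentication, data movement and failure recovery:

```python
# Toy "computational grid" broker: send each job to whichever registered
# site has the most free processors. Everything here is invented for
# illustration.
sites = {
    "campus-cluster": {"free_cpus": 16},
    "national-lab":   {"free_cpus": 512},
    "partner-univ":   {"free_cpus": 64},
}

def submit(job_name, cpus_needed):
    candidates = {name: s for name, s in sites.items()
                  if s["free_cpus"] >= cpus_needed}
    if not candidates:
        raise RuntimeError("no site has enough free processors")
    best = max(candidates, key=lambda name: candidates[name]["free_cpus"])
    sites[best]["free_cpus"] -= cpus_needed
    return f"{job_name} -> {best} ({cpus_needed} CPUs)"

print(submit("climate-model", 128))  # -> climate-model -> national-lab (128 CPUs)
```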

Kesselman and his colleagues performed an important test earlier this year. They used 13 supercomputers in nine locations to conduct a military training exercise with 100,000 simulated tanks, Jeeps and other vehicles moving on a complex 3-D surface.

Before many of these newfangled applications can reach their full potential, engineers must find a way to classify data packets according to their priority. They must also devise a method for allocating network resources so that all classes of packets will be served appropriately.

At UCLA’s Internet Research Laboratory, computer scientists are working on a solution to that second problem. They believe it is best to use a dual-layer system that will allow data moving between Internet domains to be considered separately from data moving within a single domain. Otherwise there will be too many packets for a network to keep track of when it divides up its resources, said Lixia Zhang, the associate professor of computer science who runs the lab.
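
One way to picture that two-layer split is to compare how much bookkeeping an interior router and a border router would each have to do. The flows and numbers below are invented to illustrate flow aggregation in general, not UCLA’s actual design:

```python
# Why aggregating at domain borders helps: an interior router may track
# each individual flow, but a border router needs only one entry per
# (destination domain, traffic class). All flows below are invented.
from collections import defaultdict

flows = [
    # (source host, destination domain, traffic class, kbps reserved)
    ("host-a.ucla.edu", "berkeley.edu", "video", 384),
    ("host-b.ucla.edu", "berkeley.edu", "video", 384),
    ("host-c.ucla.edu", "berkeley.edu", "audio", 64),
    ("host-d.ucla.edu", "caltech.edu",  "video", 384),
]

# Interior view: one reservation entry per flow.
interior_state = {(src, dom, cls): kbps for src, dom, cls, kbps in flows}

# Border view: flows bound for the same domain and class are merged.
border_state = defaultdict(int)
for _, dom, cls, kbps in flows:
    border_state[(dom, cls)] += kbps

print(len(interior_state), "per-flow entries inside the domain")    # -> 4
print(len(border_state), "aggregated entries at the domain border")  # -> 3
```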

Zhang and her colleagues are developing a protocol called RSVP, short for Resource Reservation Protocol, to help computer systems manage their resources. Once an application spells out what it needs, RSVP acts like a guide to set up the system accordingly, she said.
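
In spirit, a reservation protocol lets an application state its needs and lets every hop along the path decide whether it can honor them. The sketch below shows only that admission-control step; the link capacities and the function interface are invented, and real RSVP works by exchanging messages hop by hop rather than through a single function call:

```python
# Toy admission control in the spirit of a reservation protocol: a flow is
# admitted only if every router on its path still has enough unreserved
# capacity. Capacities and the path are invented for illustration.
path = [
    {"name": "campus-router", "capacity_mbps": 100,  "reserved_mbps": 60},
    {"name": "gigapop",       "capacity_mbps": 622,  "reserved_mbps": 300},
    {"name": "backbone",      "capacity_mbps": 2400, "reserved_mbps": 900},
]

def reserve(path, needed_mbps):
    # Check every hop first; commit only if all of them can carry the flow.
    if any(hop["capacity_mbps"] - hop["reserved_mbps"] < needed_mbps for hop in path):
        return False
    for hop in path:
        hop["reserved_mbps"] += needed_mbps
    return True

print(reserve(path, 10))  # True: every hop has at least 10 Mbps to spare
print(reserve(path, 50))  # False: the campus router is now too full
```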

Computer scientists are also looking for ways to make future generations of the Internet smarter. SuperNet researchers, for example, are working to build smart routers and switches that can predict likely episodes of network congestion and reroute traffic to head off a problem, said Hui of DARPA.
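
A toy version of that congestion-prediction idea might look like the following; the trend estimate, threshold and route names are placeholders, and real routers would rely on far richer traffic models:

```python
# Toy congestion predictor: extrapolate a link's recent utilization with a
# naive linear trend and switch to a backup route if the forecast crosses
# a threshold. Samples, threshold and route names are invented.
def predict_next(samples):
    # Linear extrapolation from the last two utilization readings.
    return samples[-1] + (samples[-1] - samples[-2])

def choose_route(utilization_history, threshold=0.9):
    forecast = predict_next(utilization_history)
    return "backup path" if forecast > threshold else "primary path"

print(choose_route([0.60, 0.70, 0.82]))  # forecast 0.94 -> "backup path"
print(choose_route([0.40, 0.42, 0.41]))  # forecast 0.40 -> "primary path"
```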

UCLA’s Zhang is trying to develop a better system for copying popular Web pages and re-posting them around the Internet to minimize the distance that data must travel to get to end users.

*

When Mars Pathfinder sent its first pictures back to Earth, requests flooded into the Web site at the Jet Propulsion Laboratory in Pasadena. So, NASA set up a series of handmade mirror sites to handle the huge number of requests. Zhang wants to find a way to automate that process. Otherwise, she said, “this information superhighway will be jammed.”

Zhang’s solution, called adaptive Web caching, would automatically make copies of popular Web pages, and the more popular they are, the more copies there will be. Each repository of these copies, called a cache, will communicate with the others around it. Then, when a request comes in, it can be routed to the nearest location that contains a copy of the desired Web page, Zhang said.
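
A bare-bones sketch of the idea appears below. The page names, cache locations, distances and replication rule are all invented, and the cache-to-cache communication described above is reduced here to a single shared table:

```python
# Toy adaptive caching: the more requests a page receives, the more caches
# hold a copy, and each request is answered by the nearest cache that has
# one. Every name and number here is invented for illustration.
caches = {"pasadena": {}, "chicago": {}, "tokyo": {}}        # cache -> pages held
distance = {"pasadena": 10, "chicago": 2000, "tokyo": 9000}  # miles from one client
hits = {}

def request(page):
    hits[page] = hits.get(page, 0) + 1
    # Arbitrary rule: replicate to one more cache for every 100 requests.
    copies_wanted = min(len(caches), 1 + hits[page] // 100)
    for name in sorted(caches, key=distance.get)[:copies_wanted]:
        caches[name][page] = f"copy of {page}"
    holders = [name for name in caches if page in caches[name]]
    nearest = min(holders, key=distance.get)
    return f"{page} served from the {nearest} cache"

for _ in range(250):
    request("mars-pathfinder-photos")
print(request("mars-pathfinder-photos"))
# -> mars-pathfinder-photos served from the pasadena cache
# (by now the page has also been copied to the chicago and tokyo caches)
```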

Taken together, these and other improvements are sure to make the sequel to the Internet a much more useful computer network than today’s. But for those working in the trenches, the future is already here.

“I don’t see a division between the current generation and the next generation of the Internet,” Zhang said. “It just keeps changing and growing.”

Times staff writer Karen Kaplan can be reached at [email protected].
