Looking to the Year 2000:
Alternatives in Campus Data Networking

Noam H. Arzt (arzt@isc.upenn.edu)
Daniel A. Updegrove (dan_updegrove@yale.edu)
University of Pennsylvania
Philadelphia, PA 19104

Presented at CAUSE95
New Orleans, LA
November 29, 1995

Abstract

At CAUSE94, a University of Pennsylvania presentation, "Designing and Implementing a Network Architecture Before It Becomes Obsolete," focused on the methodology for designing a network architecture on a decentralized, diverse campus. This presentation focuses more specifically on the emerging vision of the University's campus data communications network and a set of structured architectural alternatives under consideration. The World Wide Web remains the primary vehicle for disseminating information about plans and assumptions to the campus and the world.

The authors acknowledge the assistance and long-standing collaboration of Penn colleague Ira Winston in the development of this paper and the concepts underlying it.

Introduction

Many commentators have dubbed 1995 "the year of the Internet." Studies point to exponential growth in networks and individuals connected, Web sites and other resources accessible, traffic carried on institutional and wide-area backbones, stock prices of Internet-related companies, number of Internet (paper) millionaires, and articles about the Internet phenomenon. Less discussed, but more critical, is an understanding of strategies for accommodating (or at least coping with) this exponential growth.

At the University of Pennsylvania during the fall semester, we have been busy coping with such insurmountable opportunities as:

Penn is not alone, of course, in facing the consequences of the growth in demand for Internet access and network support. In fact, 40 university representatives meeting in Keystone, Colorado in October reached consensus on six key network strategy issues that require attention on all our campuses (and, ideally, efforts toward cooperative solutions). These issues are:

All six of these issues are on the agenda of Penn's Network Architecture Task Force. This paper focuses on the technical infrastructure domain.

About the University of Pennsylvania

Penn is a private research university founded in Philadelphia in 1740. Enrollment numbers 22,000, with 10,000 undergraduates in four schools and 12,000 graduate and professional students in twelve schools. Roughly 7,000 students live in campus residences; nearly all others live within walking distance. The University, which shares a compact, attractive campus with a 750-bed teaching hospital and large clinical practices in medicine, dental medicine, and veterinary medicine, has an annual operating budget of $1.9 billion. The 23,000 faculty and staff reside in a three-state region (Pennsylvania, New Jersey, and Delaware); comparatively few live within walking distance.

As one of the originators of responsibility center management, Penn has promoted autonomy, investment, and expertise in Schools and other units. Accordingly, all academic computing is managed outside the central unit, Information Systems and Computing. ISC is responsible for most core University administrative systems development and operations, data administration, a central help function, and data and video networks. (Voice services report to a different Vice President; Hospital IS, data, video, and voice services are separate.)

The Network Architecture Task Force

As detailed at CAUSE94, a Network Architecture Task Force was charged in spring 1994 to assess the current state of data, voice, and video networking at Penn, and to make recommendations for changes to these architectures during a three- to five-year planning cycle. Of the ten members of the NATF, the majority are drawn from outside ISC, including the director of Telecommunications, the director of library systems, and the director of computing in the School of Engineering and Applied Science, who serves as co-chair.

The NATF methodology, derived from work of Project Cornerstone, Penn's aggressive initiative to re-engineer business processes and deploy modern, client-server administrative systems (described at CAUSE93 and CAUSE94), is depicted below.

The Technical Architecture is a blueprint for how future technology acquisitions and deployment will take place. It consists of standards, investment decisions, and product selections for hardware, software, and communications. The Technical Architecture is developed first and foremost from university direction and business requirements. In addition, a set of principles is applied rigorously to ensure that the Technical Architecture is consistent with Penn's information technology beliefs. The current (de facto) technical architecture is taken into consideration, as well as relevant industry and technology trends.

For the discussion that follows, readers will find it useful to have access to detailed diagrams of current and alternative architectures. These diagrams are available on the Web at http://www.upenn.edu/computing/group/natf/.

Three Basic Architectures

Three basic architectural alternatives have been defined along a continuum from least aggressive to most aggressive with respect to the reliability, performance, and functionality they enable. These three alternatives represent a migration path that can be followed from one to the next if Penn chooses. As markets and products develop, Penn may skip one or more alternatives in the "pipeline," or implement other variations that emerge.

It is important to understand that not all elements of these architectures are different. Common elements include the following:

Alternative A: Pervasive Ethernet Switches/Selective 100 Mb

Alternative A is the closest to PennNet's current condition. It preserves our current investment in the technology and operations of a central routing core, installs Ethernet switches in all buildings, and continues EIA/TIA 568 as the wiring standard, but increases speeds within and between buildings beyond 10 Mb/sec only on a case-by-case basis.
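To make the effect of building-level Ethernet switching concrete, the following sketch (Python, with purely hypothetical host counts rather than PennNet measurements) compares the bandwidth available to each host on a shared 10 Mb/sec segment with the bandwidth available on a dedicated switched port. The building's uplink to the backbone still caps aggregate throughput, which is why backbone speed increases remain case by case.

    # Illustrative sketch only: shared hub vs. Ethernet switch inside a building.
    # Host counts are hypothetical and are not PennNet measurements.

    SEGMENT_MBPS = 10  # classic 10 Mb/sec Ethernet

    def per_host_mbps(hosts: int, switched: bool) -> float:
        """Rough per-host bandwidth: a switch gives every port the full 10 Mb/sec,
        while hosts on a shared segment divide the same 10 Mb/sec among them."""
        return SEGMENT_MBPS if switched else SEGMENT_MBPS / hosts

    for hosts in (10, 50, 100):  # hypothetical numbers of hosts per building
        print(f"{hosts:3d} hosts: shared {per_host_mbps(hosts, False):5.2f} Mb/s each, "
              f"switched {per_host_mbps(hosts, True):5.2f} Mb/s each")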

Major features include:

Alternative B: Fully Switched Core

This alternative represents a transition point between Alternative A and Alternative C. The only changes are in the central routing core (the "inter-building backbone"). Rather than a collapsed backbone of routers, the central hub now uses an ATM switch coupled to a "super router" to route between the subnets. A series of smaller routing switches, still located in the central core, begins to share a distributed routing load. While management and operations continue to benefit from a single, consolidated location for this equipment, Penn moves one step closer to being able to distribute its routing load to multiple locations when necessary. The nature of the routers and switches at the center changes substantially, both in terms of cost and the relative functionality of each device (switching versus routing).

Since ATM switching is now a feature, some direct ATM connections into the production network become possible, either to support advanced projects now in production or to connect servers that require the added bandwidth.
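As a rough way to picture the shift from a single collapsed routing core toward a shared, distributed routing load, the sketch below (Python) divides an invented set of building subnets among a few routing switches; every figure is hypothetical and serves only to show how per-device load falls as routing is spread out.

    # Illustrative sketch only: routed load on one collapsed core vs. several
    # routing switches behind an ATM switch. All figures are invented.

    SUBNETS = 120               # hypothetical number of building subnets
    MBPS_PER_SUBNET = 3.0       # hypothetical average routed traffic per subnet
    ROUTING_SWITCHES = 4        # hypothetical number of routing switches

    total_mbps = SUBNETS * MBPS_PER_SUBNET
    per_switch_mbps = total_mbps / ROUTING_SWITCHES

    print(f"Collapsed core (Alternative A): ~{total_mbps:.0f} Mb/s routed in one place")
    print(f"Distributed core (Alternative B): ~{per_switch_mbps:.0f} Mb/s per routing switch "
          f"across {ROUTING_SWITCHES} devices")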

Alternative C: Pervasive ATM

This alternative represents where the Task Force believes Penn should be in three to five years. Reaching it depends partly on the necessary level of investment, but even more on the development of products and standards in the marketplace that will make deployment of, or migration to, this alternative possible.

Major features include:

Three Additional Variations

Three additional architectural alternatives recognize that the marketplace may not develop in the directions we expect, and/or Penn may need to improve the performance of PennNet in advance of the availability of components to build Alternative C.

Alternative A': Pervasive 100+ Mb Backbone

In most respects this alternative is identical to Alternative A, except that all buildings must be connected to the campus backbone using a 100+ Mb/sec technology. To deliver this bandwidth to every building, the campus backbone must change: the collapsed backbone is now interconnected via an ATM switch to increase capacity at the core. Subnets are connected to central routers via shared or dedicated connections using 100+ Mb/sec technology.
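A back-of-the-envelope calculation suggests why the core must change once every building uplinks at 100+ Mb/sec. The sketch below (Python) uses a hypothetical building count and oversubscription ratio, not actual PennNet figures; even with heavy oversubscription, concurrent demand exceeds what a single shared 100 Mb/sec backbone could carry, which is what motivates an ATM-switched core.

    # Illustrative sketch only: estimated concurrent core load when every building
    # uplinks at 100 Mb/s. Building count and oversubscription are hypothetical.

    BUILDINGS = 100          # hypothetical number of buildings on the backbone
    UPLINK_MBPS = 100        # per-building uplink speed
    OVERSUBSCRIPTION = 10    # assume roughly 1 in 10 uplinks is busy at any moment

    concurrent_mbps = BUILDINGS * UPLINK_MBPS / OVERSUBSCRIPTION
    print(f"Estimated concurrent core load: ~{concurrent_mbps:.0f} Mb/s")
    print("A single shared 100 Mb/s backbone cannot carry this; a switched core "
          "whose capacity grows with its port count can.")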

Alternative AB': Distributed Routing with 100+ Mb Backbone

If the availability of the products needed to implement Alternative C becomes more distant, this alternative may provide some necessary solutions. It provides for a regionalized campus with several clusters of buildings connected together via 100+ Mb/sec technology, and fully distributed routing to each building.

Major features include:

  • Inter-building backbone: FDDI switch deployed to handle multiple FDDI rings (or other 100+ Mb/sec technology) required in this architecture. Central routing is only for outside or inter-campus connections.
  • Intra-building backbone: Clusters of buildings are connected to the backbone via 100+ Mb/sec technology (FDDI or fast Ethernet) for increased bandwidth. Ethernet or fast Ethernet switches deployed in all buildings reduce the size of the collision domain and provide a scalable building interconnection.

Alternative B': Selective ATM

This alternative allows the campus to migrate more slowly to ATM for inter-building connections.

Major features include:

Network Alternatives: Pros and Cons

Alternative A
Pros:
  • easy to implement today
  • less expensive compared to other alternatives
Cons:
  • perpetuates use of today's technologies

Alternative B
Pros:
  • starts down the road to distributed routing
  • may reduce per-port costs of central routers and increase overall bandwidth of the routing core
Cons:
  • significantly newer technologies in the routing core, which use proprietary protocols for distributed routing
  • depending on timing, ATM may still be too immature

Alternative C
Pros:
  • probably where we want to be...
Cons:
  • can't buy it today
  • presumes a lot about market directions

Alternative A'
Pros:
  • will definitely increase bandwidth to buildings today
Cons:
  • very expensive
  • perpetuates use of today's technologies

Alternative AB'
Pros:
  • will definitely increase bandwidth to buildings today
Cons:
  • very expensive
  • perpetuates use of today's technologies
  • introduces additional operations issues as routing devices are distributed to each building

Alternative B'
Pros:
  • allows for slower migration to ATM
Cons:
  • multiple generations of backbone technology difficult (and expensive) to operate and maintain

Next Steps

The University's department of Data Communications and Computing Services (DCCS), in conjunction with the Network Architecture Task Force, is currently processing the above information and carrying out the following tasks:

In addition, discussions are underway to

Conclusions

Designing and deploying a cost-effective, high-performance, campus-wide networking infrastructure is extremely challenging, given the rapid pace of technological change, user demand, and vendor reshuffling. At Penn, the challenge is multiplied by our decentralized management, budgeting, and academic computing structures. It is becoming increasingly clear to most stakeholders, however, that in networking, we must, as our founder Ben Franklin exhorted, "hang together, or we will most assuredly hang separately." The productive collaboration exemplified by the Network Architecture Task Force bodes well for Penn's networking future.

This paper, links to technical diagrams and other information presented at CAUSE95, and a link to the CAUSE94 paper are available on the World Wide Web at http://www.hln.com/noam/cause95.html.


Please address comments or questions to Dr. Noam Arzt, arzt@isc.upenn.edu, or Mr. Daniel Updegrove, dan_updegrove@yale.edu.