Looking to the Year 2000: Alternatives in Campus Data Networking
Noam H. Arzt (arzt@isc.upenn.edu)
Daniel A. Updegrove (dan_updegrove@yale.edu)
University of Pennsylvania
Philadelphia, PA 19104
Presented at
CAUSE95
New Orleans, LA
November 29, 1995
Abstract
At CAUSE94, a University of Pennsylvania presentation,
"Designing and Implementing a Network Architecture Before It Becomes
Obsolete,"
focused on the methodology for designing a network architecture on a
decentralized, diverse campus. This presentation focuses more
specifically on
the emerging vision of the University's campus data communications
network, and a set of structured architectural alternatives under
consideration. Using the World Wide Web remains the primary vehicle for
disseminating information about plans and assumptions to the campus and
the world.
The authors acknowledge the assistance and long-standing collaboration of
Penn colleague,
Ira Winston,
in the development of this paper and the
concepts underlying it.
Introduction
Many commentators have dubbed 1995 "the year of the Internet." Studies
point to exponential growth in networks and individuals connected, Web
sites and other resources accessible, traffic carried on institutional
and wide-
area backbones, stock prices of Internet-related companies, number of
Internet
(paper) millionaires, and articles about the Internet phenomenon. Less
discussed, but more critical, is an understanding of strategies for
accommodating (or at least coping with) this exponential growth.
At the University of Pennsylvania during the fall semester, we have been
busy coping with such insurmountable opportunities as:
- Unveiling of a secure Netscape front-end to mainframe-based student
record and student financial services data. Students were delighted with
the
new access to their records, until we withdrew the service after the
"secure"
Netscape encryption algorithm was cracked by two Berkeley students.
- Rollout by Penn's Wharton School of a new graphical front-end unifying
traditional network services, which dramatically increased demand
- Swamping of the central Help Desk, with the heaviest load coming from
students off-campus struggling with PPP installation and configuration on
Intel platforms
- Architecting a higher-bandwidth Internet gateway in cooperation with
PREPnet, our regional provider
- Criticism from some students that we withheld support for Windows95,
which had been released days before the start of the semester. The daily
paper
headline read, "University scorns Windows95"
- Upgrading the overloaded campus news server and modem pools
- Queries from faculty on leave, such as, "telnet response is slow from
California; what's wrong with your network?"
- Queries from staff, such as, "The Provost needs to be enabled for MOO
access in order to participate in the English Department's real-time poetry
discussions."
Penn is not alone, of course, in facing the consequences of the growth in
demand for Internet access and network support. In fact, 40 university
representatives meeting in Keystone, Colorado in October reached
consensus
on six key network strategy issues that require attention on all our
campuses
(and, ideally, efforts toward cooperative solutions). These issues are:
- Remote access. Anytime, anywhere access sounds great, but modem pools
are costly and inadequate for multimedia. What about ISDN, CATV,
wireless,
outsourcing?
- Capacity. What will the growth curve look like as Pentium and PowerPC
workstations are deployed widely, GUI Web browsers proliferate, and
desktop
audio and video mature?
- Technical infrastructure. What roles for Ethernet, fast Ethernet,
FDDI,
ATM?
- Security. No one is satisfied with reusable, plain-text passwords,
and
amateur systems administrators can't keep pace with professional
crackers.
- Network management. We all need better monitoring, diagnosis, and
trouble ticketing systems.
- Financing. Price performance is improving, but not keeping pace with
demand. Full-costing, marginal costing, and "library models" all have
advocates -- and problems.
All six of these issues are on the agenda of Penn's Network Architecture
Task
Force. This paper focuses on the technical infrastructure domain.
About the University of Pennsylvania
Penn is a private, research university
founded in Philadelphia in 1740.
Enrollment numbers 22,000, with 10,000 undergraduates in four schools and
12,000 graduate and professional students in twelve schools. Roughly
7,000
students live in
campus residences;
nearly all others live within walking distance.
The University shares a compact, attractive campus
with a 750-bed teaching
hospital, large clinical practices in medicine, dental medicine, and
veterinary
medicine, and has an annual operating budget of $1.9 billion. The 23,000
staff
and faculty reside in a three-state region (Pennsylvania, New Jersey, and
Delaware); comparatively few are within walking distance.
As one of the originators of responsibility center management, Penn has
promoted autonomy, investment, and expertise in Schools and other units.
Accordingly, all academic computing is managed outside the central unit,
Information Systems and
Computing.
ISC is responsible for most core
University administrative systems development and operations, data
administration, a central help function, and data and video networks.
(Voice services
report to a different Vice President; Hospital IS, data, video, and voice
services are separate.)
The Network Architecture Task Force
As detailed at CAUSE94, a Network Architecture Task Force
was charged in
spring 1994 to assess the current state of data, voice, and video
networking at
Penn, and to make recommendations for changes to these architectures
during a three- to five-year planning cycle. Of the ten members of the
NATF,
the majority are drawn from outside ISC, including the director of
Telecommunications, the director of library systems, and the director of
computing in the School of Engineering and Applied Science, who serves as
co-chair.
The NATF
methodology, derived from work of
Project Cornerstone,
Penn's aggressive initiative to re-engineer business processes and deploy
modern,
client-server administrative systems (described at
CAUSE93
and
CAUSE94),
is depicted below.
The Technical Architecture is a blueprint for how future technology
acquisitions and deployment will take place. It consists of standards,
investment decisions, and product selections for hardware, software and
communications. The Technical Architecture is developed first and
foremost
based on university direction and business requirements. Additionally,
principles are used rigorously to be sure the Technical Architecture is
consistent with Penn's information technology beliefs. The current (de
facto)
technical architecture is taken into consideration, as well as relevant
industry
and technology trends.
For the discussion that follows, readers will find it useful to have
access to detailed diagrams of the current and alternative architectures.
These diagrams are available on the Web at
http://www.upenn.edu/computing/group/natf/.
Three Basic Architectures
Three basic architectural alternatives have been defined along a
continuum
from least aggressive to most aggressive with respect to the reliability,
performance, and functionality they enable. These three basic
alternatives
represent a migration path that can be followed one to the other if Penn
chooses. As markets and products develop, Penn may skip one or more
alternatives in the "pipeline," or implement other variations that
develop.
It is important to understand that not all elements of these
architectures are
different. Common elements include the following:
- Internet connections: FDDI replaces SMDS for connection to the Internet,
allowing the connection to scale up to T-3 speed via PREPnet or another
access provider.
- Inter-campus connections: SMDS appears more appropriate for scalable
connections to remote sites within our metropolitan area (e.g., New Bolton
Center, Center for Judaic Studies), replacing dedicated T-1 service. HUPnet
is connected via a routed connection scaling up from 10 Mb/sec as necessary.
- Remote access: The analog modem pool continues to scale to meet demand,
shifting to a combination of 28.8 Kb/sec analog and digital ISDN lines
(capable of supporting multiple protocols). Commercial access providers may
supplement these pools, especially for outlying areas.
- Advanced networks: Penn will provide coordinating support for
advanced
networking initiatives that may run counter to our conventional
deployments. This will likely include swifter adoption of ATM and direct
Internet or vBNS connections for certain projects.
- Treatment of legacy network environments: The ISN asynchronous
network is eliminated early in 1996; asynchronous terminal server
connections are completely replaced by Ethernet early in 1997. No new
investments are made in 10-base-2 wire or electronics, and users are
transitioned as buildings are renovated and rewired.
- Miscellaneous elements:
- Network infrastructure servers are consolidated onto fewer platforms
with
better redundancy
- AppleTalk and IPX are routed campus-wide and access is provided for
remote users
- Desktop hardware and software standards continue to evolve, as
Windows
95 use surges
- PennNames, a system for creating a campus-wide unique user name
space,
transitions to feed a DCE secure core, and client/server services
(including
network access) transition to Kerberos.
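The remote-access element above turns on sizing the modem pool against peak demand. A standard way to reason about pool sizing is the Erlang B blocking formula; the sketch below uses illustrative traffic figures (the user counts and connect times are assumptions, not Penn data) to estimate how many lines keep the busy-signal probability under a target:

```python
def erlang_b(offered_erlangs, lines):
    """Blocking probability for a pool of `lines` circuits offered
    `offered_erlangs` of traffic (standard Erlang B recursion)."""
    b = 1.0
    for k in range(1, lines + 1):
        b = (offered_erlangs * b) / (k + offered_erlangs * b)
    return b

def lines_needed(offered_erlangs, max_blocking):
    """Smallest pool size that keeps blocking at or below the target."""
    n = 1
    while erlang_b(offered_erlangs, n) > max_blocking:
        n += 1
    return n

# Hypothetical load: 1,200 dial-in users averaging 15 minutes of
# connect time during a 3-hour evening peak = 100 erlangs offered.
pool_size = lines_needed(1200 * 0.25 / 3, 0.01)  # lines for <1% busy signals
```

The recursion also makes the economics visible: pushing the blocking target from a few percent down toward zero requires disproportionately many additional lines, which is why commercial providers may be a sensible supplement for outlying areas.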
Alternative A: Pervasive Ethernet Switches/Selective 100 Mb
Alternative A is the closest to PennNet's current condition. It
preserves our
current investment in the technology and operations of a
central routing core, installs Ethernet switches in all buildings,
continues
EIA/TIA 568 as the wiring standard, but only increases
speeds within and between buildings beyond 10 Mb/sec on a case-by-case
basis.
Major features include:
- Inter-building backbone: Collapsed backbone interconnected via FDDI
remains, though with fewer, more powerful routers.
- Intra-building backbone: Buildings are connected to the backbone via
Ethernet, or 100+ Mb/sec technology (FDDI or fast Ethernet) where
necessary
for increased bandwidth. Ethernet or fast Ethernet switches deployed in
all
buildings reduce the size of the collision domain within buildings and
provide a scalable building interconnection.
- Wiring strategies and standards: EIA/TIA 568 continues to be the
wiring
standard. Ethernet switches are deployed within closets if necessary,
though
shared Ethernets within buildings dominate. Secure hubs prevent
promiscuous listening on shared segments. Some 100-Base-X fast Ethernet
outlets are deployed. Campus Ethernet connections migrate towards
"personal Ethernet" to allow local hubs on 10-base-T or fast Ethernet
outlets.
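The collision-domain argument above can be made concrete with a rough per-host bandwidth estimate. The figures in this sketch are illustrative assumptions (in particular, the ~35% practical ceiling for a heavily contended shared CSMA/CD segment is a common rule of thumb, not a measurement):

```python
def per_host_mbps(segment_mbps, hosts, switched, shared_efficiency=0.35):
    """Back-of-envelope usable bandwidth per host.

    Shared segment: one collision domain split among all hosts,
    derated for CSMA/CD contention losses. Switched port: each host
    gets its own collision domain at full speed (uplink aside).
    """
    if switched:
        return segment_mbps
    return segment_mbps * shared_efficiency / hosts

shared_rate = per_host_mbps(10.0, 50, switched=False)   # ~0.07 Mb/sec
switched_rate = per_host_mbps(10.0, 50, switched=True)  # 10.0 Mb/sec
```

Even this crude model shows why deploying switches in every building is the centerpiece of Alternative A: it buys a large per-host improvement without rewiring or changing the 10 Mb/sec desktop interface.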
Alternative B: Fully Switched Core
This alternative presents a transition point between Alternative A
and
Alternative C. The only changes are in the central routing core ("Inter-
building backbone"). Rather than a collapsed backbone of routers, the
central
hub now uses an ATM switch coupled to a "super router" to route between
the subnets. A series of smaller routing switches, still located in a
central core,
start to share a distributed routing load. While management and
operations
continue to benefit from a single, consolidated location for this
equipment,
Penn moves one step closer to being able to distribute its routing load
to
multiple locations when necessary. The nature of the routers and switches at
the center is now changing substantially, both in terms of cost and the
relative functionality of each device (switching versus routing).
Since ATM switching is now a feature, some direct ATM connections are
made possible into the production network either to support advanced
projects now in production or servers that require the added bandwidth.
Alternative C: Pervasive ATM
This alternative represents where the Task Force believes Penn should be in
3-5 years. Reaching it depends partly on the necessary investment level, but
even more on the development of products and standards in the marketplace
that make deployment of, or migration to, this alternative possible.
Major features include:
- Inter-building backbone: Redundant central hubs, with automatic
failover
protection, use ATM switching between buildings coupled with a "super
router" to route between subnets. An ATM "mesh" is established with some
buildings serving as regional "hubs" redundantly interconnected.
- Intra-building backbone: Ethernet switches, eventually with ATM
and/or
distributed routing support ("edge routing devices"), are deployed
everywhere to reduce the size of the collision domain within buildings
and
provide a scalable building interconnection.
- Wiring strategies and standards: EIA/TIA 568 continues to be the
wiring
standard. Ethernet switches are deployed within buildings as bandwidth
requirements demand. Secure hubs prevent promiscuous listening on the
few shared Ethernet segments that remain. Some 100-Base-X fast Ethernet
outlets are deployed. Campus Ethernet connections begin to migrate
towards
"personal Ethernet" which allows local hubs on 10-base-T or fast Ethernet
outlets. Limited deployment of ATM to the desktop.
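One practical caveat for pervasive ATM is its fixed-cell overhead: every 53-byte cell carries only 48 bytes of payload, so roughly 9% of the line rate goes to cell headers before SONET framing and adaptation-layer overhead are counted. A quick sketch of this "cell tax" (the OC-3 line rate is standard; the calculation is a best-case simplification):

```python
CELL_BYTES = 53.0     # ATM cell size
PAYLOAD_BYTES = 48.0  # payload per cell (5-byte header)

def atm_payload_mbps(line_mbps):
    """Best-case payload throughput after the ATM 'cell tax'
    (ignores SONET framing and AAL overhead)."""
    return line_mbps * PAYLOAD_BYTES / CELL_BYTES

oc3_payload = atm_payload_mbps(155.52)  # about 140.8 Mb/sec of a 155.52 Mb/sec OC-3
```

The overhead is the price paid for the fixed-size cells that make ATM switching fast and make voice-data-video integration plausible; it is part of what the investment-level comparison among the alternatives must weigh.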
Three Additional Variations
Three additional architectural alternatives recognize that the
marketplace
may not develop in the directions we expect, and/or Penn may need to
improve the performance of PennNet in advance of the availability of
components to build Alternative C.
Alternative A': Pervasive 100+ Mb Backbone
In most respects this alternative is identical to Alternative A, except that
in this case all buildings must be connected to the campus
backbone using a 100+ Mb/sec technology. To accommodate this bandwidth to
every building, the campus backbone needs to change: the collapsed
backbone
is now interconnected via ATM switch to increase capacity at the core.
Subnets are connected to central routers via shared or dedicated
connections
using 100+ Mb/sec technology.
Alternative AB': Distributed Routing with 100+ Mb Backbone
If the availability of the products needed to implement Alternative C
becomes
more distant,
this alternative may provide some necessary solutions. It
provides for a regionalized campus with several clusters of buildings
connected together via 100+ Mb/sec technology, and fully distributed
routing
to each building.
Major features include:
- Inter-building backbone: FDDI switch deployed to handle the multiple FDDI
rings (or other 100+ Mb/sec technology) required in this architecture.
Central routing is only for outside or inter-campus connections.
- Intra-building backbone: Clusters of buildings are connected to the
backbone via 100+ Mb/sec technology (FDDI or fast Ethernet) for increased
bandwidth. Ethernet or fast Ethernet switches deployed in all buildings
reduce the size of the collision domain and provide a scalable building
interconnection.
Alternative B': Selective ATM
This alternative allows the campus to migrate more slowly to ATM for
inter-
building connections.
Major features include:
- Inter-building backbone: Central hub starts to migrate to ATM switch
coupled with a "super router" to route between subnets. Routing switches
at
the core start to distribute routing load. Some routers stay on old FDDI
backbone providing connections for some buildings. Some direct ATM
connections to core permitted as routing switches begin to migrate to
buildings.
- Intra-building backbone: Buildings are connected to the backbone via
10
Mb/sec or 100+ Mb/sec technology (FDDI or fast Ethernet) for increased
bandwidth. Some buildings are connected to the backbone directly via ATM
and have edge routers installed. Ethernet or fast Ethernet switches
deployed
in all buildings reduce the size of the collision domain and provide a
scalable
building interconnection.
Network Alternative Pros and Cons

Alternative A
- Pro: easy to implement today; less expensive compared to other
alternatives
- Con: perpetuates use of today's technologies

Alternative B
- Pro: starts down the road to distributed routing; may reduce per-port
costs of central routers and increase overall bandwidth of the routing core
- Con: significantly newer technologies in the routing core, which use
proprietary protocols for distributed routing; depending on timing, ATM may
still be too immature

Alternative C
- Pro: probably where we want to be...
- Con: can't buy it today; presumes a lot about market directions

Alternative A'
- Pro: will definitely increase bandwidth to buildings today
- Con: very expensive; perpetuates use of today's technologies

Alternative AB'
- Pro: will definitely increase bandwidth to buildings today
- Con: very expensive; perpetuates use of today's technologies; introduces
additional operations issues as routing devices are distributed to each
building

Alternative B'
- Pro: allows for slower migration to ATM
- Con: multiple generations of backbone technology are difficult (and
expensive) to operate and maintain
Next steps
The University's department of
Data Communications and Computing Services (DCCS),
in conjunction with the Network Architecture Task Force, is
currently processing the above information and carrying out the following
tasks:
- Accelerating legacy phase outs, notably ISN and terminal-server
asynchronous services, closet electronics that are not remotely
manageable,
and obsolescent server architectures
- Pricing the alternatives
- Narrowing the set
- Engaging the campus stakeholders
- Interim deployment of Ethernet switches and other technologies
- Structured consultations with current and prospective vendor partners
- Constant re-assessment, including consultation with university colleagues
In addition, discussions are underway to:
- Extend the
Penn Video Network,
now serving 50 buildings, including 16
residences, to other buildings on-campus and off
- Assess extension of the Bell Atlantic Centrex contracts for the
University
and Hospital versus purchase of one or two switches
- Determine the likely time frame by which ATM can function as the
ultimate, voice-data-video integrator
Conclusions
Designing and deploying a cost-effective, high-performance, campus-wide
networking infrastructure is extremely challenging, given the rapid pace
of technological change, user demand, and vendor reshuffling. At Penn, the
challenge is multiplied by our decentralized management, budgeting, and
academic computing structures. It is becoming increasingly clear to most
stakeholders, however, that in networking we must, as our founder Ben
Franklin exhorted, "hang together, or we will most assuredly hang
separately." The productive collaboration exemplified by the Network
Architecture Task Force bodes well for Penn's networking future.
This paper, links to technical diagrams and other information presented at
CAUSE95, and a link to the CAUSE94 paper are available on the World Wide
Web.
Please address comments or questions to Dr. Noam Arzt, arzt@isc.upenn.edu
or Mr. Daniel Updegrove, dan_updegrove@yale.edu
[12/12/96]
URL: http://www.hln.com/noam/cause95.html