SC97: Research Exhibits






Abstracts


The Aggregate
Booth R110

Over the past four years, a new model for parallel processing communication has emerged. Neither message passing nor shared memory, the aggregate function model is based on performing N-way communication functions as single operations. Although the model began at Purdue University as the library for PAPERS (Purdue's Adapter for Parallel Execution and Rapid Synchronization) clusters, it has since evolved into the highly portable Aggregate Function API (AFAPI) and spread to a number of other institutions. This exhibit will present AFAPI-related research from several universities, including new hardware and software for clustering COTS PCs. Multiple parallel systems using AFAPI, including at least one video wall, will be demonstrated.
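The model is easier to see in code than in prose. Below is a minimal C sketch of the aggregate-function idea, written for this abstract: the function name agg_sum_i32 is invented, and POSIX threads stand in for processing elements, so this is not AFAPI itself. The point it illustrates is that each participant makes one call, and the N-way operation (here, a global integer sum) completes for all participants at once, with no pairwise sends or shared-memory polling visible to the caller.

    /* Toy illustration (not AFAPI) of the aggregate-function model:
       every participant calls ONE function and the N-way operation
       completes for all of them at once.  N "processing elements"
       are simulated with POSIX threads on a single machine. */
    #include <pthread.h>
    #include <stdio.h>

    #define NPE 4                      /* number of simulated PEs */

    static pthread_barrier_t bar;
    static int contrib[NPE];           /* per-PE contribution slots */

    /* Hypothetical aggregate operation: an N-way integer sum done as a
       single collective step.  Every PE calls it; every PE gets the total. */
    static int agg_sum_i32(int pe, int value)
    {
        contrib[pe] = value;
        pthread_barrier_wait(&bar);    /* all contributions now visible */
        int total = 0;
        for (int i = 0; i < NPE; i++)
            total += contrib[i];
        pthread_barrier_wait(&bar);    /* don't reuse slots until all have read */
        return total;
    }

    static void *pe_main(void *arg)
    {
        int pe = (int)(long)arg;
        int total = agg_sum_i32(pe, pe + 1);   /* PE i contributes i+1 */
        printf("PE %d sees aggregate sum %d\n", pe, total);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NPE];
        pthread_barrier_init(&bar, NULL, NPE);
        for (long i = 0; i < NPE; i++)
            pthread_create(&t[i], NULL, pe_main, (void *)i);
        for (int i = 0; i < NPE; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&bar);
        return 0;
    }

In PAPERS-style hardware the synchronization and the data combining are performed by the adapter itself; the thread barrier above merely mimics those single-operation semantics in software.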



Ames Laboratory
Booth R101

The cost-effectiveness of commodity computing and networking hardware and software has made major strides in the last few years. The level of computing resources that may now be dedicated to departmental-level projects is unprecedented, rivaling the capabilities of national centers just a few years ago. Ames Laboratory has considerable experience locating and removing a variety of performance bottlenecks from clusters using ATM and Fast Ethernet. This year's research exhibit will highlight the analysis and performance of a cluster of high performance commodity workstations running commodity software and connected by Gigabit Ethernet technology. A Packet Engines full-duplex repeater will provide the key element of a new low-cost gigabit interconnect. NetPIPE, the network performance analysis tool developed at Ames Laboratory, will demonstrate the effectiveness of this interconnect, as will a selection of Materials Science Grand Challenge applications.
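As a rough illustration of how a tool like NetPIPE characterizes an interconnect, the C sketch below (a miniature written for this abstract, not NetPIPE's actual code) times ping-pong exchanges across a doubling range of message sizes and reports round-trip latency and throughput. For brevity it uses a local socketpair between two forked processes, so it measures in-host IPC rather than a real network; substituting real TCP or ATM endpoints yields the latency/bandwidth curve that NetPIPE plots.

    /* Simplified ping-pong measurement in the style of NetPIPE. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define REPS   200
    #define MAXLEN (1 << 20)

    static double now_sec(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec * 1e-6;
    }

    static void write_full(int fd, const char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) { perror("write"); exit(1); }
            buf += n; len -= (size_t)n;
        }
    }

    static void read_full(int fd, char *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = read(fd, buf, len);
            if (n <= 0) { perror("read"); exit(1); }
            buf += n; len -= (size_t)n;
        }
    }

    int main(void)
    {
        int fds[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
            perror("socketpair"); return 1;
        }
        if (fork() == 0) {                       /* echo process */
            char *buf = malloc(MAXLEN);
            for (size_t len = 1; len <= MAXLEN; len *= 2)
                for (int r = 0; r < REPS; r++) {
                    read_full(fds[1], buf, len);
                    write_full(fds[1], buf, len);
                }
            _exit(0);
        }
        char *buf = malloc(MAXLEN);
        memset(buf, 0xAB, MAXLEN);
        for (size_t len = 1; len <= MAXLEN; len *= 2) {
            double t0 = now_sec();
            for (int r = 0; r < REPS; r++) {
                write_full(fds[0], buf, len);    /* ping */
                read_full(fds[0], buf, len);     /* pong */
            }
            double rtt = (now_sec() - t0) / REPS;
            printf("%8zu bytes  %10.1f us RTT  %8.2f MB/s\n",
                   len, rtt * 1e6, 2.0 * len / rtt / 1e6);
        }
        return 0;
    }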



Argonne National Laboratory
Booth R111

Argonne National Laboratory continues to provide portable and scalable software for large-scale computational applications, for the effective use and management of geographically distributed supercomputing resources, for immersive visualization, and for advancing the concept of "national collaboratories." Showcased will be current work in areas including:



Army High Performance Computing Research Center
Booth R303

The Army High Performance Computing Research Center (AHPCRC), located at the University of Minnesota in Minneapolis, Minnesota, was established by the US Army. Its mandate is to establish university-Army collaborative research programs to advance the science of high performance computing and its application to critical Army technology issues, and to maintain leadership in computing technology vital to national security. The research exhibit will demonstrate the results of AHPCRC research projects that use the TMC CM-5 and the Cray T3D for large simulations. Highlighted will be basic research projects, collaborative research projects involving both AHPCRC and Army researchers, and educational programs. The AHPCRC research exhibit will use several methods of information presentation: high performance workstations for research project data visualization demos, a VCR and large-screen monitor for showing narrated videotapes of AHPCRC research project results, and poster-size pictures of selected research project graphics mounted in the exhibit.



ASCI, LLNL, LANL, and SNL
Booth R104

In this coordinated exhibit, three Department of Energy laboratories present demos, posters, and videos related to work at the individual laboratories as well as the shared DoE Accelerated Strategic Computing Initiative (ASCI) program. Sandia is driving a revolution in engineering by combining teraflops-scale simulation with world-class experimental facilities to predict material responses and processes more complex than ever before possible, in programs ranging from microelectronics and manufacturing to critical infrastructure surety. LANL will feature simulations of the global climate modeling Grand Challenge application and wildfire predictability. The ongoing software efforts of the POOMA framework and the TeleMed collaboration will be demonstrated, as will the prototype HIPPI-6400 tester. LLNL features information about the laboratory, current tera-scale computational upgrades, and presentations from computational and networking research. Planned demos include Incident Advisory, Python for scientific computing, gang scheduling/parallel tools, and remote visualization over high-speed networks.



Boston University
Booth R105

Boston University's research exhibit features its NSF-funded project MARINER: Metacenter Affiliated Resource In the New England Region. MARINER extends the University's efforts in advanced scientific computing and networking to organizations throughout the region. Users from both the public and private sectors are eligible to participate in a wide range of programs that offer training in, and access to, advanced computing and communications technologies. Demonstrations of current research and educational projects developed through the Center for Computational Science and the Scientific Computing and Visualization Group will be shown using graphics workstations and video in the exhibit booth. In addition, we will present distributed supercomputing applications, video animations of recent research, and 3-D visualizations using a stereoscopic display. We will also announce our new role as a partner institution in the National Computational Science Alliance under NSF's PACI program.



Brookhaven National Laboratory
Booth R207

A Stereographic Collaborative Visualization Facility and its Applications: With the unmistakable trend of scientific collaborations becoming larger and more geographically dispersed, new emphasis is being placed on technologies that enable meaningful interaction at a distance. High-quality visualization facilities are of great importance in the conduct of such collaborations. Even for local staff, visualization is a key to enhanced comprehension of, and insight into, the results of experiments. Brookhaven National Laboratory provides an example of such a facility along with its current and emerging scientific applications. This research exhibit will provide a sampling of the stereographic viewing experience, with applications from a variety of scientific disciplines, among them medical imaging, protein chemistry, and the geology of oil exploration and environmental remediation. http://www.ccd.bnl.gov/visualization/examples.html



California Institute of Technology
Booth R310

The California Institute of Technology (Caltech) will highlight ongoing investigations by researchers at Caltech's Center for Advanced Computing Research (CACR) and the Jet Propulsion Laboratory (JPL). Among these projects are Petaflops computing, Beowulf clusters of PCs, SF/Express, various Grand Challenge applications, the Caltech/Hewlett-Packard collaboration, and LARGE, the Los Angeles Regional Gigabit Environment. CACR's activities in two national-scale, multidisciplinary collaborations will also be featured: the National Partnership for Advanced Computational Infrastructure (NPACI), funded by the NSF, and a computational facility for simulating the dynamic response of materials to shock waves, an ASCI Center of Excellence funded by the DoE Accelerated Strategic Computing Initiative (ASCI). Graphics workstations linked to the 256-processor HP Exemplar and an IBM HPSS at Caltech will be used to demonstrate Grand Challenge applications. A Beowulf cluster in the research exhibit will demonstrate applications and be used to illustrate the construction of Beowulf systems.



Center for Research on Parallel Computation
Booth R214

The Center for Research on Parallel Computation (CRPC), an NSF Science and Technology Center dedicated to making massively parallel computing truly usable, includes researchers at two national laboratories (Argonne and Los Alamos) and five universities (Caltech, Rice, Syracuse, Tennessee, and Texas). The CRPC also has affiliated sites at Boston, Drexel, Illinois, Indiana, and Maryland Universities, the University of Houston, and the Institute for Computer Applications in Science and Engineering (ICASE). The CRPC exhibit features technologies the center is transferring and the mechanisms used to transfer them. Demonstrations will include interactive software systems, descriptions of parallel language extensions, and applications developed by CRPC researchers: HPF, Fortran D, the D System, CC++, PVM, HeNCE, parallel templates, ADIFOR, and applications including HPCC technologies in education. Transfer mechanisms include DoD Modernization, PACI, NHSE, HACSC, and CRPC educational programs. Demos, posters, videos, and information about outreach activities, software distribution, technical reports, and knowledge transfer efforts will be available.



DoD HPC Modernization Program
Booth R302

The DoD High Performance Computing Modernization Program (HPCMP) is a multi-year, $1.2B initiative to modernize HPC and advanced networking capabilities for the DoD's research programs. The research exhibit will consist of a research and technology demonstration area and an adjacent area housing the exhibit's computational resources, and will highlight various components and initiatives of the HPCMP. The booth will include four major elements: interactive demonstrations that use ATM OC-3 interconnects to computational resources located at remote DoD sites, video reports of large-scale numerical modeling efforts, a technology demonstration using an interactive flight simulator, and a collaboration area presenting contributions to DoD's HPC efforts from the program's academic and industrial partners.



DoE 2000
Booth R318

DOE2000 is a Department of Energy initiative to develop advanced computing and collaboration technologies for research and development. There are three components: 1) Advanced Computational Testing and Simulation (ACTS): Developing an integrated scientific software toolkit. 2) National Collaboratories: Laboratories without walls that connect researchers, instruments, and computers nationwide. 3) Pilot Projects: Early implementations of virtual laboratories. Demonstrations will include:



Electrotechnical Laboratory
Booth R212

This research exhibit presents results from a number of related research projects at the Electrotechnical Laboratory in the areas of high performance computing and communications. We will present videos and live demonstrations of the following research results:



Emory University
Booth R215

CCF (Collaborative Computing Frameworks) is a suite of software systems and tools, communications protocols, and methodologies that enable collaborative, computer-based cooperative work. CCF constructs a virtual work environment on multiple computer systems connected over the Internet to form a collaboratory. Participants may interact, share applications and data repositories or archives, collectively create and manipulate documents and spreadsheets, perform computational transforms, and conduct a number of other activities via telepresence. CCF is an integrated framework consisting of multiple coordinated infrastructural elements, each of which provides a component of the virtual collaborative environment. A prototype of the complete collaboration system will be exhibited with live demonstrations, including the underlying reliable multicast protocol suite, the application-sharing and X-multiplexer systems, the shared dataspace and filesystem, the computing harness, and the clearboard, audio tool, multiway chat, and video conferencing tools. http://ccf.mathcs.emory.edu/



Esprit HPF+ Project
Booth R307

The Esprit IV project "HPF+: Optimizing HPF for Advanced Applications" aims to improve the HPF language and its compilation technology by extending the functionality of HPF and developing compilation strategies based on the requirements of a set of advanced applications. The purpose of the exhibit is to demonstrate the results achieved in HPF+, with a focus on benchmarks developed in HPF+ from real scientific applications, the Vienna Fortran Compiler (VFC), and an evaluation of the effectiveness of HPF+ and its implementation. The compiler and runtime technology developed in the VFC includes general block distributions, INDIRECT distributions, ON clauses, and schedule reuse, as well as the inspector-executor parallelization strategy (sketched below). The evaluation of HPF+, by means of the performance tool MEDEA, compares the developed technology with HPF-2 and message-passing approaches. http://www.par.univie.ac.at/hpf+
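Since the inspector-executor strategy is central to handling irregular codes, here is a compact C sketch of the technique, written for this abstract (single address space for brevity; the actual VFC runtime exchanges messages between processors). The inspector analyzes the indirection array once to build a communication schedule recording which off-processor elements each owner must fetch; the executor gathers those values and then runs the loop on purely local data. Because the schedule depends only on the indirection array, it can be reused across iterations, which is exactly the schedule-reuse optimization mentioned above.

    /* Inspector-executor sketch for the irregular loop  y[i] += x[idx[i]],
     * with x and y block-distributed over NPROC owners.  Single address
     * space for brevity; a real runtime turns the schedule into
     * gather messages between processors. */
    #include <stdio.h>

    #define N     16
    #define NPROC 4
    #define BLK   (N / NPROC)

    int main(void)
    {
        double x[N], y[N];
        int idx[N];

        for (int i = 0; i < N; i++) {          /* irregular access pattern */
            x[i] = i;
            y[i] = 0.0;
            idx[i] = (i * 5 + 3) % N;
        }

        /* Inspector: run once per indirection array, reusable thereafter.
         * For each processor p, record which loop indices reference data
         * owned elsewhere; a real schedule also records the owner, which
         * becomes the source of a gather message. */
        int nfetch[NPROC] = {0};
        int fetch_i[NPROC][N];
        for (int p = 0; p < NPROC; p++)
            for (int i = p * BLK; i < (p + 1) * BLK; i++)
                if (idx[i] / BLK != p)         /* off-processor reference */
                    fetch_i[p][nfetch[p]++] = i;

        /* Executor: gather remote values, then compute purely locally. */
        for (int p = 0; p < NPROC; p++) {
            double ghost[N];
            for (int k = 0; k < nfetch[p]; k++)
                ghost[k] = x[idx[fetch_i[p][k]]];   /* stands in for a recv */
            int k = 0;
            for (int i = p * BLK; i < (p + 1) * BLK; i++)
                y[i] += (idx[i] / BLK == p) ? x[idx[i]] : ghost[k++];
        }

        for (int i = 0; i < N; i++)            /* y[i] should equal idx[i] */
            printf("y[%2d] = %4.1f (expected %2d)\n", i, y[i], idx[i]);
        return 0;
    }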



Fermi National Accelerator Laboratory
Booth R216

High Energy Physics (HEP) requires massive computational resources for data acquisition; design, control, and analysis of experiments; and theoretical analyses. To improve our understanding of fundamental physics, the CPU requirement will increase by more than an order of magnitude in the next two years. The increasing power of commodity computing hardware and network technology allows scientists to use clusters of relatively inexpensive computers for HEP applications. Fermi National Accelerator Laboratory will demonstrate two prototype clusters under consideration for production systems, both based on Intel microprocessors running the Linux operating system. One receives, analyzes, and filters data at 100 MBytes/sec from the CDF Collider Experiment (http://www-cdf.fnal.gov/) using ATM and Fast Ethernet technologies. The second, a prototype PC farm (http://www-ols.fnal.gov/pcfarms/), analyzes HEP data using loosely coupled parallel processing techniques developed at FNAL (http://fnhppc.fnal.gov/farms/farmspage.html). Both systems execute FNAL codes ported to Linux, and both demonstrate the scalability of such commodity-based computing for HEP tasks. http://www.fnal.gov/



High Performance Computing Center, Stuttgart
Booth R304

The High Performance Computing Center Stuttgart (HLRS, a division of RUS, the computing center of Stuttgart University) is a national HPC center for Germany. Working closely with industrial partners, it focuses on engineering applications. We will showcase our work under the G7 GIBN initiative, carried out with the Pittsburgh Supercomputing Center (PSC) and Sandia National Laboratories (SNL). We have set up a metacomputing environment that links 512-processor Cray T3Es at HLRS and PSC and a Paragon at SNL via a transatlantic ATM link, the vBNS, and ESnet. This is complemented by collaborative visualization in a distributed virtual environment, applied to fluid dynamics simulations of the re-entry of a space vehicle; the entry of meteorites into the Earth's atmosphere, as well as their impact on the Earth; and ab-initio calculations of the magnetic properties of alloys. Results will be visualized in a distributed virtual environment spanning HLRS, PSC, and SNL.



Indiana University
Booth R315

Indiana University has a vigorous program in HPC research and instruction. Three projects of note will be presented in this exhibit:



Institut National de Recherche en Informatique et en Automatique (INRIA)
Booth R305

This exhibit covers the following topics of INRIA's research in high performance computing:

http://www.inria.fr/welcome-eng.html
http://www.irisa.fr/caps/PROJECTS/Salto/
http://www.irisa.fr/caps/PROJECTS/TSF/
http://www.inria.fr/sloop/c++ll
http://www.inria.fr/sloop/javall
http://www.inria.fr/sloop/prosit
http://www.inria.fr/croap/eiffel-ll



Krell Institute
Booth R312

Research at the Krell Institute focuses on applying modern computing, communication, and information technologies to educational, environmental, and energy-related priorities. The SC97 research booth will feature examples of employing information technologies to enhance computational science curricula spanning middle school through undergraduate studies. The K-12 Adventures in Supercomputing (AiS) program was designed to introduce computational science in the context of a project-based, learner-centered curriculum that integrates mathematics and science with technology. The Undergraduate Computational Engineering and Science (UCES) project was designed to encourage and support the infusion of computational science into the undergraduate curriculum. The UCES project is made up of three components: a Web-based archive of education materials, authoring tools, and a professional recognition program. The 1997 UCES awards will be presented at SC97. These activities are supported by the Department of Energy. http://www.krellinst.org



Lawrence Berkeley National Laboratory
Booth R200

When Ernest Orlando Lawrence founded America's first national laboratory in Berkeley, he redefined the way scientific research is conducted by fostering multidisciplinary research teams. Today the Computing Sciences organization at Lawrence Berkeley National Laboratory is redefining the way scientists will work in the next century by developing new tools for computation and collaboration. Demonstrations will include:



Massachusetts Institute of Technology
Booth R102

Project Bayanihan studies the idea of volunteer computing, which enables people to join a large parallel computation simply by visiting a Web site. Because volunteering requires no prior human contact and very little technical knowledge, it becomes easy to build very large computing networks. This creates exciting new possibilities. With true volunteer systems, one can reach new heights in performance by using many thousands of anonymous volunteer nodes around the world. On a smaller but more practical scale, companies or institutions can use forced volunteer systems to pool their internal computing resources with minimal administration costs. Paid volunteer systems are also possible, in which volunteers are compensated for their participation or allowed to engage in barter trade of processing cycles. Currently, we have developed a flexible software framework using Java and HORB, and we are using it to study issues such as adaptive parallelism, security, and user-interface design. http://www.cag.lcs.mit.edu/bayanihan/



MHPCC/HPCERC/AHPCC
Booth R306

The High Performance Computing Education and Research Center (HPCERC), a strategic research center at the University of New Mexico, initiates and coordinates education and research programs in high performance computing. A major program of HPCERC is the Maui High Performance Computing Center (MHPCC), a leader in scalable computing and application technology. MHPCC supports the transition of projects from initial concept through production for Department of Defense, government, commercial, and academic users. Another key program of HPCERC is the Albuquerque High Performance Computing Center (AHPCC), which provides a high performance computing environment for basic research and education at the University of New Mexico. This booth will highlight key projects of MHPCC and AHPCC, including image enhancement research supporting the Air Force's electro-optical telescopes and the development of parallel models to support disaster planning. There will also be a demonstration of the Maui Scheduler, an advanced systems software tool developed to address the requirements of MHPCC's large IBM SP system.



Mississippi State University
Booth R107

Mississippi State's Engineering Research Center (ERC) is a multidisciplinary center that puts high performance computers to work in a number of ways. This booth illustrates a cross-section of these activities and emphasizes practical uses of existing and emerging computational and visualization technology. Throughout, the emphasis is on software techniques, algorithms, and applications rather than on raw hardware, which can do little without the added value of the technology on display.



National Aeronautics and Space Administration
Booth R319

The National Aeronautics and Space Administration's SC97 research exhibit highlights some of the most significant high performance networking and computing research projects underway at five field installations: Ames Research Center, Goddard Space Flight Center, Jet Propulsion Laboratory, Langley Research Center, and Lewis Research Center. Coordinated under the booth theme "Next Generation Supercomputing," the exhibit features real-time and interactive demonstrations of advanced networking applications; advanced computing architectures, including distributed heterogeneous systems; advanced operating systems and software tools for parallel systems; and advanced applications, including multidisciplinary aerodynamic design and optimization, global climate modeling, and planetary and cosmological phenomena simulation. A stereo theater graphically depicts various elements of NASA's next-generation supercomputing research and shows other segments, such as high-definition renderings of the Martian landscape. A virtual reality workbench demonstration plus videos and static displays of other research round out the exhibit.



National Aerospace Laboratory of Japan
Booth R100

The National Aerospace Laboratory (NAL) of Japan is a national research institute devoted to aerospace technology and has long been one of the most advanced HPC users in Japan. In 1993, NAL developed a special-purpose high performance computer, the Numerical Wind Tunnel (NWT), jointly with Fujitsu. The NWT's main role is to serve as a flow solver for Computational Fluid Dynamics (CFD) applications in national aerospace development projects. One of the top high performance computers, it consists of 166 vector processors connected in parallel, each with a performance of 1.7 GFLOPS and 256 MB of memory. This distinctive architecture is well suited to efficiently processing the multiple vector DO loops found in Navier-Stokes solvers: a sustained performance of 111 GFLOPS was achieved in a 3-D compressor blade analysis code that won the Gordon Bell Prize in 1996. Joint research with universities and industry, as well as international collaboration, is increasing, creating demand for easy, high-speed access to the NWT from outside. The exhibit will demonstrate the Remote NWT Access Utility, which runs in a Web client; the NWT system and recent CFD accomplishments will also be on display.



National Center for Atmospheric Research
Booth R219

The National Center for Atmospheric Research (NCAR) exhibit will highlight investigations that use a broad range of supercomputing technologies to gain insight into real-world phenomena. These insights often lead to a better understanding of the phenomena themselves and may provide the means to develop predictive capabilities based on fine-tuning of the models executed on NCAR's supercomputers. Putting supercomputers to work on problems whose solution may immediately benefit humanity is a major objective of the scientific and research staff at NCAR. The computers used are some of the most sophisticated and fastest machines available. Model output, when combined with high-end visualization capabilities, frequently yields stunning images that further assist in the interpretation and analysis of the model results. For 1997, NCAR's research exhibit will focus on several research projects that required long-running simulations on NCAR's supercomputers. In addition, this year will mark the debut of the Scientific Computing Division's Visualization Theater, which displays the visualized output of these (and other) models in 3-D. The exhibit will feature four supercomputing components: the Climate Simulation Laboratory's (CSL) simulation of a 125-year buildup of carbon dioxide in the atmosphere, a revised and terrain-enhanced visualization of a coupled atmosphere-fire model, real-time execution of NCAR's CCM2 (or MM5) climate models on the HP 2000 Exemplar machine, and turbulence over mountainous terrain. http://www.scd.ucar.edu/info/SC97/



National Computational Science Alliance
Booth R114

The National Computational Science Alliance (the Alliance) is a partnership of individuals and institutions prototyping the nation's advanced computational infrastructure for the 21st Century. The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign anchors the Alliance, which is funded by the National Science Foundation's Partnerships for Advanced Computational Infrastructure program. The Alliance is organized into four teams: Application Technologies (AT), Enabling Technologies (ET), Regional Partners, and Education, Outreach and Training (EOT). AT teams will demonstrate how the Alliance attacks large-scale problems of science and engineering and drives technology development in cosmology, chemical engineering, environmental hydrology, molecular biology, nanomaterials and scientific instrumentation. ET teams will show how tools and infrastructure are being developed to benefit science. The booth will also include information about Regional Partners, who help distribute the resources of the Alliance nationally, and EOT teams, who bring new technologies to schools, the industrial sector, and under-served populations.



National Coordination Office for Computing, Information, and Communications
Booth R106

The National Coordination Office for Computing, Information, and Communications (NCO/CIC) coordinates multi-agency CIC R&D activities in computing, information, and communications. These activities are organized into five Program Component Areas: High End Computing and Computation; Large Scale Networking; High Confidence Systems; Human Centered Systems; and Education, Training, and Human Resources. Our booth will highlight NCO activities and CIC R&D conducted by 12 participating organizations (DARPA, NSF, DoE, NASA, NIH, NSA, NIST, ED, VA, NOAA, EPA, and AHCPR) through exhibition of representative results from agency-sponsored projects. NCO publications, including the High Performance Computing and Communications (HPCC) Program's FY 1998 Annual Report to Congress (the Blue Book), the FY 1998 CIC brochure, and the FY 1998 HPCC Implementation Plan, will be distributed. The booth will also highlight the activities of the recently established Presidential Advisory Committee on HPCC, Information Technology, and the Next Generation Internet. Materials from the Committee's first two meetings will be available, as will materials from activities of other organizations supported by the NCO, such as the Applications Council and the new Technology Policy Subcommittee (TPS). (The CIC R&D Subcommittee, Applications Council, and TPS report to the Committee on Computing, Information, and Communications (CCIC) under the National Science and Technology Council (NSTC).)



National Partnership for Advanced Computational Infrastructure
Booth R112

NPACI is a new national organization supported by the NSF's Partnerships for Advanced Computational Infrastructure initiative. NPACI builds on the foundation of the San Diego Supercomputer Center and brings together more than three dozen academic, industrial, and research institutions in 18 states to make the world's most advanced computational resources and applications available to the nation's scientists and engineers. NPACI's efforts are concentrated in nine major areas: Enabling Technologies (Data-Intensive Computing; Interaction Environments; and Adaptable, Scalable Tools and Environments); High Performance Computing Resources; Applications (Molecular Science, Neuroscience, Earth Systems Science, and Engineering); and Education, Outreach, and Training. The booth exhibits will demonstrate new tools and applications being developed by the cooperating partners, will present NPACI resources in action through "transparent supercomputing," and will show how the partnership's applications, research projects, and services are meeting real needs of the computational science community. http://www.npaci.edu



National Scalable Cluster Project (NSCP)
Booth R301

We will demonstrate software tools and applications that have been developed by the National Scalable Cluster Project (NSCP) and the National Data Mining Laboratory (NDML) related to data mining and data-intensive computing. The NSCP is a collaboration between research groups at the University of Illinois at Chicago (UIC), the University of Pennsylvania (UPenn), and the University of Maryland at College Park (UMD) that focuses on technologies related to cluster computing. The NDML is a collaboration headquartered at UIC focusing on data mining, data-intensive computing, and related technologies. During SC97, we will demonstrate a variety of applications involving data mining and data-intensive computing running on local- and wide-area clusters of workstations. Applications include mining scientific, engineering, and medical data. Workstation clusters at the exhibit and at the sponsoring institutions will be connected using ATM and vBNS links.



NOAA/Forecast Systems Laboratory
Booth R103

The mission of NOAA's Forecast Systems Laboratory (FSL) is to evaluate and transition technology to the operational weather services. FSL has been evaluating the appropriateness of different computing architectures for use in real-time numerical weather prediction. To that end, FSL/ACB has developed a software toolbox known as the Scalable Modeling System (SMS) to parallelize numerical weather prediction (NWP) models, and has parallelized several NWP models using it. The focus of our research exhibit will be running real-time NWP models on a number of architectures. The forecasts will be run on our SGI Origin 2000 and Intel Paragon at FSL, as well as on machines on the SC97 exhibit floor, and will be visualized in our booth using simple animated 2-D displays of temperature and precipitation as well as animated 3-D displays of more sophisticated phenomena.



North Carolina State University
Booth R108

The purpose of the exhibit is to present the content and technology that support the Regional Training Center for Parallel Processing (RTCPP). RTCPP was established by North Carolina State University (NCSU) with NSF support. RTCPP provides an advanced network-based environment for learning about parallel computing, high performance networking, and computational sciences in general. The environment has facilities for constructing customized lessons and courses from a collection of reusable lesson elements (objects), as well as for real-time capture of materials. RTCPP is also one of the principal NCSU participants in the development of the regional GigaPOP and Internet 2 testbed. The RTCPP education and training library contains material in a variety of formats, from videotapes to Web-based streaming media to MPEG-2 material. RTCPP will showcase the latest version of its Internet 2-oriented Web Lecture System. http://renoir.csc.ncsu.edu/RTCPP
http://renoir.csc.ncsu.edu/WLS



Northwest Alliance for Computational Science and Engineering
Booth R203

Want to learn high performance computing without becoming a computer scientist? NACSE provides Web-based training materials and tools aimed at scientists, engineers, and students. Demonstrations will include:



Oak Ridge National Laboratory
Booth R300

The ORNL exhibit features the application of high performance computing, networking, and storage to real-world problems. Of special interest are the latest results from ongoing research in large-scale distributed computing. In collaboration with SNL and PSC, we are using PVM software and ATM hardware over ESnet to run a huge problem across multiple geographically distributed supercomputers. Along with the outstanding high performance computing environment provided by the Department of Energy (DoE) through our Center for Computational Sciences (CCS), ORNL provides leadership in Grand Challenge computing. ORNL high performance computing activities include: developing accurate mathematical models of complex phenomena, creating scalable algorithms for their solution, developing and operating high performance computing and storage environments, generating data management and analysis methods, developing software tools, and implementing communications technologies. The Computational Center for Industrial Innovation, a DoE National User Facility, provides an effective mechanism for collaborative research between industry and ORNL. http://www.ccs.ornl.gov/SC97/SC97.html
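For context on what a geographically distributed PVM run looks like at the program level, the short C sketch below is a generic PVM 3 master/worker pattern, not ORNL's application code; the executable name "psum" and the work decomposition are invented, and the binary is assumed to be installed where PVM spawns tasks from. The same pvm_spawn/pvm_send/pvm_recv calls work unchanged whether the spawned tasks land on local workstations or on supercomputers across ESnet, which is part of what makes PVM attractive for distributed runs.

    /* Minimal PVM 3 master/worker sketch: the master spawns workers
     * anywhere in the virtual machine and sums their partial results. */
    #include <stdio.h>
    #include <pvm3.h>

    #define NWORK      4
    #define TAG_WORK   1
    #define TAG_RESULT 2

    int main(void)
    {
        pvm_mytid();                           /* enroll this task in PVM */

        if (pvm_parent() == PvmNoParent) {     /* ---- master ---- */
            int tids[NWORK];
            pvm_spawn("psum", NULL, PvmTaskDefault, "", NWORK, tids);

            for (int w = 0; w < NWORK; w++) {  /* deal out work ranges */
                int lo = w * 250 + 1, hi = (w + 1) * 250;
                pvm_initsend(PvmDataDefault);
                pvm_pkint(&lo, 1, 1);
                pvm_pkint(&hi, 1, 1);
                pvm_send(tids[w], TAG_WORK);
            }
            long total = 0;
            for (int w = 0; w < NWORK; w++) {  /* collect partial sums */
                int part;
                pvm_recv(-1, TAG_RESULT);
                pvm_upkint(&part, 1, 1);
                total += part;
            }
            printf("sum 1..1000 = %ld\n", total);   /* expect 500500 */
        } else {                               /* ---- worker ---- */
            int lo, hi, part = 0;
            pvm_recv(pvm_parent(), TAG_WORK);
            pvm_upkint(&lo, 1, 1);
            pvm_upkint(&hi, 1, 1);
            for (int i = lo; i <= hi; i++)
                part += i;
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&part, 1, 1);
            pvm_send(pvm_parent(), TAG_RESULT);
        }
        pvm_exit();
        return 0;
    }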



Ohio Supercomputer Center
Booth R204

The Ohio Supercomputer Center (OSC) is Ohio's technology leader. A statewide focus on networking initiatives is moving Ohio to the national forefront in communications, and the Center continues to provide state-of-the-art scalable resources for computer modeling and simulation. The Center will present an interactive exhibit at SC97 highlighting diverse network research and development applications, including projects from OCARNet, a seven-university ATM testbed. The Center will also showcase the work of Ohio researchers, such as droplet deformation and breakup using the Lattice-Boltzmann Method (LBM), which will enhance understanding of multi-phase flow dynamics, and an endoscopic sinus surgery simulation that promises to improve medical training. Also featured will be best practices in Web-based education and training, arising from OSC's involvement with Ohio universities, the DoD HPC Modernization Program, and the NSF's Partnerships for Advanced Computational Infrastructure (PACI) program as part of the National Computational Science Alliance effort. Center activities will also be highlighted in the DoD and NCSA booths. http://www.osc.edu



Pacific Northwest National Laboratory
Booth R317

Productive use of the massively parallel advanced computing resources available to research scientists requires not only a revolution in computational methods for using these systems efficiently, but also a corresponding revolution in software tools: tools for managing complex computational experiments, for remote access, and for analyzing, tracking, and managing large, complex input and output data sets. The Environmental Molecular Sciences Laboratory (EMSL) is coming on-line at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington. This new national user facility and collaboratory is focused on environmental molecular science. EMSL has a new 512-node IBM SP system, along with an EMASS data storage system, located in the Molecular Science Computing Facility (MSCF). We have been developing software tools to use this new resource effectively and efficiently, as well as other computational resources made available by the Department of Energy.



Parallel Tools Consortium
Booth R201

Recent investigation has shown that, despite increasingly vocal demands from the HPC community for software support, parallel tool use within that community remains disappointingly low. The Parallel Tools Consortium (Ptools) brings together researchers, developers, and users from the federal, industrial, and academic sectors to improve the responsiveness of parallel tools to user needs. Ptools has assumed a leading role in the definition, development, and promotion of parallel tools that meet the specific requirements of users who develop scalable applications on a variety of platforms. Additionally, the High Performance Debugging Forum (HPDF) is a recent effort sponsored by Ptools. The HPDF goal is to define standards relevant to debugging tools for HPC systems. The HPDF effort, as well as other Ptools activities and prototypes, will be the highlight of the Ptools research exhibit at SC97. Members of the Ptools Consortium Steering Committee (which includes national laboratory technical staff, university researchers, software tool developers, and computer hardware vendors) will be available to discuss the Ptools mission and activities and to further promote the dialog between tool users and tool developers. http://www.ptools.org/



Pittsburgh Supercomputing Center
Booth R213

The Pittsburgh Supercomputing Center (PSC) is an independent national supercomputing center dedicated to providing academic, industrial, and government researchers with access to state-of-the-art high performance computing and communication resources. In addition, we provide our researchers with a flexible environment conducive to solving today's largest and most challenging computational science problems. Our research exhibit will demonstrate the capabilities of our resources, which include the latest massively parallel systems (Cray T3E/LC512 and T3D/MC512) as well as traditional parallel vector platforms (Cray C916 and J90s). We will feature a wide variety of demonstrations designed to showcase award-winning PSC research; particular areas of focus include weather modeling, seismology, atmospheric science, and computational biomedical research such as structural biology, pathology, and fMRI. We will also highlight our participation in a metacomputing project in collaboration with Sandia National Laboratories and the High Performance Computing Center in Stuttgart, Germany.



Real World Computing Partnership
Booth R218

We have been developing a parallel programming environment for workstation and PC clusters. A programming language (MPC++), an operating system (SCore-D), and a low-level communication library (PM) have been co-designed to achieve a time-shared high performance computing environment on clusters. Our latest PC cluster, consisting of 64 Pentium Pro processors connected by Myrinet, will be brought to the exhibit. We will not only demonstrate the capabilities of our system software but also show our parallel applications. One of these is an integrated system for protein structure analysis, which provides structural biology researchers with a framework for parallel programming in MPC++ on SCore-D for their particular protein structure analyses. As an example, the demonstration shows the performance of searching thousands of proteins in the Protein Data Bank (PDB) for protein structures and/or sequences homologous (i.e., similar) to a query protein, running on the PC cluster.



Saitama University
Booth R202

The main features of the Saitama University research exhibit are an educational program using visualization tools such as AVS and VRML, research activities using vector and parallel supercomputers, and activities in scientific visualization and networking. 3-D graphics and virtual reality systems are increasingly regarded as advanced tools for educational programs and research. Applications such as AVS and VRML are used in actual exercises for first- and second-year students, where it was found that younger students readily accept virtual reality and 3-D environments, perhaps because of their familiarity with 3-D video games at home. High performance computing using vector and parallel computers has been carried out in scientific and engineering studies and has been shown to yield detailed understanding, particularly when visualization is included. Analysis using motion pictures has also been examined and applied in fields such as sports education and welfare.



SC97 Education Program
Booth R210

The SC97 Education Program is designed to give K-12 teachers, administrators, university professors, educational software researchers and developers, community technology leaders, and others three days of intense exposure to the tools and techniques of high performance computing and high-speed networking as they can be applied to education. Drop by this exhibit to learn about the SC97 and SC98 Education Programs, and to find out how to become involved in outreach and education programs nationwide. http://www.supercomp.org/sc97



SC97 Research Exhibits
Booth R314

Drop by to learn how you can participate in a research exhibit at SC98, which will take place November 7-13, 1998 in Orlando, Florida, USA.
http://www.supercomp.org/sc97
http://www.supercomp.org/sc98



Scalable I/O Initiative
Booth R308

To achieve balance between compute power and I/O capability, the Scalable I/O Initiative has adopted a system-wide perspective to analyze and enhance the diverse, intertwined system software components influencing I/O performance. Using the Pablo performance analysis environment (http://www-pablo.cs.uiuc.edu/Projects/Pablo/pablo.html), application developers and computer scientists have cooperated to determine the I/O characteristics of a comprehensive set of I/O-intensive applications. These characteristics have guided the development of parallel I/O features for related system software components: compilers, runtime libraries, parallel file systems, high performance network interfaces, and operating system services (a small sketch of a parallel I/O call in this style follows the project URL below). The resulting software, built upon results from ongoing research projects sponsored by DARPA, DoE, NASA, and NSF, will be demonstrated using workstations and an Intel Paragon. Participating institutions include:

http://www.cacr.caltech.edu/SIO
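For a flavor of the parallel I/O interfaces such system software exposes (the sketch promised above), here is a minimal MPI-IO example in C. MPI-IO is one interface shaped by scalable-I/O research of this kind, though the file name and sizes here are invented for illustration. Each process writes its own disjoint block of one shared file in a single collective call, letting the runtime and parallel file system schedule the physical I/O.

    /* Collective parallel file write with MPI-IO. */
    #include <mpi.h>
    #include <stdio.h>

    #define NPER 1024                    /* ints written per process */

    int main(int argc, char **argv)
    {
        int rank, buf[NPER];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < NPER; i++)   /* rank-identifiable data */
            buf[i] = rank * NPER + i;

        /* All processes open the same file; offsets keep blocks disjoint. */
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_Offset off = (MPI_Offset)rank * NPER * sizeof(int);
        MPI_File_write_at_all(fh, off, buf, NPER, MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        if (rank == 0)
            printf("collective write of out.dat complete\n");
        MPI_Finalize();
        return 0;
    }

Run with, for example, mpirun -np 4 on an MPI-2 capable installation; assuming 4-byte ints, rank r's block occupies bytes r*4096 through r*4096+4095 of out.dat.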



University of Alaska Fairbanks
Booth R211

The Arctic Region Supercomputing Center (ARSC) supports the computational needs of academic, industrial, and government scientists and engineers with HPC resources, programming talent, technical expertise, and training. Scientific research performed at ARSC includes ocean modeling, atmospheric sciences, climate/global change, space physics, satellite remote sensing, and civil, environmental, and petroleum engineering. ARSC provides computational resources for research with emphasis on the high latitudes and the Arctic. ARSC, located at the University of Alaska Fairbanks, operates a CRAY Y-MP M98, a CRAY T3E and a network of SGI workstations in three visualization laboratories across the campus. Exhibit displays include: Arctic Ocean circulation and ice movement models, Alaska fly-by virtual reality based on SAR and satellite data, atmospheric modeling for weather prediction and contaminant transport, modeling of ionospheric dynamics during magnetic storms, and simulation of permafrost stabilization through subsurface cold air circulation.
http://www.arsc.edu
http://www.uaf.edu



University of California, Santa Barbara
Booth R109

Javelin: Internet-Based Parallel Computing Using Java. We present our research on a distributed, heterogeneous, high performance infrastructure for running coarse-grained parallel applications on intranets or potentially even the Internet. Our approach is based on recent advances in Internet connectivity and on environments that allow the safe execution of untrusted code, such as the Java VM. We propose an architecture that uses supply-and-demand and market mechanisms to motivate individual users to offer their resources. Our approach has the potential for running parallel supercomputing applications involving thousands of anonymous machines on the Internet. We have implemented a prototype based on Java that clearly shows the feasibility of our approach. Our system, called Javelin, is built on Internet software that is interoperable, increasingly secure, and essentially ubiquitous: it requires participants to have access only to a Java-enabled Web browser. A Javelin-based parallel raytracer running on a heterogeneous network will be demonstrated.
http://www.cs.ucsb.edu/~schauser/papers/96-superweb.ps
http://www.cs.ucsb.edu/~schauser/papers/97-javelin.ps



University of Sao Paulo--Integrated Systems Laboratory
Booth R316

SPADE-2 is a parallel machine based on commodity processing nodes (PN) and high-speed interconnection networks (IN). Research topics investigated in this project include architectural support for both the message-passing and shared memory models, NUMA/CC-NUMA/COMA shared memory architectures, SW-DSM, lightweight communication protocols, the design and implementation of high-speed, low-latency INs, fast and scalable collective operations implemented in software and with hardware support, and operating systems for CC-NUMA multiprocessors. Several prototypes are being constructed to demonstrate the key ideas of the project: i) efficient implementation, on networks of workstations, of the shared memory model through SW-DSM and of message passing through user-level lightweight protocols; ii) the design and implementation of a low-cost, high performance interconnection network with support for both shared memory and message passing; iii) a low-cost HPC system built from commodity SHV components. The project also includes the development of several high performance computing applications. http://www.lsi.usp.br/hpcac/spade2.html



University of Utah
Booth R205

The exhibit will showcase research in HPC and visualization at the University of Utah. A number of research centers or groups are involved, including:

http://www.chpc.utah.edu
http://www.chpc.utah.edu/sc97/



University of Virginia
Booth R206

The Legion Project at the University of Virginia is building metacomputing software to join thousands of machines, millions of users, and billions of objects. This software will join diverse administrative domains, including NSF PACI participants (NCSA, NPACI), universities (Caltech, UT Austin, UVa, etc.), and DoE national labs (Sandia, Los Alamos, and Livermore). Legion is also being deployed as part of the DoD Modernization program. We will demonstrate Legion version 1.0 running scientific computations on a metacomputer composed of systems from the above-mentioned sites. Our demonstrations will include distributed interactive applications, Web browsers for the Legion object space, and computationally intensive applications.



University of Wisconsin
Booth R313

Paradyn is a tool for measuring and analyzing the performance of parallel and distributed programs. It can measure large, long-running programs and provides facilities that help automatically find performance problems in parallel programs. Paradyn operates on executable files, dynamically inserting measurement code while the program is running, and can measure programs on a variety of operating systems and platforms, or on heterogeneous combinations of these systems. Paradyn handles PVM as well as SP2 MPL and MPI. We will exhibit Paradyn with a variety of applications and platforms, including:






    
