KwGrid:Partners/Argonne National Lab
===Facilities and Equipment===
[[Image:kwGridANLLogo.png|left|ANL]] The [http://www.mcs.anl.gov MCS division] at [http://www.anl.gov Argonne] operates a significant computing environment in support of a wide range of research and computational science. User communities include local researchers, Argonne scientists, and the national scientific community. Argonne facilities include three major parallel computing clusters, visualization systems, advanced display environments, collaborative environments, high-capacity network links and a diverse set of testbeds.
As one of the five participants in the [http://www.globus.org/about/news/DTF-index.html NSF's Distributed Terascale Facility], MCS, in conjunction with the University of Chicago Computation Institute, operates the [http://www.teragrid.org TeraGrid]'s visualization facility.
The entire TeraGrid is a 13.6 TF grid of distributed clusters built on Intel McKinley (Itanium 2) processors, with over 6 TB of memory and more than 600 TB of disk space. The full machine is distributed among NCSA, SDSC, Caltech, the Pittsburgh Supercomputing Center, and the CI at Argonne. The individual clusters are connected by a dedicated 40 Gb/s link that acts as the backbone for the machine. Argonne's component of the TeraGrid consists of 63 dual IA-64 nodes for computation, 96 dual Pentium 4 nodes with Quadro4 900 XGL graphics accelerators for visualization, and 20 TB of storage.
Argonne operates a second supercomputer that is available to Argonne researchers and collaborators for production computing. This terascale Linux cluster has 350 compute nodes, each with a 2.4 GHz Pentium Xeon processor and 1.5 GB of RAM. The cluster uses Myrinet 2000 and Ethernet for its interconnect and has 20 TB of on-line storage in PVFS and GFS file systems.
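As an illustration of how such a production cluster is typically used, the minimal sketch below shows an MPI program of the kind researchers run across the compute nodes. It is not taken from Argonne documentation; the compiler wrapper and launch details are assumptions, and actual queue names and job-submission commands vary by installation.

<pre>
/* Minimal illustrative MPI program (a sketch, not Argonne's code): each
   process reports its rank and the node it is running on. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(host, &len);     /* node this process runs on */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
</pre>

In practice a program like this would be compiled with an MPI wrapper compiler (for example, mpicc) and submitted through the cluster's batch scheduler, running over the Myrinet interconnect via the site's MPI installation.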
In addition, Argonne has a cluster dedicated to computer science and open-source development called "Chiba City". Chiba City has 512 Pentium III 550 MHz CPUs for computation, 32 Pentium III 550 MHz CPUs for visualization, and 8 TB of disk. Chiba City is a unique testbed that is principally used for system software development and testing.
Argonne also has a substantial set of visualization devices, each of which can be driven by the TeraGrid visualization cluster, by Chiba City, or by a number of smaller dedicated clusters. These devices include a 4-wall CAVE, the [http://www-unix.mcs.anl.gov/~judson/projects/activemural ActiveMural] (a large-format tiled display of roughly 15 million pixels), and several smaller tiled displays such as the portable MicroMural2, which has roughly 6 million pixels.
Finally, Argonne currently supports numerous [http://www.accessgrid.org Access Grid] nodes, ranging from AG nodes in continual daily use to AG2 development nodes.
{{:kwGrid:Template/Footer}}