Science Technology Platforms


Emerald High Performance Computing Case Study


The focus of this case study is a UK High Performance Computing (HPC) facility called Emerald. Funded by the EPSRC and launched in spring 2012, Emerald is a large Graphics Processing Unit (GPU)-based supercomputer which facilitates computationally intensive experiments. As a collaborative venture between the Universities of Bristol, Oxford, Southampton and UCL (which together form the Centre for Innovation, or CfI), the cluster is of a significantly higher specification than any of the institutions would have been able to invest in individually. Emerald has driven cross-disciplinary academic, SME and industry engagement, and the partner institutions are actively working to train researchers and maximise utilisation of the resource. Continued investment will be necessary to sustain and develop Emerald in the future.



In March 2012, an EPSRC-funded High Performance Computing (HPC) facility called Emerald was launched. Emerald is a supercomputer built with Graphics Processing Unit (GPU) architecture, which at the time of launch was amongst the largest GPU-based systems in Europe, and remains the largest such system in the UK. It was launched jointly by the Universities of Bristol, Oxford, Southampton and UCL, which together form a consortium called the Centre for Innovation (CfI). The system is hosted and operated by the Science & Technology Facilities Council (STFC) in a strategic partnership with the CfI. The major aim of the CfI is to support the co-development and sharing of e-infrastructure capabilities (including hardware, software, people and skills) between the partners, and to develop links with other academic and industrial organisations.

Emerald supports all of these objectives and has greatly benefited research output, industry collaboration and the training and development of users.

Research highlights:

By providing access to significant High Performance Computing power, the Emerald cluster has enabled researchers to perform theoretical experiments on much shorter timescales. The outputs of these simulations can be used to guide physical experiments.

Important research highlights include:

  • UCL researchers are using the resource to simulate and predict the chemical processes that take place at the surfaces of metal and other materials.
  • Scientists at Bristol are investigating how mutations of a key enzyme in H1N1 (the 'Swine influenza' virus) lead to the development of resistance to current antiviral flu treatments.
  • Researchers at UCL are working with GPU specialists at Oxford to optimise the performance of a tsunami simulation code.
  • UCL scientists are simulating the effect of gene mutations linked to the spread of cancer. This can aid the development of more robust and effective cancer treatments.
  • Scientists at Imperial College London have been able to achieve unprecedented levels of accuracy in computational fluid dynamics, specifically relating to Unmanned Aerial Vehicles, allowing engineers to understand complex flow patterns and carry out aerodynamic design without flying an aircraft or even starting up a wind tunnel.

Industrial collaborations:

The CfI has actively engaged with industry through workshops at STFC and UCL, to publicise and promote the potential of GPU-based computing technology to industrial research applications. CfI has directly engaged with SMEs including NAG Ltd., Zenotech and Cresset Biomolecular Discovery Ltd. When allocating computing resource for Emerald, priority is given to collaborative work, especially between academic partners and industry.

Improved awareness, training and skills:

The CfI institutions are working hard to drive user engagement and facilitate training. NVIDIA, which manufactured Emerald's processors, offers training in CUDA, a programming model it developed to harness the power of GPU cores. This training is available across the CfI partner institutions, and a summer school is run every year at Oxford. Researchers are increasingly learning to code and are collaborating with internal and external software development teams to create and optimise algorithms that emulate real-life behaviours in a virtual world. Developing code that runs efficiently on many-core systems is a challenge, and researchers frequently request Emerald resource to 'pressure test' their code at scale.
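To give a flavour of the programming model taught in that training: CUDA extends C/C++ with "kernels", functions that run in parallel across thousands of GPU threads. The sketch below is a standard vector-addition example of the kind used in introductory CUDA courses; it is illustrative only and is not code from the Emerald project.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // guard: the last block may have spare threads
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Unified (managed) memory is accessible from both host and device.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();    // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);  // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Each thread does a trivial amount of work, but because the GPU runs very many of them concurrently, data-parallel workloads of this shape scale well. Much of the skill taught at the CUDA courses lies in restructuring scientific codes into this pattern.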

Oxford also holds regular lunchtime events to promote Emerald to its academic community. Total utilisation of the resource peaked in February 2014 at 85%, with individual institutions making greater use of their allocated portions: UCL's usage, for instance, grew from around 12% in mid-2013 to approximately 22% in Q1 2014.

Efficiency outcomes:

The Emerald facility is a Tier 2 (regional-scale) machine, and is of a significantly higher specification than any of the partner institutions would have been able to invest in individually. Operational costs are shared amongst the CfI members, and the resource is administered by a single team. This pooling avoids the need for each institution to invest separately in duplicated resources, and facilitates cross-fertilisation of ideas, knowledge and experience between the partners. Monthly usage metrics are circulated to all CfI members, encouraging institutions to make full use of their allocated resources. Moreover, key case studies are shared among the group, highlighting areas where benefits are being realised. Had this facility been implemented on a smaller scale or been more inward-looking, some institutions would not have developed the expertise required to leverage the technology fully, and knowledge would have remained within institutional silos.

The future of Emerald:

Capital funding from RCUK allowed the Emerald facility to be set up. Without this major investment in GPU architecture, the technology would have remained small-scale and unproven in a large, industrial environment.

Ongoing RCUK funding is critical to sustain and develop Emerald. One future challenge is the development of a set of common tools to assist the partner institutions in managing the resource. Currently, there is no funding in place beyond summer 2015, when the current system will reach end-of-life and be decommissioned.