Science Overview and Research
Modelling the Universe: From Atomic to Large Scale Structures (Overview)
The main science of the Miracle Consortium is the connection between atomic- to large-scale physics and observable astrophysical quantities. For example, two recent research highlights have been the development of the successful gas and dust radiative transfer code MOCASSIN (Ercolano et al., MNRAS, 340, 2003) and the creation of the largest ever water line list by Barber et al. (MNRAS, 368, 2006). The consortium focuses on modelling astronomical environments from the upper levels of the Earth’s atmosphere, through planets, the stars and gas clouds of the galaxy, and the intra-cluster medium of galaxy clusters, to the fundamental questions of the cosmos. It collaborates closely with atomic and molecular physicists and chemists, who provide the computational data needed to model these processes. Recent highlights of research in this field include a publication in Nature, “An unexpected cooling effect in Saturn’s upper atmosphere” by Smith et al. (Nature, 445, 399 (2007)). These findings were made possible by the Miracle facilities.
The research program of the Miracle consortium is in line with four of the nine key questions given in the Science and Technology Facilities Council roadmap: “What is the universe made of and how does it evolve?”, “How do galaxies, stars and planets form and evolve?”, “How does the Sun affect the Earth?”, and “Are we alone in the Universe?”.
Current requirements are driven by the variety of problems presently under investigation, including: the optical properties of small molecules; 3D radiative (magneto-)hydrodynamical simulations of stellar formation, stellar structure and evolution; astrochemical simulations within 3D physical simulations, especially the polymerization of astrophysical dust particles; catalytic and optical properties of astronomical dust candidates; 3D spectral line and continuum radiative transfer; 3D simulations of planetary atmospheres, including extra-solar planetary atmospheres; and full hydrodynamical simulation of the intra-cluster medium.
Projected requirements are based on a variety of future problems involving a general move to 3D and the coupling of hydrodynamics with radiative and chemical processes, with an increasing need for atomic and molecular data in both the gas and solid phase. Future problems include the following: astro-biological modelling; detailed galaxy and star formation models; detailed stellar evolution codes – a true model HR diagram; detailed galaxy evolution codes; the evolution of stars and the ISM at very high redshift; the feedback effect of molecules on the early universe; putting microphysics into cosmic web simulations; and simulating the gas in galaxy clusters and producing synthetic catalogues (e.g. SDSS) to test cosmological models and provide mock catalogues, with synthetic spectra of galaxies, for future surveys such as DES.
Please note that the research presented here originates from the 2007 Miracle proposal. This page will be updated with current research in due course.
- Atmospheric Physics Modelling
- Extra-solar Planet Formation and Evolution
- Calculating Molecular Line Lists
- Hot Cores at High Redshift
- Modelling transient clumpiness in molecular clouds
- HII Regions and Star Formation
- Modelling the outputs from the JCMT Spectral Legacy Survey
- Dusty Circum-stellar Environments
- Calculation of dust extinction coefficients and dust evolution
- Pumping of OH and H2O masers
- Modelling the Interacting winds in STARBURST Regions
- Probing the structure of dark matter halos
- Galactic Chemical Enrichment
- Simulations of Groups and Clusters with FLASH
- Galaxy Cluster and Weak Lensing Catalogues
Atmospheric Physics Modelling (UCL: Achilleos, Aylward, Guio, Miller, Spain, Tinetti; Imperial: Mueller-Wodarg)
The Atmospheric Physics Laboratory at UCL, together with colleagues at Imperial College, currently runs the world’s most extensive and productive suite of planetary upper atmosphere models. Originally built around modelling the Earth’s thermosphere and ionosphere, the work has been extended to the neutral upper atmospheres and ionospheres of Mars, Jupiter, Saturn, Titan (Triton) and extra-solar gas giants. Underlying recent developments has been a trend to make the codes modular, rather than planet-specific, and to couple different atmospheric and plasmaspheric regions. All the atmospheric models are based on solving the Navier-Stokes equations on a latitude-longitude-pressure-level grid (pressure rather than height, because it simplifies the equations). This means solving non-linear differential equations of energy, momentum and continuity for each cell of the grid at each time step. Typically the models have 200,000 cells. The "basic" model is a neutral atmosphere, which we solve for neutral winds, temperature and composition. Associated with this is nearly always an ionosphere, which can be embedded on the same grid (e.g. Mars) or in a separate plasma-tube-based coordinate system (Earth, Saturn). The time steps vary from a few seconds (e.g. Jupiter, extra-solar planets) to a minute (Earth) or more (Titan), depending on the key chemical reactions on the body in question.
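The grid-based time stepping described above can be sketched in miniature. The following toy loop (the grid sizes, field values and relaxation "physics" are invented for illustration; this is not CTIP/CMAT2 code) shows the basic pattern of advancing a field on a latitude-longitude-pressure-level grid:

```python
import numpy as np

# Toy sketch of explicit time stepping on a latitude-longitude-
# pressure-level grid, in the spirit of the GCMs described above.
n_lat, n_lon, n_lev = 91, 20, 15           # ~27,000 cells; real models use ~200,000
dt = 60.0                                  # time step in seconds (Earth-like)

# State: neutral temperature on the 3-D grid, initialised isothermally.
T = np.full((n_lat, n_lon, n_lev), 800.0)  # kelvin

def step(T, dt, T_eq=1000.0, tau=3600.0):
    """Advance one explicit step: relax each cell toward an equilibrium
    temperature T_eq on a timescale tau (a stand-in for the coupled
    energy/momentum/continuity solve done per cell in the real codes)."""
    return T + dt * (T_eq - T) / tau

# One model hour = 60 explicit steps of one minute each.
for _ in range(60):
    T = step(T, dt)
```

In the real codes the per-cell update is a coupled non-linear solve rather than a single relaxation, but the outer structure (grid of cells, explicit step, time loop) is the same.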
The terrestrial model has given rise to several versions. The two most important of these are CTIP (Coupled Thermosphere Ionosphere and Plasmasphere model) and CMAT2 (Coupled Middle Atmosphere and Thermosphere Model). CTIP has a lower boundary at 80 km altitude but includes fully coupled global electrodynamics for the ionosphere, enabling it to simulate the complex behaviour in the equatorial electrojet, while its auroral particle and electric field drivers are continually being developed to give high spatial and temporal resolution for detailed Space Weather studies. CMAT2 is a development of an earlier model, CMAT, which took the lower boundary down into the middle atmosphere, increased the vertical resolution and added middle-atmosphere minor chemistry. CMAT2 takes these ideas further and, as it incorporates most of what is in CTIP, will replace CTIP in the future for many of our studies. CMAT2 has been rewritten from scratch using F95 and CVS version control, and has been thoroughly modularised.
CMAT2 is a variable-resolution (in height, latitude and longitude) model, which can have a lower boundary at any height from 15 km altitude upwards. It has a variety of lower boundary models (from fixed, through the simple parameterised MSIS, to the empirical NCEP datasets) and can be run with self-consistent or parameterised ionospheres. It can be run with or without minor constituent chemistry (ozone, odd-hydrogen, atomic nitrogen etc.), with or without gravity wave parameterisations, and with or without tidal inputs. In its largest configurations it requires over 2 Gbytes of memory. Since a key 30% of this code is currently serial, the model still runs on a single processor. Individual processor speed is thus vital, as it can take 4-5 CPU-hours to run a single model day and we usually need 30 model-day runs to converge. The model makes use of the availability of multiple processors for parameter searching, in which individual runs are given a variety of inputs.
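The "parameter searching" use of multiple processors lends itself to a simple sketch: many independent serial runs farmed out with different inputs. Here `run_model` is a hypothetical stand-in for a full model run, and the parameter names (F10.7 flux, Kp index) and the fake output are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# One serial model run per parameter combination; the real work would be
# a full multi-hour CMAT2 integration, here replaced by a toy formula.
def run_model(params):
    f107, kp = params  # e.g. solar flux and geomagnetic activity indices
    return {"f107": f107, "kp": kp, "T_exo": 700.0 + 3.0 * f107 + 50.0 * kp}

# Grid of input parameters, one model run per combination.
grid = list(product([70, 120, 200],   # F10.7 solar flux values
                    [1, 3, 5]))       # Kp geomagnetic activity values

# Farm the independent runs out across workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_model, grid))
```

Because each run is independent, this pattern scales trivially to as many processors as there are parameter combinations, which is exactly why a serial code can still exploit a multi-processor facility.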
Converting the terrestrial models to simulate Mars would seem (on paper) to be the simplest task, since Mars has mainly a thermosphere with a chemical-balance embedded ionosphere. But the challenges are considerable, largely because of the non-thermodynamic-equilibrium nature of carbon dioxide heating and cooling. The model is also particularly dependent on its lower boundary, and a lot of work has been put into coupling the model with the Mars Climate Database 60 km altitude surface (from Oxford University). Mars’ atmosphere displays a lot of small-scale motions, so getting usable tidal and gravity wave dynamics working is another important development we are part way through. These developments will mean improving the time resolution and increasing the model size, with a concomitant effect on the machine resources and/or time required. We are also embarking on a programme for developing the topside interaction with the solar wind, which controls the exospheric structure and the topside ionospheric profile. This will mean adding code to the model - the code for generating the particle fluxes and dynamics of the interaction region might well double the size of the current model.
Giant planets are modelled in two ways:
- The Jovian Ionosphere Model (JIM) was developed, in the 1990s, from the CTIP code. It is extremely comprehensive and has high time resolution (because the key reactions require 4 second time resolution).
- Simpler versions of the codes have now been developed, based on a modular thermospheric code originally written for Titan as a “pure” thermosphere.
JIM solves the chemical equations of the atmosphere in real time and takes several CPU-days to run one Jovian revolution. Thus it is used on an as-required basis to simulate and interpret observations of the Jovian auroral/polar ionosphere, to determine conductivities and calculate overall energy inputs into the upper atmosphere.
The original Titan code was developed to be used as a planning tool for the Cassini mission flybys of Titan. It enabled the planning team to estimate how close they could fly to Titan and to plan which measurements to make. Cassini/Huygens’ exploration of Titan’s upper atmosphere has posed similarly challenging problems. In-situ observations of ions and neutral gases above 950 km altitude (Yelle et al., Icarus, 182, 567 (2006); Mueller-Wodarg et al., J. Geophys. Res., 111, A12315 (2006)) show that Titan’s upper atmosphere is highly variable from one Titan flyby to another. This variability is partly due to horizontal structures in the atmosphere and partly due to changes with time, which result from solar and magnetospheric forcing of Titan’s upper atmosphere. The Titan Thermosphere General Circulation Model (Mueller-Wodarg et al., J. Geophys. Res., 105, 20833 (2000); Mueller-Wodarg et al., J. Geophys. Res., 108, A121453 (2000)) calculated for the first time the global structure of Titan’s thermosphere above 600 km altitude, assuming solar heating as the main energy source. Comparisons between these simulations and the observations by the Ion Neutral Mass Spectrometer (INMS) of Cassini have shown that Titan’s thermosphere cannot be understood assuming solar heating alone. Strong latitudinal gradients in densities are found, which imply the presence of super-rotation in Titan’s upper atmosphere (Mueller-Wodarg et al., J. Geophys. Res., 111, A12315 (2006)). At Imperial College we have concentrated on determining the horizontal structure of Titan’s thermosphere from the INMS observations, and the next step will be to understand the dynamics that accompany these horizontal density structures. A key step towards understanding the global dynamics on Titan is to assess the momentum sources, such as super-rotation from below, upward propagating waves and magnetospheric forcing.
We propose to extend the existing Titan GCM from 600 km down to 300 km, including the key processes of radiative forcing by aerosols and hydrocarbons as well as radiative transfer calculations to calculate cooling rates due to HCN rotational and hydrocarbons vibrational bands. These calculations are computationally challenging and require a high performance facility, as proposed here.
For Saturn, the Saturn Thermosphere Ionosphere Model (STIM) has been developed in conjunction with the Universities of Boston and Arizona. Work so far has focused on understanding the global structure of the neutral upper atmosphere (thermosphere), the energy sources and horizontal structures in temperature and composition (Moore et al., Icarus, 172, 503 (2004); Smith et al., Ann. Geophys., 23, 2465 (2005); Mueller-Wodarg et al., Icarus, 180, 147 (2006)). This work has produced the ranges of dynamics that are to be expected in Saturn’s thermosphere. To date, no observational evidence has been available to validate the model globally. The next step is to extend these calculations to the global ionosphere of Saturn, for which Cassini Radio Occultation observations are available (Nagy et al., J. Geophys. Res., 111, A06310 (2006)). This poses a serious challenge to computing power: the explicit time integration method used in our codes requires time steps of one second or below, with typical calculations of the coupled model having to be carried out for no less than 10 Saturn rotations in order to reach equilibrium conditions. Our aim is to be able to reproduce the vertical distribution of electrons measured in Saturn’s ionosphere by Cassini and previously by the Voyager and Pioneer missions, and for the first time to be able to understand their strong variability, which photochemical models have so far failed to reproduce. The key to solving this problem lies in including self-consistently the full thermosphere-ionosphere coupling processes, as well as the effects of electric fields in the polar regions, which are induced by the solar wind-magnetosphere interaction; we propose to carry this out using the new high performance computing facility. The complexity of this coupled code makes the use of this facility paramount.
Simplified 2-d and 1.5-d versions have recently been used to examine the contributions of Joule heating and ion drag to the thermospheric body temperature (Nature, 445, 399 (2007)). Another version of this model - called KIM (Kronian Ionosphere Model) - uses a plasma-tube-based thermospheric code (derived from the CTIP plasma tubes) to extend the ionosphere out into the Saturnian plasmasphere; this is being used to develop the chemistry of the interaction with the Saturnian rings.
A fundamentally important aspect of the environments of magnetized planets is that they consist of strongly coupled subcomponents. In particular, the ionospheres and magnetospheres of rapidly-rotating planets (e.g. Jupiter and Saturn) continually exchange angular momentum through the mega-Amp systems of electric currents which connect them. Modelling these coupled systems in a self-consistent manner represents the next important stage required to improve our understanding of the underlying physics. To date, models of magnetosphere-ionosphere coupling have usually greatly simplified at least one ‘end’ of the ‘interface’ – for example, static ionospheres embedded in time-dependent magnetospheric models, or simplified magnetospheric structures forcing the flows in a model ionosphere. To take the next crucial step and model these systems in a more realistic and self-consistent manner, we intend to develop distributed models – systems of software components that interact with each other. The exchange of information between the interacting models thus mirrors the actual process of M-I coupling in nature. This architecture is not only more reflective of the system being modelled; it also allows more efficient and independent development of each component (e.g. one ionosphere model, one magnetosphere model and a suitable interface).
As part of this project, we plan to make use of PICSIM, a C++ Particle-In-Cell framework for kinetic two-and-a-half-dimensional (2R3V) and fully three-dimensional (3R3V) electrostatic plasma simulations. PICSIM is highly scalable and modular. It allows calculation of the dynamics at different timescales through three main classes of simulator:
1. One or several electron particle distributions in a constant ion background.
2. One or several ion species in a Boltzmann or near-Boltzmann mass-less electron gas.
3. Full dynamics of both electrons and ions (one or several species of each).
All simulators have the same modular capability and can be configured through a simple description file to allow for a variety of physical conditions and inputs.
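A minimal one-dimensional electrostatic PIC loop conveys the cycle (charge deposit, field solve, particle push) that a framework such as PICSIM generalises to 2R3V and 3R3V. This sketch is not PICSIM code; it corresponds to simulator class 1 above (electrons in a fixed, neutralising ion background), in normalised units, with all numbers invented:

```python
import numpy as np

# Minimal 1-D electrostatic particle-in-cell sketch with periodic
# boundaries: deposit charge, solve Poisson's equation by FFT, push.
rng = np.random.default_rng(0)
n_cells, n_part = 64, 4096
L = 2.0 * np.pi
dx, dt = L / n_cells, 0.1

x = rng.uniform(0.0, L, n_part)            # electron positions
v = rng.normal(0.0, 0.05, n_part)          # electron velocities

def solve_field(x):
    """Deposit charge on the grid (nearest grid point) and solve
    Poisson's equation with an FFT under periodic boundaries."""
    counts, _ = np.histogram(x, bins=n_cells, range=(0.0, L))
    rho = 1.0 - counts / (n_part / n_cells)        # ions (+) minus electrons (-)
    k = 2.0 * np.pi * np.fft.fftfreq(n_cells, d=dx)
    phi_k = np.zeros(n_cells, dtype=complex)
    phi_k[1:] = np.fft.fft(rho)[1:] / k[1:] ** 2   # from d2phi/dx2 = -rho
    return np.real(np.fft.ifft(-1j * k * phi_k))   # E = -dphi/dx

for _ in range(50):                        # simple time loop
    E = solve_field(x)
    idx = (x / dx).astype(int) % n_cells   # gather field at particle cells
    v += dt * (-1.0) * E[idx]              # electron charge is -1
    x = (x + dt * v) % L                   # periodic boundaries
```

A production framework adds higher-order particle shapes, multiple species, and domain decomposition for scalability, but the deposit/solve/push skeleton is the same.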
As part of the development of inputs for the GCMs, a Monte Carlo code for simulating the effects of auroral precipitation was developed. This has since been generalised to include the effects of all types of particles - both ions and electrons - as well as fluxes of photons. The background atmosphere into which this precipitation occurs can be varied to simulate different planets, and the code includes a large database of interaction cross-sections for all possible atmospheric species.
The code is called DICE (Dissociation, Ionisation, Charge-exchange and Energisation) or RIDE (Recombination, Ionisation, Dissociation and Energisation), depending on whether the inputs are photonic or particulate. It is 1-d, although it does allow for recoil particles travelling back upwards. A 3-d version (looking, for example, at the horizontal spread of non-ionised species after collisions) is also under development. Any energy spectrum of particles can be used as the input at the top of an atmosphere. The vertical resolution is determined by the local physics and the accuracy needed. Typically hundreds of thousands of input particles are used in each simulation, and the full spectrum of secondaries is followed down in energy until they join the local thermal distribution. The calculation gives ionisation rates, the final distribution (in composition and energy) of the different atomic and molecular species, plus the heat generated over the interaction altitude range.
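The energy-degradation cascade described above can be caricatured as follows. The energy-loss and ionisation numbers are invented (only loosely air-like), and real DICE/RIDE runs also follow the secondary particles, which this sketch omits:

```python
import random

# Toy Monte Carlo energy-degradation cascade: each input particle loses
# energy in discrete collisions, some of which ionise, until it drops
# below the local thermal energy and joins the thermal population.
random.seed(42)

E_THERMAL = 0.1    # eV: particle joins the thermal population below this
W_ION = 35.0       # eV: assumed mean energy lost per collision

def cascade(e0):
    """Follow one particle down in energy; return ionisations produced."""
    ionisations = 0
    e = e0
    while e > E_THERMAL:
        if random.random() < 0.7:          # assumed fraction of ionising collisions
            ionisations += 1
        e -= random.uniform(0.5, 1.5) * W_ION
    return ionisations

# A mono-energetic beam of 10 keV input particles (real runs use
# hundreds of thousands of particles drawn from an energy spectrum).
rates = [cascade(10_000.0) for _ in range(1000)]
mean_ionisation = sum(rates) / len(rates)
```

Tallying where in altitude each collision occurs, and what species it hits, is what turns this skeleton into the ionisation-rate and heating profiles the text describes.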
Extra-solar Planet Formation and Evolution (UH: Gledhill, Jones, Lucas; UCL: Aylward, Barlow, Miller, Tinetti, Yates)
The discovery of extra-solar planets has provided a serious challenge for the theories of planet formation and evolution. None of the new planetary systems discovered so far resembles our own Solar System. The Miracle Consortium is therefore engaged in a multi-faceted programme to understand these objects. This work ties in with the involvement in extra-solar planet space missions brought to UCL particularly through the appointment of Giovanna Tinetti as one of only three PPARC Aurora Fellows in the country.
The University of Hertfordshire (Jones) is using high-precision radial velocity measurements made with the Anglo-Australian Telescope to detect and characterise new planets and planetary systems. Before we can compare our observations with theoretical predictions of both the number and properties of the extra-solar planets, however, we require a detailed understanding of the selection effects involved in our observations. To do this, we have simulated our radial velocity data for each of our 250 targets; for each star in our catalogue there are more than 15,000 simulations to analyse. The field of extra-solar planets is very fast moving and we are acquiring new data on a regular basis, so the simulations must be run in parallel to allow the timely completion of this project. Simulations of this kind have never been carried out by any extra-solar planet search team, making this a world-leading project.
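A selection-effect simulation of this kind can be sketched as follows: inject a planet signal into synthetic velocities at the observing epochs, add noise, and ask how often a detection statistic recovers it. The epochs, noise level and detection criterion here are invented for illustration, not those of the AAT programme:

```python
import numpy as np

# Toy injection-and-recovery test for radial-velocity selection effects.
rng = np.random.default_rng(1)
epochs = np.sort(rng.uniform(0.0, 1000.0, 40))  # days of observation
sigma = 3.0                                     # m/s measurement noise

def detected(K, period, n_trials=200):
    """Fraction of realisations in which an injected circular-orbit
    signal of semi-amplitude K (m/s) stands out above the noise."""
    hits = 0
    for _ in range(n_trials):
        phase = rng.uniform(0.0, 2 * np.pi)
        signal = K * np.sin(2 * np.pi * epochs / period + phase)
        rv = signal + rng.normal(0.0, sigma, epochs.size)
        if rv.std() > 1.5 * sigma:              # crude detection criterion
            hits += 1
    return hits / n_trials

# A 30 m/s signal is essentially always recovered; a 1 m/s one is not.
strong, weak = detected(30.0, 12.0), detected(1.0, 12.0)
```

Repeating this over a grid of masses and periods for each star yields the completeness map needed to compare the survey with theoretical planet populations; a real analysis would use a periodogram rather than a simple RMS threshold.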
Following the recent successful first science run of PLANETPOL on the WHT, we are developing new Monte Carlo (MC) light scattering codes to model the polarisation of reflected light from extra-solar planets, e.g. polarised light from ‘hot Jupiter’ planets such as 51 Peg. Combining this with Msin(i) measurements from radial velocity data provides the orbital inclination of the planet and hence the planet’s mass. Detailed modelling will determine the nature of the scattering particles and the geometric albedo as a function of wavelength; this will give the size of the planet.
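The mass determination described above is a one-line combination of the two measurements: radial velocities give Msin(i), polarimetry gives the inclination i, and dividing out sin(i) yields the true mass. The numbers below are merely illustrative (roughly 51 Peg b-like), not measured values:

```python
import math

def true_mass(msini_mjup, inclination_deg):
    """True planet mass (Jupiter masses) from M*sin(i) and inclination."""
    return msini_mjup / math.sin(math.radians(inclination_deg))

# Hypothetical example: Msin(i) = 0.47 Mjup with a near-edge-on orbit.
m = true_mass(0.47, 80.0)
```

For an edge-on orbit (i = 90 degrees) the minimum mass is the true mass; as the inclination decreases, the true mass grows as 1/sin(i), which is why the polarimetric inclination is the missing ingredient.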
Currently, the Atmospheric Physics Laboratory has a strong programme of work on extra-solar planet atmospheres. This has so far produced one PhD and two papers on the modelling of extra-solar gas giants, carried out using a version of our Jovian models (EXOTIM) placed progressively closer to the "sun". As more and more detailed observations are made of extra-solar planets (for example of HD209458b, a transiting Jovian-like planet), there is a growing need for detailed modelling to inform the debate about what the observations mean for the structures and dynamics of these bodies. The high temperatures of these planetary atmospheres mean that the simulations usually require high time resolution and detailed chemical schemes, so these are becoming big users of computer CPU-time. This will expand with the arrival in the group of a new Aurora Fellow and a PhD student, who will be developing codes for extra-solar terrestrial-sized planets, concerned in particular with modelling typical atmospheres and their emission profiles. We will work on one- and three-dimensional models of the atmospheric dynamics, including the contributions of convection, transport, cloud microphysics and radiative transfer, and on line-by-line radiative transfer codes to simulate the spectrum of a planetary atmosphere at multiple wavelengths.
Calculating Molecular Line Lists
The spectra of cool stars, both dwarfs and giants, as well as those of smaller, sub-stellar objects, are completely dominated by molecular absorptions. Without a comprehensive understanding of these spectra it is impossible to determine even the most basic parameters of these objects, such as temperature or surface gravity. Molecules with more than two atoms can have huge numbers of individual vibration-rotation transitions. These lines have a blanketing effect, at least for some wavelength regions, on the stellar spectrum and are too numerous to be measured individually in a laboratory experiment. The group at UCL has been performing a systematic series of calculations on the line opacity of key molecules.
For example, the BT2 water line list (Barber et al. 2006) is a major advance in both accuracy and completeness. It has allowed water spectra to be modelled in a range of astrophysical objects, including cool stars (e.g. Jones et al. 2005), the nova-like object V838 Mon, in which water was detected for the first time (Banerjee et al. 2005), and extrasolar planets (Tinetti et al., to be published), and has provided a major observational handle on cometary comae (Dello Russo et al. 2004, 2005; Barber et al. 2007). The improved HCN/HNC line list has led to the identification of HNC in the spectra of carbon stars (Harris et al. 2006), potentially an important thermometer, and is being applied to the determination of isotopic abundances.
We plan the following opacity calculations using the codes DVR3D, ROTLEV and XY3:
1. C3 molecule: important for the spectra of carbon stars;
2. HDO: important for modelling a possible deuterium test of whether ultra-low-mass stars have reached the deuterium-burning phase;
3. Ammonia: important for models of brown dwarfs, particularly the yet to be detected Y-dwarfs or ammonia stars;
4. Acetylene: important for carbon star models. The acetylene content of the earth’s atmosphere is also predicted to rise very significantly over the next half century.
Hot Cores at High Redshift
We have developed a two-phase, time-dependent model to study chemical reactions and molecular abundances for massive stars in the Milky Way. This project will extend this model to nearby and high redshift galaxies. In the first phase, the model deals with the collapse of an interstellar cloud. Parameters may be varied from run to run, giving a range of physical conditions, such as starting temperatures, molecular abundances, and the density, size and visual extinction of the cloud. Another variable is the cosmic ray ionisation rate, which is now known to differ (by at least an order of magnitude) between dense and diffuse clouds. During this phase, molecules are allowed to freeze out onto interstellar grains. The second phase commences when an arbitrary (user-supplied) critical density is reached, corresponding to the birth of the central star. At this point, the code computes the evaporation of molecules from the grains, and the subsequent chemistry that occurs in the gas. Model outputs include time-dependent molecular abundances, and the properties of the stellar cores (age, size, mass etc.).
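The two-phase structure can be sketched with a single "species" in place of the full reaction network. All rates, the collapse law and the critical density below are invented for illustration:

```python
# Toy two-phase model: phase 1 is collapse with freeze-out onto grains;
# phase 2, triggered at a user-supplied critical density, is evaporation
# of the ices back into the gas after the central star is born.
def run(n0=1e4, n_crit=1e7, dt=10.0, k_freeze=1e-5, k_evap=5e-2):
    n, gas, ice = n0, 1.0, 0.0   # density and normalised abundances
    # Phase 1: collapse with freeze-out.
    while n < n_crit:
        n *= 1.001                       # stand-in for free-fall collapse
        frozen = k_freeze * gas * dt
        gas, ice = gas - frozen, ice + frozen
    # Phase 2: critical density reached; ices evaporate.
    for _ in range(200):
        evaporated = k_evap * ice * dt
        gas, ice = gas + evaporated, ice - evaporated
    return gas, ice

gas, ice = run()
```

The real model evolves hundreds of coupled species with temperature-dependent rates, but the switch at a user-supplied critical density, and the transfer of material between gas and grain reservoirs, follow this pattern.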
Our model is based on a well-established picture of massive star formation in the Milky Way Galaxy. Massive stars are found to be associated with molecular-rich dense, warm condensations of gas and dust, with abundances of species that are quite distinct from those found in quiescent interstellar molecular clouds. At high redshifts, the general picture remains the same but there are some very significant specific differences that need to be explored both computationally and observationally. At high redshift, the metallicity (i.e. the abundance of elements such as carbon, oxygen, nitrogen, sulphur, and iron, arising from the first generation of stars) is very greatly reduced compared to that in the Milky Way Galaxy, and the relative abundances (e.g. carbon relative to oxygen) are also different, depending on the precise nuclear processes occurring in the first generation of stars. The dust plays a fundamental role, and the size of the dust grains and their abundance relative to the gas is also of crucial importance. The dilution factor, by which the matter ejected from the first generation of stars is mixed with the ambient interstellar medium from which the second generation of stars was formed, is currently completely unknown. We plan to explore the significance of all these and other factors through the modelling and detections of molecular radiation.
Over the next few years, this project therefore aims to explore the sensitivity of these predictions to variations in the chemical and physical nature of the gas and dust, which we know is very different from our own Galaxy. We will also explore the timescales of the process of star formation in high red shift galaxies, again likely to be very different from those of our own Galaxy. The deliverables of this project are particularly important in light of ALMA and Herschel.
Modelling transient clumpiness in molecular clouds
This project involves the use of time-dependent chemical models and non-LTE molecular line radiative transfer codes to simulate the physical and chemical structure of the clumps seen ahead of Herbig-Haro objects. This work enables us to infer the clumpiness of molecular clouds from single-dish and, when possible, interferometric maps of a large sample of HH objects. From this, we obtain a statistically meaningful measure of clumpiness in molecular clouds, in terms of filling factors and mass spectrum. In order to test our models, so far we have fitted observations of specific objects or of a small sample of HH objects, by picking individual chemical model runs out of a large (>300) grid of chemical models (parameter search grids). We are now developing the following:
- Coupling the computed grid with a radiative transfer model (SMMOL). We established the coupling techniques in previous projects (see publication list below). Each chemical model calculates the fractional abundance, as a function of depth and time, of over 200 species and, for a set of specific input parameters, produces one output per model; this is then split into N×M outputs, where N is the number of time steps (usually ~500) and M is the number of molecules for which the radiative transfer is wanted (~6). This will require 500 × 6 × 300 models to be run.
- In order to account for the randomness and transient nature of clumpiness, a new chemical model has been developed in which the chemistry in each clump responds to the dynamical state of the gas and to the gas-dust interaction. As the physical parameters and time-dependent variability of these objects are relatively unknown, numerous individual runs will be required, varying the radiation intensity, the density, and the time at which illumination commences. A reasonable grid will comprise ~300 models. These in turn will be fed to SMMOL as described above.
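The bookkeeping implied by the 500x6x300 figure can be sketched as a job list: every (chemical model, time step, molecule) combination becomes one radiative-transfer run. The molecule names below are assumptions for illustration, not the project's actual selection:

```python
from itertools import product

# Each chemical model output is split into N time steps x M molecules,
# and each combination becomes one SMMOL radiative-transfer job.
N_MODELS, N_TIMES = 300, 500
MOLECULES = ["CO", "CS", "HCO+", "SO", "H2CO", "CH3OH"]  # assumed set of 6

jobs = [{"model": m, "t_step": t, "molecule": mol}
        for m, t, mol in product(range(N_MODELS), range(N_TIMES), MOLECULES)]
```

Enumerating the grid this way makes the scale of the computation explicit (900,000 independent jobs) and shows why the runs are trivially parallel across a multi-processor facility.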
HII Regions and Star Formation
Two codes, UCL_PDR and MOCASSIN, will be used to model giant HII regions and star formation complexes in our own and other galaxies, with the aim of interpreting Spitzer and particularly Herschel data to derive reliable values for their total masses and luminosities, their elemental abundances, and the mass distributions of the stars that power them. Elemental C/O ratios within galaxies are a particularly powerful tracer of their evolution, but carbon abundance determinations have usually required observations of ultraviolet carbon lines, which are inaccessible for most giant HII regions and star formation complexes due to strong dust extinction. The far-infrared and sub-millimetre region includes important carbon fine-structure lines, [CII] at 158 μm and [CI] at 369 and 609 μm, which have not hitherto been used as carbon abundance diagnostics. They are emitted strongly by photo-dissociation regions (PDRs), as are the fine-structure lines of [OI] at 63 and 146 μm. ESA’s Herschel Space Observatory, using its PACS, SPIRE and HIFI spectrometers, will observe these lines towards many galactic and extragalactic sources. We intend to use observed line flux ratios to derive C/O ratios for a wide variety of sources.
To do the above science, we are developing a three dimensional Monte Carlo code that merges MOCASSIN (a 3D Monte Carlo photo-ionization and dust radiative transfer code, developed as part of the current Miracle project) and UCL_PDR, a code that treats the many atomic and molecular processes that occur in PDRs. The merged code will be capable of treating the ionized, PDR and dust components of star formation complexes having multiple exciting sources. It will be used (a) for the detailed modelling of individual complexes, such as M42/Orion-KL and M82, and (b) to create an exhaustive grid of models that can be used with a parameter-search engine to match the observed spectra of star-forming galaxies.
Modelling the outputs from the JCMT Spectral Legacy Survey (UCL: Rawlings, Viti, Yates; Manchester: Fuller; UH: Thompson, Chrysostomou)
Stars form in the densest, coldest, most quiescent regions of molecular clouds. Molecules provide the only probes that can reveal the dynamics, physics, chemistry, and evolution of these regions, but our understanding of the molecular inventory of sources and how this is related to their physical state and evolution is rudimentary and incomplete. The Spectral Legacy Survey (SLS; PI Fuller) is one of seven surveys recently approved by the James Clerk Maxwell Telescope (JCMT) Board of Directors. Beginning in 2007, the SLS will produce a spectral imaging survey of the content and distribution of all the molecules detected in the 345 GHz atmospheric window (between 332 and 373 GHz) toward a sample of five sources. Our intended targets are a low-mass core (NGC 1333 IRAS 4), three high-mass cores spanning a range of star-forming environments and evolutionary states (W49, AFGL 2591, and IRAS 20126), and a photo-dissociation region (the Orion Bar). The SLS will use the unique spectral imaging capabilities of HARP-B/ACSIS (Heterodyne Array Receiver Programme B/Auto-Correlation Spectrometer and Imaging System) to study the molecular inventory and the physical structure of these objects, which span different evolutionary stages and physical environments and to probe their evolution during the star formation process. As its name suggests, the SLS will provide a lasting data legacy from the JCMT that is intended to benefit the entire astronomical community. As such, the entire data set (including calibrated spectral data cubes, maps of molecular emission, line identifications, and calculations of the gas temperature and column density) will be publicly available.
Yates (UCL) is coordinating the effort to produce the model data products, listed below, of the chemistry and physical conditions in these sources. This work will also involve collaborators from QUB (Millar), RAL (White) and Cambridge (Richer). The UCL codes SMMOL, MOCASSIN and UCL_PDR will be used to model the continuum and spectral line data. It is hoped that the merged MOCASSIN and UCL_PDR code can be used to model the PDR region.
This survey will provide key insights into the role of interstellar chemistry in the star formation process, particularly deuterium chemistry, nitrogen chemistry, oxygen chemistry, complex organics and hydrides. It will produce maps of the chemistry as a function of distance from the central source; insights into the chemistry of protostellar outflows; the composition of clouds between the source and ourselves; and the roles of PDR chemistry and X-ray driven chemistry. For the sources involved it will produce maps of temperature and density as a function of distance from the source, together with the radiation field properties; source structure, including cavities; turbulence; magnetic field structure; and the electron fraction. These are key data products if we are to understand the star-formation evolutionary stage, the initial mass function (IMF), the role of feedback on star formation and ISM evolution, and cluster or clump properties.
Dust is a component of the environments of stars which is present throughout most of their lifetimes, from their formation as proto-stellar objects, through the development of planetary systems, to the production of new dust in the outflows from evolved stars. A key tool for studying circumstellar environments is, therefore, the observation and modelling of scattered and emitted light from dust grains. This requires the use of sophisticated codes treating radiative transport in optically thick 3-dimensional environments, based on realistic non-spherical and non-homogeneous dust particle models.
Young Stellar Objects
Magnetic fields play a crucial role in regulating accretion onto YSOs and
in driving outflows and jets. One of the few techniques for determining the
magnetic field structure in the envelopes around YSOs is to look for the
effects of scattering and absorption by magnetically aligned dust grains, which
produce both linear and circular polarization. This requires RT models with a
full treatment of polarized light propagation in optically thick environments,
as well as the scattering and absorption of light by non-spherical, aligned
dust particles (see Lucas et al. 2004, MNRAS 352, 1347, Lucas et al. 2003,
JQSRT 79, 921). The incidence of high circular polarization is also of great
interest as a mechanism for inducing an intrinsic enantiomeric excess in
organic chiral material, which may subsequently be delivered to Earth, such as
detected in the Murchison meteorite (see Lucas et al. 2005, Origins of Life and
Evolution of Biospheres, Volume 35, Issue 1, pp.29-60). The Monte Carlo codes
are optimized and parallelized for runs on multi-processor systems and require
large numbers (>10^9) of photons to achieve the accuracy required
to constrain circular polarization observations.
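The photon-number requirement quoted above follows from Poisson statistics: the Monte Carlo noise on any measured flux fraction scales as 1/sqrt(N). A minimal sketch (the photon counts used below are illustrative, not values taken from the codes):

```python
import math

def mc_noise(n_photons):
    """Poisson noise on a flux fraction estimated from n_photons
    Monte Carlo packets: sigma ~ 1/sqrt(N)."""
    return 1.0 / math.sqrt(n_photons)

# With 10^9 photons the noise floor per bin is ~3e-5, small enough to
# constrain circular polarization signals at the sub-percent level.
print(mc_noise(1e9))
```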
The dust discs around main sequence stars (Vega-excess stars) are probably the material left over from planet formation. Hitherto few debris discs have been directly imaged; however, we have employed imaging polarimetry to enable routine detection of these discs (see Hales et al., 2006, MNRAS, 365, 1348). The detailed determination of the distribution of material in debris discs is an important step to understanding planet formation, planetary system configurations and the formation of structures such as asteroid belts. In addition, the dust grains in these systems retain signatures of their incorporation into larger bodies, potentially providing clues to the particle coagulation processes that occur during planet building. Our Monte Carlo dust scattering codes must run over several decades of grain sizes and include discs with warps, flares and gaps to simulate the effects of planetary clearing and interaction. In addition, it is clear that dust particles in these systems are complex aggregates, having fluffy structures including voids, and probably volatile icy or organic coatings (similar to inter-planetary dust particles in our Solar System). To model scattering from such grains requires the use of specialist particle codes such as the Discrete Dipole Approximation (DDA) and T-Matrix.
The latter stages of stellar evolution are characterised by huge mass-loss, returning stellar material to the ISM. We are using multi-wavelength imaging and polarimetric observations to study the circum-stellar dust shells and outflows to determine the mass-loss history of AGB stars, their chemical evolution and the formation of circum-stellar discs and collimated outflows (see Gledhill, 2005, MNRAS, 356, 883; Lowe & Gledhill, 2007, MNRAS, 374, 176). Currently each model simulation typically requires over 100 runs of our radiation transport code DART. So far we have only considered simple dust shell geometries and density distributions. The inclusion of optically thick circum-stellar discs, asymmetric outflows, and multiple illuminators (it is now thought that these systems are nearly all binary) will require a 10-fold increase in parameter space. Since these evolved stars are the principal dust factories in our Galaxy, we also plan to include dust grain growth and evolution, adding a temporal dimension to our models.
Calculation of dust extinction coefficients and dust evolution (UH: Gledhill; UCL: Barlow, Yates)
Realistic dust optical constants: In order to calculate the scattering and absorption properties for realistic astronomical dust particles, a key input to UV/optical/IR/mm dust radiation transport codes, proper consideration must be given to their composition, shape, orientation, mantle properties and size distribution. Hitherto nearly all calculations have used very rough approximations, such as spherical homogeneous particles, usually of silicate or amorphous carbon and with a limited range of sizes. There is now widespread evidence that dust grains form by the sticking and coagulation of smaller component grains, or by fragmentation of larger bodies, and will therefore be non-spherical, porous or fluffy, and may have a layered composition, possibly with a volatile coating or mantle. There are now sophisticated modelling techniques available to treat such particles. The T-Matrix approach offers advantages in calculating the scattering and extinction properties of non-spherical particles, whereas codes representing grains as clusters of monomers (such as the Discrete Dipole Approximation (DDA) codes) are appropriate for fluffy particles. The Separation of Variables Method (SVM) has been adapted to treat multi-layered spheroids. We propose to perform extensive calculations of scattering matrices and cross-sections for a wide variety of grain types, shapes, sizes and compositions and over a broad wavelength range. These will form a library of look-up tables for dust radiation transport calculations for dust in protostellar regions, planetary systems, evolved stars, galaxies and AGN. 10,000 sets of scattering coefficients will be produced for use by the UH and UCL groups, and the wider astrophysical community, where they can be accessed via the WWW.
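A library of this kind reduces, at the point of use, to interpolation in precomputed tables. The sketch below is purely illustrative: the grid, the placeholder Q_ext values and the `lookup_q_ext` helper are hypothetical, and the real library would store full scattering matrices per grain type.

```python
import numpy as np

# Hypothetical mini-table: extinction efficiency Q_ext on a
# (grain radius x wavelength) grid; all values are placeholders.
wavelengths = np.array([0.5, 1.0, 2.0, 10.0])   # microns
radii = np.array([0.01, 0.1, 1.0])              # microns
q_ext = np.array([[2.1, 1.5, 0.3, 0.01],
                  [2.5, 2.2, 1.0, 0.05],
                  [2.0, 2.3, 2.4, 1.2]])

def lookup_q_ext(wav, rad):
    """Linear interpolation along wavelength for each tabulated radius,
    then along radius: a simple stand-in for a library query."""
    per_radius = np.array([np.interp(wav, wavelengths, row) for row in q_ext])
    return float(np.interp(rad, radii, per_radius))

print(lookup_q_ext(1.0, 0.1))  # 2.2, the tabulated value at this grid node
```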
Optical constants for crystalline dust particles: Constants for silicates and silicon carbide will be produced using codes that employ density wave theory to calculate the optical constants of crystalline materials. This is necessary because Mie theory does not accurately reproduce intermediate to strong spectral features (e.g. the 10 μm silicate feature), and we would like to use these results to explore the mm-wave and sub-mm region for new spectral features. Such features could be used to constrain dust properties in regions observed by ALMA.
There are many situations in astrophysics, and indeed in all branches of science, where the solution to a set of equations depends on a large number of pieces of input information. An example is the computation of the populations of molecular and atomic energy levels under non-local-thermodynamic equilibrium (NLTE) conditions. In this example, a set of input data comprising typically hundreds of Einstein A-values and thousands of collisional rate coefficients is used to set up a pseudo-linear matrix equation, which is then solved by an efficient numerical method. The problem with this approach is that one obtains a numerical answer (the set of level populations in the NLTE system) but the detail of what leads to this solution is lost. For example, the investigator does not know whether the solution is largely determined by only a small subset of the input coefficients (the system has an underlying simplicity) or whether it is truly complex, and the great majority of the input data are important.
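The matrix construction described here can be made concrete with a toy example. The three-level system and rate values below are invented purely for illustration; a production NLTE code assembles the same kind of pseudo-linear system from hundreds of Einstein A-values and thousands of collision rates.

```python
import numpy as np

# rates[i, j] = total rate (s^-1) for transitions from level j to level i;
# the numbers are illustrative only.
n_levels = 3
rates = np.zeros((n_levels, n_levels))
rates[0, 1] = 2.0e-3   # e.g. A(1->0) plus collisional de-excitation
rates[0, 2] = 1.0e-4
rates[1, 2] = 5.0e-3
rates[1, 0] = 4.0e-4   # collisional excitation
rates[2, 0] = 1.0e-5
rates[2, 1] = 6.0e-4

# Statistical equilibrium: dn_i/dt = sum_j rates[i,j] n_j - n_i sum_j rates[j,i] = 0
M = rates.copy()
M[np.diag_indices(n_levels)] -= rates.sum(axis=0)

# The homogeneous system is singular, so one equation is replaced by the
# closure condition sum_i n_i = 1, giving the solvable pseudo-linear system.
M[-1, :] = 1.0
b = np.zeros(n_levels)
b[-1] = 1.0
populations = np.linalg.solve(M, b)
print(populations)  # fractional level populations, summing to 1
```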
Gray has developed a method of restoring the detail to the solution of NLTE molecule/radiation systems. This method involves maintaining a log of the operations carried out on the matrix, and using this to trace back the form of coefficients at an advanced state of elimination towards those that formed the original, unprocessed, matrix. The method has been successfully applied to identifying the detailed pumping schemes of OH 18cm masers in circum-stellar envelopes (Gray, Howe and Lewis, 2005) and star-forming regions (Gray, 2007).
However, the method can, in principle, be applied to any system that can be written as a set of linear or pseudo-linear algebraic equations. The need for HPC resources is based on the need to determine just how typical the specific examples discussed above are in terms of parameter-space volumes. The current tracing code needs to be automated as far as possible, so that it can check the basic structure of a population flow scheme with little human intervention. It should then be possible to divide the total parameter space into regions with ‘simple’ and ‘complex’ pumping schemes.
In cases where a scheme is identified as ‘simple’, it needs to be checked to see how well it corresponds to all known ‘simple’ schemes in the parameter space. Again, choices need to be made about exactly how similar two schemes have to be for a test candidate to be accepted as a version of a known scheme. If a candidate scheme is too different in terms of the transitions employed, and/or the ordering of its flow cycles, it needs to be set up as a new member of the table of known schemes. The final result should be a list of detailed pumping schemes, ordered by the volume of parameter space occupied.
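One plausible way to automate the accept-or-register step is to represent each scheme by its set of transitions and use a set-overlap measure; the Jaccard similarity and the 0.8 threshold below are illustrative assumptions, not the criterion actually adopted in the tracing code.

```python
def jaccard(a, b):
    """Fraction of transitions shared by two schemes."""
    return len(a & b) / len(a | b)

def classify(scheme, known_schemes, threshold=0.8):
    """Return the index of a matching known scheme, or register the
    candidate as a new scheme and return its new index."""
    for i, known in enumerate(known_schemes):
        if jaccard(scheme, known) >= threshold:
            return i
    known_schemes.append(scheme)
    return len(known_schemes) - 1

# Hypothetical schemes, each a set of (lower, upper) transition labels.
table = [{("F=1+", "F=2-"), ("F=2-", "F=1-"), ("F=1-", "F=1+")}]
candidate = {("F=1+", "F=2-"), ("F=2-", "F=1-"), ("F=1-", "F=2+")}
print(classify(candidate, table))  # overlap 2/4 < 0.8, so registered as scheme 1
```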
However, the parameter space used to deduce the pumping mechanisms of the OH masers comprises 50,000 points, each of which has 80 spatial slabs, so that a total of 4 million schemes need to be checked. Similar numbers apply to the astrophysical water and methanol maser molecules; since water and methanol are more complex than OH, they are likely to require more time than OH.
Various extensions are possible if time is available: additional transitions in water and OH, modification of schemes by saturation, additional environments, for example 1720 MHz masers in supernovae, and additional molecules.
Understanding the role of stellar feedback in the formation and evolution of galaxies is a current key research topic. In order to quantify how stellar feedback regulates star formation and chemical enrichment, we need to model the interactions of supernova-driven outflows with their environments. While this has been done using 1-D semi-analytical techniques, high angular resolution observations of starburst environments (e.g. M82; Fig. 1) demonstrate unequivocally that galactic-scale outflows originate from multiple sites of massive star formation in a highly inhomogeneous medium. Thus, to progress, we need to apply 3-D hydrodynamic codes coupled with accurate radiative transfer modelling that can treat clumpy media to understand the evolution of starburst regions. The overall aim is to use 3-D simulations to investigate two key problems where outflows originate from multiple distributed sources: (1) the effects of multiple sites of massive star formation in a starburst region on the survival and evolution of the surrounding giant molecular cloud and the diffuse interstellar medium and (2) the development and evolution of superwinds in starburst galaxies when the outflow is formed by interacting winds from super star clusters, as observed in M82. This project will complement the awarded observational PDRA position on ‘Massive stars, starbursts and feedback into the environments of galaxies’ (PI L.J. Smith, Ref: PP/C50155X/1).
The hydrodynamic (HD) code we propose to use was developed by Lim & Mellema (2003). The code, REEFA, uses an adaptive mesh based upon the method described in Lim & Steffen (2001). The 3-D hydrodynamics solver uses the Flux-Vector Splitting method, and a ray tracer is used to follow the radiative transfer of ionizing photons along the grid, because the transport of ionizing radiation from one part of the grid to another will alter its hydrodynamical properties. The code calculates the ionization, heating and cooling rates and computes the local chemistry. A flexible parallelisation method is employed that uses a multi-thread methodology to control the load-balancing across the machine; this also allows the code to handle large irregular grids and compute solutions in a timely fashion. The 3-D radiative transfer code we propose to use is MOCASSIN.
Initially REEFA will be run over a parameter space that includes the initial density structure (e.g. a GMC clumping law and a metallicity law) and stellar evolution code parameters such as metallicity, mass range, radiation field and energy input. These provide inputs for the evolution of the radiation field and density structure computed by REEFA for the effect of a single OB star on a clumpy GMC of typical size 10-30 pc. This will be done for several masses of O and B stars, and then extended to model the effects of several OB stars of different masses on the GMC structure and evolution, using the solutions for individual O and B wind/supernova events as the initial conditions for the proposed more general simulation of the effect of OB stars on the evolution of GMCs.
The resulting density and velocity field as a function of position and evolution will then be fed into MOCASSIN, which will output the continuum and spectra, and position-velocity data cubes as diagnostics. We can achieve a uniform resolution as high as 0.002 pc for these very detailed simulations of large scale high mass stellar evolution within our proposed GMC mass distribution.
Observations of M82 show that the wind is very structured close to its origin (within 1 kpc), but then the separate structures merge to form a more uniform large scale outflow at larger distances. For the M82 project we would start by using volumes sampling a few tens of clusters, with the winds interacting with each other and the ISM in the disk of M82. This would be extended to include more clusters and larger dimensions using the adaptive gridding that Lim devised, which samples very efficiently because it is based on a ray-tracing algorithm rather than a tree-code. Thus the rest of the ISM (the disk) would be sampled at a much coarser resolution (5-10 pc), while the wind interaction regions can be sampled down to 0.01-0.05 pc if required. As the system evolves, the fine mesh will move with the stellar winds and the wind interaction points.
We would investigate how these structures are formed by successively using volumes of side 10, 100 and 1000 pc. Runs with 10^11 cells, at 80 bytes to store the model parameters at each grid point, would typically need 8 TB of RAM and 2000 CPU cores. The individual stellar wind solutions would be allowed to interact in the proposed volumes, and the adaptive grid would be used to sample the wind interaction regions finely. In this way we can determine on which scale the local cluster winds form a superwind.
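The quoted memory budget follows directly from the cell count; a one-line check, assuming only the figures given above:

```python
def grid_memory_tb(n_cells, bytes_per_cell=80):
    """RAM needed to hold the model parameters for every grid cell, in TB."""
    return n_cells * bytes_per_cell / 1e12

print(grid_memory_tb(1e11))  # 8.0 TB for 10^11 cells at 80 bytes each
```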
Cosmological simulations have traditionally found that the density profiles of dark matter halos feature a universal radial structure. These profiles can be fit by functions with a logarithmic inner slope between -1 and -1.5 and an outer slope around -3. This project aims at understanding the universality of this profile from a dynamical point of view. Generic high-resolution N-body simulations of one or several dark matter halos will be carried out (no cosmological conditions will be applied). We will use the publicly available code Gadget2. The evolution of the shape of the density profile is explored under violent relaxation, simulating the effect of merging processes. The goal is to determine which factors drive the universality of dark matter profiles. The project will be continued by the addition of gas particles (using the SPH code also available within Gadget2) in order to follow in detail the effect of adiabatic contraction of the gas and stars on the dark matter density profile.
We expect to be able to run single or few halo simulations with the highest resolution achievable. We will start with simulations comprising a few million particles but in order to obtain a robust result regarding the universality of dark matter halos, a large number of simulations must be carried out. Starting from a generic NFW profile, we plan to carry out runs for a large number of inner/outer slopes and disturb the virialised halo in various ways, including close encounters with nearby halos.
For the highest resolutions we plan to run 100 million particles per halo, implying a mass resolution of 10,000 Msolar.
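For reference, the universal profile in question is the NFW form rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2], whose logarithmic slope runs from -1 in the centre to -3 at large radius; a short numerical check (units are arbitrary):

```python
import math

def nfw_density(r, rho_s=1.0, r_s=1.0):
    """NFW profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def log_slope(r, eps=1e-4):
    """Numerical logarithmic slope d(ln rho)/d(ln r) at radius r."""
    up, down = nfw_density(r * (1 + eps)), nfw_density(r * (1 - eps))
    return (math.log(up) - math.log(down)) / (math.log(1 + eps) - math.log(1 - eps))

print(round(log_slope(1e-3), 2), round(log_slope(1e3), 2))  # -1.0 -3.0
```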
Galactic chemical enrichment plays an important role in our understanding of galaxy formation and evolution. Tracking the metallicity in the ISM and in the stellar populations allows us to determine the correlation between the star formation mechanism and global properties of the galaxy such as morphology, velocity dispersion or mass. Recent work (Ferreras & Silk 2003, MNRAS, 344, 455; Ferreras, Silk, Böhm & Ziegler, 2004, MNRAS, 355, 64) has revealed that the star formation efficiency correlates with mass, with a well-defined threshold at a similar value to the threshold found in studies of bimodality (Kauffmann et al. 2003, MNRAS, 341, 54). More detailed studies, exploring a larger volume of parameter space, will allow us to understand the correlation between the "microphysics" (which drives star formation within giant molecular clouds) and the global properties of galaxies. Simulations of this type will benefit greatly from parallelisation.
ParaCelsus is a code of galactic chemical enrichment combined with publicly available population synthesis models which will allow us to compare several photo-spectroscopic properties of unresolved stellar populations in order to constrain the parameter space that describes a given star formation history. This code is under development, starting from a well-established serial code (Ferreras & Silk 2000, MNRAS 316, 786). We plan to apply this code first to early-type galaxies from the spectroscopic catalogue of the Sloan Digital Sky Survey (SDSS). We will extend the studies to other morphologies from SDSS and eventually to spectra from high-redshift surveys such as DEEP and in the future to resolved stellar populations such as the data to be retrieved by GAIA.
In recent years it has become clear that the interactions between radio-loud active galaxies and their environments provide a key mechanism for efficiently injecting large amounts of energy into the intergalactic or intra-cluster media on scales between kpc and hundreds of kpc. Spectacular direct evidence of the input of AGN energy into the hot phase of the ambient medium is provided by X-ray observations of radio sources, in environments ranging from the richest nearby clusters (e.g. Fabian et al., MNRAS, 366, 417 (2006)) to the closest giant elliptical (e.g. Kraft et al., ApJ, 592, 129 (2003)). This process of energy transport, loosely referred to as ‘feedback’, is thought to be required to solve problems in galaxy formation and evolution (e.g. Croton et al., MNRAS, 365, 111 (2006)), to solve the well-known ‘cooling-flow problem’ in the centres of massive clusters (e.g. Omma & Binney, MNRAS, 350, L13 (2004)), and possibly to be implicated in the entropy excesses observed in the hot gas of galaxy groups and clusters on larger scales (e.g. Sanderson et al., MNRAS, 340, 989 (2003); Roychowdhury et al., ApJ, 615, 681 (2004)).
In spite of the widespread adoption of AGN feedback models in a variety of areas of extragalactic astrophysics, a number of crucial aspects of the AGN/environment interaction remain poorly understood: it is not actually clear how the energy put into the IGM by the AGN is dissipated, how systems manage to arrange for the energy to be dissipated where it is needed, or, crucially, how these parameters change as the radio galaxy/environment system evolves with time. Because of the long timescales involved, observations can only give us a ‘snapshot’ view of what a particular system is doing at a particular instant. Numerical modelling is needed to understand the long-term, and therefore time-averaged, behaviour of these systems.
A good deal of insight has already been gained from numerical work on the powerful radio sources seen at the centres of cooling flows (e.g. Heinz et al., MNRAS, 373, L65 (2006); Gardini, A&A, 464, 143 (2007)). However, the typical radio galaxy inhabits a group of galaxies rather than a rich cluster (e.g. Best et al., ApJ, 630, 740 (2005)) and this part of parameter space has so far been unexplored by simulations, although observations and simple calculations of the energetics involved (e.g. Croston et al., MNRAS, 346, 1041 (2003), MNRAS 357, 279 (2005)) show that radio sources appear to be capable of having a dramatic effect on their host groups. Our numerical work using the FLASH code (Fryxell et al., ApJS, 131, 273 (2000)) focuses on the long-term impact of radio-loud AGN on groups of galaxies, tracing the distribution of energy during and beyond the active lifetime of the AGN. We hope to determine whether the observed trends (such as the systematically higher temperatures for a given X-ray luminosity of groups hosting luminous radio galaxies: Croston et al., MNRAS, 357, 279 (2005)) can be understood in terms of the current picture of radio-source heating and radiative cooling.
We will carry out a series of FLASH simulations in 2- and 3-D in order to trace the evolution of group gas over a range of timescales and under a range of energy input conditions, with the aim of investigating the heating behaviour required to reproduce observational results. FLASH is a freely available, parallelised, adaptive mesh refinement (AMR), hydrodynamical code with modular support for a range of physical processes, which is ideally suited to the requirements of our project due to its ability to capture processes on a range of physical scales, and its modular structure enabling the effects of including different processes such as radiative cooling and viscosity to be straightforwardly investigated. We will use a computational domain of order 600^3 kpc^3, and require a minimum resolution of ~1-10 kpc. Our simulations have memory requirements of ~100-2000 MBytes per processor. Based on our experience with smaller systems and results from similar FLASH projects (e.g. Brüggen et al., ApJ, 630, 740 (2005); Ruszkowski et al., ApJ, 611, 158 (2005); Pope et al., astro-ph/0702683), we estimate that ~25,000 CPU-hours will be required for a single 3-D simulation following group gas evolution for up to 1 Gyr. We therefore require a total of ~500,000 CPU-hours in order to investigate a range of group energy injection histories for comparison with high-quality X-ray data for galaxy group samples from XMM-Newton and Chandra.
Simulated Galaxy, Galaxy Cluster and Weak Lensing Catalogues (UCL: Abdalla, Bridle, Cypriano, Lahav, Huetsi, Noss, Voigt, Weller, Yates)
The design and
exploitation of future imaging and galaxy redshift surveys like DES and DUNE or
surveys which select clusters via their Sunyaev-Zel’dovich decrement, like
Planck, requires the production of realistic mock catalogues. This in turn
requires a detailed study and implementation of non-linear and astrophysical
processes in the numerical simulations. For example, the use of galaxy cluster
counts as a cosmological tool requires an understanding of the statistics of
the selection function. In the case of Sunyaev-Zel’dovich cluster counts, this
is the relation between the measured flux decrement in the radio waveband and
the mass of the cluster. In order to understand the statistical distribution of
these relations and quantities, the simulation of a large sample of clusters (a
few hundred) is required in each redshift bin of width 0.1, out to a maximum
redshift between 1.5 and 2. We recently began a semi-analytical implementation of
such an analysis (Bode et al., astro-ph/0612663). However a more realistic
implementation requires the hydrosimulation of clusters of galaxies, with
varying astrophysical parameters over the entire sample. We will vary
parameters such as the heating due to active galactic nuclei, cooling, gas
fraction in stars etc. Depending on the resolution, number of particles, the
size of the galaxy cluster, the re-simulation of a single cluster can take up
to 2-3 days on 32 cores with the GADGET2 code. GADGET2 is a publicly available SPH
hydro-code. We will further explore AMR codes such as ENZO for this task. In
order to obtain a representative sample, we require simulations of about 5,000
clusters in the observed redshift range.
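The statistics of the flux decrement-mass relation mentioned above can be mocked up cheaply while the hydrosimulations are developed. In the sketch below the power-law slope, normalisation, scatter and flux limit are all placeholder assumptions, not calibrated survey values:

```python
import numpy as np

rng = np.random.default_rng(42)

def mock_sz_fluxes(masses, alpha=5.0 / 3.0, norm=1e-4, scatter_dex=0.1):
    """SZ flux decrement Y ~ norm * M^alpha with lognormal scatter
    (masses in arbitrary units; all parameters are illustrative)."""
    log_y = np.log10(norm) + alpha * np.log10(masses)
    return 10 ** (log_y + scatter_dex * rng.standard_normal(masses.size))

masses = rng.uniform(1.0, 10.0, size=300)   # a few hundred clusters per bin
fluxes = mock_sz_fluxes(masses)
detected = fluxes > 1e-4                    # toy survey flux limit
print(detected.mean())                      # detected fraction in this bin
```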
The bending of light by gravity (cosmic lensing) is one of the most promising probes of the mysterious dark energy which seems to dominate the Universe. The apparent existence of dark energy is unexplained by physics and may ultimately lead to a modification of Einstein’s theory of general relativity. The majority of work in cosmology is now focused on dark energy. Cosmic lensing is a major driver of upcoming surveys, including the Dark Energy Survey (funded in part by PPARC) and the Dark Universe Explorer, in which UCL is heavily involved. However, it is a relatively new field of research and there are many questions still to address. A major problem is that large N-body simulations are needed to predict the signal we will observe. To constrain different dark energy models it is necessary to perform simulations for many different sets of cosmological parameters. Further, to place statistical uncertainties on results it is necessary to perform many simulations for each set of cosmological parameters. Finally it is important to consider the effect of the baryon physics on the observed statistics, or the results may be biased.
In the absence
of spectroscopic data, redshifts of galaxies may be estimated using multi-band
photometry, which may be thought of as very low-resolution spectroscopy. UCL astronomers have developed the ANNz
method, which utilises Artificial Neural Networks (Collister & Lahav 2004),
and was made freely available. Abdalla, Banerji and Lahav have applied ANNz and
other methods for feasibility studies for the international DES, VISTA and DUNE
surveys to address e.g. how the addition of near-infrared photometry can
improve the photo-z accuracy. Their results have had major impact on the design
of these surveys. Our recent study (Abdalla et al. 2007) shows that photo-z
results also depend on the assumed mock catalogues. We plan to use the Miracle
facility to generate samples of billions of galaxies with spectral properties,
which will mimic observed galaxies (e.g. from the GOODS sample), but with added
assumptions on star formation history. This will allow us to improve photo-z
accuracy and to assess its impact on deriving Dark Energy parameters from Weak Lensing surveys.
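Schematically (this is not the ANNz code itself), a photometric-redshift estimator of this kind is a small feed-forward network mapping broad-band magnitudes to a redshift. In the sketch below the architecture, band count and weights are illustrative; in practice the weights are trained on a spectroscopic sample:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands, n_hidden = 5, 10   # e.g. five optical bands; sizes are arbitrary
W1 = 0.1 * rng.standard_normal((n_hidden, n_bands))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal(n_hidden)

def photo_z(mags):
    """One hidden tanh layer; a softplus output keeps the estimate z >= 0."""
    h = np.tanh(W1 @ mags + b1)
    return float(np.logaddexp(0.0, W2 @ h))

z = photo_z(np.array([22.3, 21.8, 21.2, 20.9, 20.7]))
print(z >= 0.0)  # True: softplus output is always non-negative
```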
Page last modified on 24 feb 11 18:37 by Dugan K Witherick