
US Planck Data Analysis Review

Christopher Cantalupo • US Planck Data Analysis Review • 9–10 May 2006

CTP Working Group

Presented by Christopher Cantalupo

5/9/06

Based on slides by Charles Lawrence and Eric Hivon


The Role of CTP

Develop and test methods for critical data processing steps:

– Map making

– Power spectrum estimation

– Parameter estimation

Contributions to date:

– Developed the infrastructure needed to generate and analyze large simulations: one year of data from 12 detectors, rather than weeks of data from a single detector. Terabytes of data that now include many complexities of real-world data.

– Investigated map making on large simulations.

– NERSC’s computational facilities have enabled this work.


A Series of Simulations

The CTP WG has held biweekly telecons since 2002.

Week-long working meetings approximately yearly.

Special priority use of the NERSC facilities for some meetings.

New simulations produced for each meeting:

– Cambridge, June 2002

– Cambridge, January 2003 (Poutanen et al., A&A, 2006)

– Garching, September 2003

– Helsinki, June 2004 (Ashdown et al., submitted to A&A, or nearly so)

– Paris, June 2005 (paper in preparation)

– Trieste, May 2006

The Trieste simulation set will test many systematic and foreground issues.


Compared Software

Map-making software based on the GLS algorithm:

– ROMA

– MADmap

– MapCUMBA

Map-making software based on destriping algorithms:

– Polar

– MADAM

– Springtide
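Schematically, with d the time-ordered data, P the pointing matrix, N the time-domain noise covariance, F the operator spreading the baseline offsets a over the timeline, and Z = I - P(P^T P)^{-1} P^T, the two families compute the following (standard notation; the individual codes differ in preconditioning, baseline length, noise weighting, and priors on the offsets):

\hat{m}_{\mathrm{GLS}} = \left( P^{T} N^{-1} P \right)^{-1} P^{T} N^{-1} d

\hat{a} = \left( F^{T} Z F \right)^{-1} F^{T} Z d , \qquad \hat{m}_{\mathrm{destriped}} = \left( P^{T} P \right)^{-1} P^{T} \left( d - F \hat{a} \right)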


Summary of Simulations


Cambridge Summary

Single LFI 100 GHz detector.

One year of mission time: TODs generated with Level-S.

No polarization, only intensity data.

Symmetric and elliptical beams (FWHM 10 arcminutes).

Slow precession cycloidal scanning: 10 degree amplitude, 6 month period.

CMB, foregrounds and noise simulated.

Noise was 1/f + white with realistic LFI instrument parameters (see the sketch after this list).

Map making analysis software tested:

– ROMA, MapCUMBA, MADmap, Destriping.
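As a rough illustration of the noise model, the sketch below draws a 1/f + white noise stream by shaping white noise with the target power spectral density. The NET, knee frequency, slope, and sampling rate are placeholder values rather than the actual LFI parameters, and the normalization convention is only approximate.

import numpy as np

def noise_psd(freq, net=1.0e-4, f_knee=0.05, alpha=1.0):
    # White + 1/f power spectral density; all parameter values are
    # illustrative placeholders, not the LFI values used in the simulations.
    return net**2 * (1.0 + (f_knee / np.maximum(freq, 1e-12))**alpha)

def simulate_noise(n_samples, f_sample=32.5):
    # Draw a noise stream by shaping white noise in Fourier space.
    freq = np.fft.rfftfreq(n_samples, d=1.0 / f_sample)
    psd = noise_psd(freq)
    psd[0] = 0.0  # drop the undefined DC term
    white = np.fft.rfft(np.random.standard_normal(n_samples))
    return np.fft.irfft(white * np.sqrt(psd * f_sample / 2.0), n=n_samples)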


Cambridge Key Observations

Maps from the different GLS codes are nearly identical.

Destriper maps have slightly higher noise (~ 0.2 %) than GLS maps.

Map-making error due to signal (pixel noise) was higher in GLS maps.

Foregrounds caused high systematic errors in some pixels of the GLS maps.

Resource consumption:

– GLS software used 192 to 256 processors for approximately 10 wall clock minutes.

– Destriping software used a single processor for approximately 7 minutes.


Helsinki Summary

HFI 217 GHz channel.

Intensity and polarization.

Input: CMB only (WMAP-constrained template), no B mode.

Circular Gaussian beams (FWHM ~5 arcminutes).

– Different FWHM for different detectors.

One year of data generated using the Level-S pipeline; 4 detectors, 700 gigabytes.

Two scanning strategies simulated (see the sketch after this list):

– Cycloidal: seven degree amplitude, six month period.

– Nominal: spin axis stays in ecliptic plane.

Map making software tested:

– POLAR, MADAM, Springtide, MADmap, MapCUMBA, ROMA.
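For reference, a small-angle sketch of the two scanning strategies: in the cycloidal case the spin axis precesses around the anti-Sun direction with the stated amplitude and period, while in the nominal case it stays in the ecliptic plane (amplitude zero). The parametrization below ignores the precession phase convention and uses a simplified anti-Sun ephemeris.

import numpy as np

def spin_axis_ecliptic(t_days, amplitude_deg=7.0, period_days=182.625):
    # Small-angle sketch of cycloidal scanning: the spin axis precesses about
    # the anti-Sun direction, which itself drifts ~1 degree/day in longitude.
    lon_antisun = np.degrees(2.0 * np.pi * t_days / 365.25)
    phase = 2.0 * np.pi * t_days / period_days
    lon = (lon_antisun + amplitude_deg * np.cos(phase)) % 360.0
    lat = amplitude_deg * np.sin(phase)
    return lon, lat

# Nominal strategy: amplitude_deg=0, so the spin axis stays in the ecliptic plane.
lon, lat = spin_axis_ecliptic(np.arange(0.0, 365.25, 1.0))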


Comparison of Generated Maps

Output maps

– 1 unpolarized detector (217-4), Nside = 4096 resolution.

– 4 polarized detectors (217-5a, 5b, 7a and 7b), Nside = 2048 resolution.

Study effects of modifying inputs and outputs (217-4 only)

– Store pointing in double or single precision.

– Add bolometer time constant and sampling.

– Make maps at a different resolution than the input maps.

Noisy maps

– Discovered that the noise parameters in Level-S were incorrect.

Compute statistics of residual maps:

– Mean, maximum, minimum, and RMS values.

– Power spectra.
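A minimal sketch of these residual-map statistics using healpy; the file names are placeholders for an output map and the corresponding binned noiseless map produced by one of the codes.

import healpy as hp
import numpy as np

# Placeholder file names for one code's output and the binned noiseless map.
out = hp.read_map("output_map.fits")
ref = hp.read_map("binned_noiseless_map.fits")

seen = (out != hp.UNSEEN) & (ref != hp.UNSEEN)  # ignore unobserved pixels
residual = np.where(seen, out - ref, 0.0)

stats = dict(mean=residual[seen].mean(), rms=residual[seen].std(),
             minimum=residual[seen].min(), maximum=residual[seen].max())

# Angular power spectrum of the residual (unobserved pixels set to zero).
cl = hp.anafast(residual, lmax=2 * hp.get_nside(residual))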


Helsinki Key Observations

Large simulations are difficult to manage and process.

– POLAR/MADAM/Springtide: 512 processors, ~32 minutes wall clock time.

– MADmap: 2048 processors, ~90 minutes wall clock time.

Comparison of methods

– GLS: slightly lower map noise, but larger error due to pixel noise.

– Destriping with very short baselines produces nearly identical results to GLS map making.


Helsinki Key Observations Continued

Handling of degenerate pixels

– Some pixels are not observed with a sufficient variety of polarization orientations.

– Use a block-diagonal preconditioner to reject pixels with poorly constrained Stokes parameters (see the sketch at the end of this slide).

Comparison of scanning strategies (based on four detectors).

– The RMS statistic favors the nominal strategy, but by a small margin.

– Cycloidal has ~ 1% more “degenerate pixels” (too few samples).

– Cycloidal covers the whole sky, which outweighs its small disadvantages.

– Analysis with all twelve 217 GHz detectors is ongoing.
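The degenerate-pixel test can be sketched as follows: for each pixel, accumulate the 3x3 block of P^T P that couples (I, Q, U) to the observed samples and reject pixels whose block is too ill-conditioned to invert. The inputs and the threshold below are hypothetical; the actual codes apply this inside the block-diagonal preconditioner and may include noise weighting.

import numpy as np

def well_conditioned_pixels(psi_per_pixel, rcond_min=1e-2):
    # psi_per_pixel: for each pixel, an array of polarizer angles (radians)
    # of the samples hitting it.  rcond_min is an illustrative threshold.
    good = np.zeros(len(psi_per_pixel), dtype=bool)
    for p, psi in enumerate(psi_per_pixel):
        if psi.size < 3:
            continue  # too few samples to constrain I, Q, and U
        a = np.column_stack([np.ones_like(psi), np.cos(2 * psi), np.sin(2 * psi)])
        block = a.T @ a  # 3x3 block of P^T P for this pixel
        w = np.linalg.eigvalsh(block)  # eigenvalues in ascending order
        good[p] = w[0] / w[-1] > rcond_min
    return good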


Paris Simulations

Four LFI 30 GHz detectors.

One year of mission time: TODs generated with Level-S.

Intensity and polarization.

Symmetric beams (FWHM ~33 arcminutes and similar for all detectors).

Slow precession cycloidal scanning: 7.5 degree amplitude, six month period.

CMB, foregrounds, dipole, and noise were simulated.

Noise was 1/f + white with realistic LFI instrument parameters.

Map making software tested:

– ROMA, MapCUMBA, MADmap, POLAR, MADAM, Springtide.

Residual maps and their statistics were compared.

Residual map = output map - binned noiseless map (sketched below).

Power spectra were compared.
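A sketch of the binned noiseless map entering this definition: average the noiseless time-ordered samples falling in each pixel, then subtract the result from the output map. The pixel indices, TOD array, and Nside are hypothetical inputs.

import numpy as np
import healpy as hp

def bin_map(pixels, tod, nside):
    # Naive binned map: mean of the (noiseless) samples in each pixel.
    npix = hp.nside2npix(nside)
    hits = np.bincount(pixels, minlength=npix)
    summed = np.bincount(pixels, weights=tod, minlength=npix)
    binned = np.full(npix, hp.UNSEEN)
    seen = hits > 0
    binned[seen] = summed[seen] / hits[seen]
    return binned

# residual = output_map - bin_map(pixels, noiseless_tod, nside)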


Paris Key Observations

The different GLS codes produce nearly identical maps.

POLAR (destriper) maps had marginally higher noise (by ~ 0.5 %) than GLS maps.

MADAM maps with short baselines (1.2 seconds) had nearly as low noise as GLS maps.

The POLAR one-year map used 256 processors for eight minutes of wall clock time.

The MapCUMBA one-year map used 384 processors for 30 minutes of wall clock time.

A WMAP one-year Ka-band (32 GHz) temperature map has roughly three times higher noise than an LFI 30 GHz one-year temperature map.



Trieste Simulations

Continue with LFI 30 GHz data (four detectors).

Asymmetric (elliptical) beams, different for each detector.

Detector time constant and sampling.

20 K cooler fluctuation noise.

Point sources (resolved and unresolved), SZ, synchrotron, dust, free-free, dipole.


Future Work

Comparison of power spectrum estimation software.

Various important systematic effects, in cooperation with SEWG.

– More asymmetric beam issues.

Help with end-to-end testing.

TBD.