Instrumentation and Methods for Astrophysics
Detector and telescope design, astronomical data analysis techniques and methods.
The Nancy Grace Roman Space Telescope will carry out a wide-field imaging and slitless spectroscopic survey of Type Ia supernovae to improve our understanding of dark energy. Crucial to this endeavor is obtaining supernova spectra uncontaminated by light from their host galaxies. Obtaining such spectra, however, is complicated by a problem inherent to wide-field slitless spectroscopic surveys: the blending of spectra of nearby objects. The spectrum of a supernova will blend with that of its host galaxy, even with light from regions of the host well separated from the supernova on the sky. If not properly removed, this contamination will introduce systematic bias when the supernova spectra are later used to determine intrinsic supernova parameters and to infer the parameters of dark energy. To address this problem, we developed an algorithm that uses the spectroscopic observations of the host galaxy at all available observatory roll angles to reconstruct a three-dimensional (3d; 2d spatial, 1d spectral) representation of the underlying host galaxy that accurately matches the 2d slitless spectrum of the host galaxy when projected to an arbitrary roll angle. We call this ``scene reconstruction''. The projection of the reconstructed scene can be subtracted from an observation of a supernova to remove the contamination from the underlying host. Using simulated Roman data, we show that our method has extremely small systematic errors and significantly less random noise than subtracting a single, perfectly aligned spectrum of the host obtained before or after the supernova was visible.
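The idea can be illustrated with a minimal numpy sketch. Everything below is a toy model, not the Roman grism: each roll angle is represented as a known linear operator that disperses every spatial pixel's spectrum along a roll-dependent detector axis; stacking all rolls and solving by least squares recovers the scene cube, whose projection can then be subtracted from a contaminated exposure.

```python
import numpy as np

def dispersion_matrix(ny, nx, nw, axis):
    """Linear operator taking a flattened (ny, nx, nw) scene cube to a
    flattened 2D slitless image, dispersing each spatial pixel's spectrum
    along `axis` (1 = columns, 0 = rows).  Toy stand-in for a real,
    roll-dependent grism model."""
    out = (ny, nx + nw - 1) if axis == 1 else (ny + nw - 1, nx)
    A = np.zeros((out[0] * out[1], ny * nx * nw))
    for i in range(ny):
        for j in range(nx):
            for k in range(nw):
                r, c = (i, j + k) if axis == 1 else (i + k, j)
                A[r * out[1] + c, (i * nx + j) * nw + k] = 1.0
    return A, out

rng = np.random.default_rng(0)
ny = nx = 3
nw = 4
truth = rng.uniform(0.5, 2.0, (ny, nx, nw))      # latent host-galaxy cube

# Two roll angles disperse the same scene along different detector axes.
A0, s0 = dispersion_matrix(ny, nx, nw, axis=1)
A1, _ = dispersion_matrix(ny, nx, nw, axis=0)
A = np.vstack([A0, A1])
b = A @ truth.ravel()                            # stacked noiseless data

# "Scene reconstruction": least-squares inversion of all rolls at once.
cube = np.linalg.lstsq(A, b, rcond=None)[0]

# Subtracting the reconstructed host's projection from a contaminated
# exposure should leave only the supernova trace.
sn = np.zeros(s0)
sn[1, 1:1 + nw] = 0.3                            # toy supernova spectrum
host_subtracted = (A0 @ truth.ravel()).reshape(s0) + sn - (A0 @ cube).reshape(s0)
```

A single roll alone would leave the cube underdetermined; the stacked multi-roll system is what makes the inversion well posed, mirroring the paper's use of all available roll angles.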
The search for the sources of ultra-high-energy cosmic rays (UHECRs) using high-energy neutrinos represents a frontier in high-energy astrophysics. However, a critical bottleneck remains: the ability to rapidly survey the sizable sky areas defined by the localization uncertainties of neutrino detectors and to provide rapid spectroscopic classification of the multitude of optical transients found within them. By deploying large-field-of-view, high-multiplex Multi-Object Spectroscopy (MOS) on a large-aperture telescope, one can instantaneously cover neutrino error circles, providing crucial spectroscopic classifications of potential counterparts discovered, for example, by the Vera C. Rubin Observatory (LSST) with unprecedented efficiency. Furthermore, simultaneous operation of a giant panoramic central Integral Field Spectrograph (IFS) would allow detailed kinematic and environmental characterization of primary candidates. This facility would unlock deep synergies between next-generation neutrino telescopes (IceCube-Gen2, KM3NeT) and gamma-ray observatories (CTAO), transforming unique multi-messenger alerts into a comprehensive physical understanding.
Self-interacting dark matter (SIDM) is a well-motivated extension of cold dark matter that can modify halo structure on galactic and group scales while remaining consistent with large-scale structure. However, practical SIDM work often requires bridging several layers, including microphysical scattering models, velocity-dependent effective cross sections, phenomenological astrophysical constraints, and (separately) data-driven halo fits, such as rotation curves. In this paper, we describe \texttt{sidmkit}, a transparent and reproducible Python package designed to support SIDM ``micro$\rightarrow$macro'' calculations and to provide a robust batch pipeline for fitting rotation curves in the SPARC data. On the SIDM side, \texttt{sidmkit} implements velocity-dependent momentum-transfer cross sections for a Yukawa interaction using standard analytic approximations (Born, classical, and Hulthén-based) with a numerical partial-wave option for spot checks. It also provides consistent velocity-moment averaging for Maxwellian relative speeds, scattering-rate utilities, and curated literature \emph{summary} constraints for regression tests and exploratory scans. On the rotation-curve side, we implement bounded non-linear least squares fits of NFW and Burkert halo models to SPARC baryonic decompositions, with optional mass-to-light priors and information-criterion summaries (AIC/BIC). For the demonstration dataset, we process 191 \texttt{rotmod} galaxies (LTG+ETG bundles) and fit both NFW and Burkert models (382 total fits). We find that Burkert is preferred by $\Delta\mathrm{BIC} > 0$ for $65.4\%$ of galaxies, with ``strong'' preference ($\Delta\mathrm{BIC}>6$) in $32.5\%$ of galaxies.
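The ``micro$\rightarrow$macro'' step can be sketched compactly. The snippet below uses the standard Born-regime analytic expression for the Yukawa momentum-transfer cross section and a simple Maxwellian velocity-moment average; parameter values and conventions are illustrative assumptions, not \texttt{sidmkit} defaults.

```python
import numpy as np

def sigma_T_born(v, alpha_X=1e-2, m_X=10.0, m_phi=1e-2):
    """Momentum-transfer cross section for Yukawa dark-matter scattering in
    the Born regime (standard analytic approximation).  m_X and m_phi in
    GeV, v in units of c; returns sigma_T in GeV^-2.  Parameter values are
    illustrative only."""
    R2 = (m_X * v / m_phi) ** 2
    return 8 * np.pi * alpha_X**2 / (m_X**2 * v**4) * (np.log1p(R2) - R2 / (1 + R2))

def maxwell_average(sigma, v0, n=4000):
    """<sigma_T v> over a Maxwellian distribution of relative speeds with
    most-probable speed v0 -- a minimal sketch of velocity-moment
    averaging; the package's exact conventions may differ."""
    v = np.linspace(1e-6, 10 * v0, n)
    f = v**2 * np.exp(-((v / v0) ** 2))      # unnormalized Maxwellian
    trap = lambda y: float(np.sum((y[1:] + y[:-1]) * np.diff(v)) / 2)
    return trap(f * sigma(v) * v) / trap(f)

s_dwarf = sigma_T_born(1e-3)                 # dwarf-scale relative speed
s_cluster = sigma_T_born(1e-2)               # cluster-scale relative speed
avg_rate = maxwell_average(sigma_T_born, v0=1e-3)
```

The strong decrease of `sigma_T_born` with velocity is what lets a single Yukawa model satisfy cluster constraints while remaining large on dwarf scales.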
Extensive astronomical surveys, like those conducted with the {\em Chandra} X-ray Observatory, detect hundreds of thousands of unidentified cosmic sources. Machine learning (ML) methods offer an efficient, probabilistic approach to classify them, which can be useful for making discoveries and conducting deeper studies. In earlier work, we applied the LightGBM ML model to classify 277,069 {\em Chandra} point sources into eight categories: active galactic nuclei (AGN), X-ray emitting stars, young stellar objects (YSO), high-mass X-ray binaries, low-mass X-ray binaries, ultraluminous X-ray sources, cataclysmic variables, and pulsars. In this work, we present the classification table of 54,770 robustly classified sources (over $3\sigma$ confidence), including 14,066 sources at $>4\sigma$ significance. To ensure classification reliability and gain a deeper insight, we investigate the multiwavelength feature relationships learned by the LightGBM model, focusing on AGNs, stars, and YSOs. We employ Explainable Artificial Intelligence (XAI) techniques, specifically SHapley Additive exPlanations (SHAP), to quantify the contribution of individual features and their interactions to the predicted classification probabilities. Among other things, we find infrared-optical and X-ray decision boundaries for separating AGNs and stars, and infrared-X-ray boundaries for YSOs. These results are crucial for estimating object classes even with limited multiwavelength data. This study represents one of the earliest applications of XAI to large-scale astronomical datasets, demonstrating ML models' potential for uncovering physically meaningful patterns in data in addition to classification. Finally, our publicly available, extensive, and interactive catalogue will be helpful for exploring the contributions of features and their combinations in greater detail in the future.
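The attribution idea behind SHAP can be shown exactly at toy scale. The snippet below computes exact Shapley values for a two-feature model using mean-over-background imputation for absent features (the interventional value function SHAP approximates efficiently for trees); the "features" named in the comments are illustrative, and this is not the paper's LightGBM pipeline.

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, background):
    """Exact Shapley attributions for prediction f(x).  v(S) replaces the
    features in S by their instance values and averages f over the
    background sample for the rest.  Exponential in the number of
    features, so toy-sized only."""
    n = len(x)

    def v(S):
        Xb = np.array(background, dtype=float)
        for i in S:
            Xb[:, i] = x[i]
        return float(np.mean([f(row) for row in Xb]))

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Toy "classifier score": additive in two features, e.g. an infrared colour
# and an X-ray hardness (names illustrative).
f = lambda row: 2.0 * row[0] + 1.0 * row[1]
background = [[0.0, 0.0], [1.0, 1.0]]
phi = shapley_values(f, np.array([1.0, 1.0]), background)
```

The efficiency property holds by construction: the attributions sum to the prediction minus the background-mean prediction, which is exactly the quantity SHAP decomposes per source in the catalogue.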
Multiple-frequency periodograms -- based on time series models consisting of two or more independent sinusoids -- have long been discussed. What is new here is the presentation of a practical, simple-to-use computational framework implementing this concept. Our algorithms have super resolution that evades the Rayleigh criterion, as well as provision for statistical weighting and tapering. They can be used for essentially any time series (e.g. time-tagged events or point measurements) with arbitrary sampling -- even or uneven. Examples of super resolution of synthetic data, sunspot numbers, and the rich pulsations of white dwarf J0135+5722 demonstrate practical applications. Appendices derive generalized periodograms using an arbitrary number of arbitrary basis functions (following Bretthorst, 1988) and define several examples of non-sinusoidal bases for these ``omnigrams.'' Application beyond the frequency domain is demonstrated with an autoregressive model exhibiting super resolution in the time domain. A GitHub repository containing the omnigram code, and the symbolic algebra scripts used to generate it, will soon be available.
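The core construction is a joint least-squares fit at a pair of trial frequencies. The sketch below is an unweighted, untapered toy (not the paper's framework): on unevenly sampled, noiseless data it recovers two tones separated by less than the Rayleigh criterion $1/T$, because the two-sinusoid model fits exactly only at the true pair.

```python
import numpy as np

def two_freq_power(t, y, f1, f2):
    """Joint least-squares 'power' of a two-sinusoid model: regress y on a
    constant plus cos/sin at both trial frequencies and return the
    reduction in the sum of squares."""
    X = np.column_stack([
        np.ones_like(t),
        np.cos(2 * np.pi * f1 * t), np.sin(2 * np.pi * f1 * t),
        np.cos(2 * np.pi * f2 * t), np.sin(2 * np.pi * f2 * t),
    ])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum(y**2) - np.sum((y - X @ beta) ** 2))

# Two tones separated by 0.06, below the Rayleigh criterion 1/T = 0.1,
# on uneven sampling over T = 10.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.cos(2 * np.pi * 1.00 * t) + 0.7 * np.sin(2 * np.pi * 1.06 * t)

freqs = np.arange(0.90, 1.1001, 0.02)
pairs = [(a, b) for i, a in enumerate(freqs) for b in freqs[i + 1:]]
best = max(pairs, key=lambda p: two_freq_power(t, y, *p))
```

With noise, the sharpness of the joint maximum (and hence the achievable resolution) degrades gracefully with signal-to-noise, which is where the weighting and tapering provisions of the full framework come in.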
Dynamic Spectrum Sharing (DSS) is increasingly promoted as a key element of modern spectrum policy, driven by the rising demand from commercial wireless systems and advances in spectrum access technologies. Passive radio sciences, including radio astronomy, Earth remote sensing, and meteorology, operate under fundamentally different constraints. They rely on spectrum with exceptionally low interference and are highly vulnerable to even brief radio frequency interference. We examine whether DSS can benefit passive services or whether it introduces new failure modes and enforcement challenges. We propose just-in-time quiet zones (JITQZ) as a mechanism for protecting high value observations and assess hybrid frameworks that preserve static protection for core passive bands while allowing constrained dynamic access in adjacent frequencies. We analyze the roles of propagation uncertainty, electromagnetic compatibility constraints, and limited spectrum awareness. Using a game theoretic framework, we show why non-cooperative sharing fails, identify conditions for sustained cooperation, and examine incentive mechanisms including pseudonymetry-enabled attribution that promote compliance. We conclude that DSS can support passive radio sciences only as a high-reliability, safety-critical system. Static allocations remain essential, and dynamic access is viable only with conservative safeguards and enforceable accountability.
We identify a polarization rotation systematic in the far field beams of refractive cosmic microwave background (CMB) telescopes caused by differential transmission in anti-reflection (AR) coatings of optical elements. This systematic was identified following the development of a hybrid physical optics method that incorporates full-wave electromagnetic simulations of AR coatings to model the full polarization response of refractive systems. Applying this method to a two-lens CMB telescope with non-ideal AR coating, we show that polarization-dependent transmission can produce a rotation of the far-field polarization angle that varies across the focal plane with a typical amplitude of 0.05-0.5 degrees. If ignored in analysis, this effect can produce temperature to polarization leakage and Stokes Q/U mixing.
Optical polarimetry provides information on the geometry of the emitting region, the magnetic field configuration and the properties of dust in astrophysical sources. Current state-of-the-art instruments typically have a small field of view (FoV), which poses a challenge for conducting wide surveys. We propose the construction of the Large Array Survey Telescope Polarization Node (LAST-P), a wide-field array of optical polarimeters. LAST-P is designed for high-cadence ($\lesssim 1$ day) polarization monitoring of numerous astrophysical transients, such as the early phases of gamma-ray bursts, supernovae, and novae. Furthermore, LAST-P will facilitate the creation of extensive polarization catalogs for X-ray binaries and white dwarfs, alongside a large FoV study of the interstellar medium. In survey mode, LAST-P will cover a FoV of 88.8 deg$^2$. With 15 x 1-minute exposures, the instrument will be capable of measuring polarization of sources as faint as Gaia Bp-magnitude $\sim$20.9. The precision on the linear polarization degree (PD) will reach 0.7\%, 1.5\%, and 3.5\% for sources with magnitudes 17, 18, and 19, respectively, for a seeing of 2.7 arcsec and an air mass of about 1 at dark observing sites. We propose three distinct non-simultaneous survey strategies, among them an active galactic nuclei (AGN) strategy for long-term monitoring of $\sim$200 AGN with $<$1-day cadence. In this paper, we present the predicted sensitivity of the instrument and outline the various science cases it is designed to explore.
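For readers unfamiliar with PD estimation, the quantities quoted above follow from the normalized Stokes parameters. The snippet below is a generic sketch (not the LAST-P pipeline): it propagates first-order uncertainties and applies the standard $\sqrt{p^2 - \sigma_p^2}$ debiasing, since the raw degree is positively biased at low signal-to-noise.

```python
import numpy as np

def linear_polarization(q, u, sigma_q, sigma_u):
    """Polarization degree (debiased), its first-order uncertainty, and the
    electric-vector position angle (degrees) from normalized Stokes q, u.
    Generic textbook estimator, not any specific instrument pipeline."""
    p = np.hypot(q, u)
    sigma_p = np.sqrt((q * sigma_q) ** 2 + (u * sigma_u) ** 2) / p
    p_debiased = np.sqrt(max(p**2 - sigma_p**2, 0.0))
    angle_deg = 0.5 * np.degrees(np.arctan2(u, q))
    return p_debiased, sigma_p, angle_deg

# Example: q = 3%, u = 4%, 0.1% uncertainty on each -> PD near 5%.
p, sp, ang = linear_polarization(0.03, 0.04, 0.001, 0.001)
```

At the quoted precisions (0.7% at magnitude 17), the debiasing correction is small for strongly polarized sources but matters for the faint end of the survey.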
Binary black holes (BBHs) exhibiting spin-induced orbital precession offer unique insight into compact-binary formation channels, cosmology, and tests of general relativity. We conduct a dedicated search for precessing BBHs in Advanced LIGO's third observing run (O3) using the harmonic decomposition method of precessing waveforms. We introduce a novel scheme to reduce the number of filters in a harmonic search. With our new approach, our template bank requires $5\times$ fewer filters compared to another proposed precessing search in the same region. We do not find any new significant events. Our new search method achieves up to $\sim 28\%$ improvement in sensitivity and up to $5\times$ lower computational costs compared to existing precessing search pipelines. Our method enables scalable, sensitive searches for precessing BBHs in future gravitational-wave observing runs.
Thermal electron measurements in space plasmas typically suffer at low energies from spacecraft emissions of photo- and secondary electrons and from charging of the spacecraft body. We examine these effects by use of numerical simulations in the context of electron measurements acquired by the Electron Analyser System (SWA-EAS) on board the Solar Orbiter mission. We employed the Spacecraft Plasma Interaction Software to model the interaction of the Solar Orbiter spacecraft with solar wind plasma and we implemented a virtual detector to simulate the measured electron energy spectra as observed in situ by the SWA-EAS experiment. Numerical simulations were set according to the measured plasma conditions at 0.3~AU. We derived the simulated electron energy spectra as detected by the virtual SWA-EAS experiment for different electron populations and compared these with both the initial plasma conditions and the corresponding real SWA-EAS data samples. We found qualitative agreement between the simulated and real data observed in situ by the SWA-EAS detector. Contrary to other space missions, the contamination by cold electrons emitted from the spacecraft is seen well above the spacecraft potential energy threshold. A detailed analysis of the simulated electron energy spectra demonstrates that contamination above the threshold is a result of cold electron fluxes emitted from distant spacecraft surfaces. The relative position of the break in the simulated spectrum with respect to the spacecraft potential slightly deviates from that in the real observations. This may indicate that the real potential of the SWA-EAS detector with respect to ambient plasma differs from the spacecraft potential value measured on board. The overall contamination is shown to be composed of emissions from a number of different sources and their relative contribution varies with the ambient plasma conditions.
We present MARVEL (https://ligogpt.mit.edu/marvel), a locally deployable, open-source framework for domain-aware question answering and assisted scientific research. It is designed to address the increasing demand of scientific groups for a digital assistant that can read highly technical data, cite precisely, and operate within authenticated networks. MARVEL combines a fast path for straightforward queries with a more deliberate DeepSearch mode that integrates retrieval-augmented generation and Monte Carlo Tree Search. It explores complementary subqueries, allocates more compute to promising branches, and maintains a global evidence ledger that preserves sources during drafting. We applied this framework in the context of gravitational-wave research related to the Laser Interferometer Gravitational-wave Observatory. Answers are grounded in a curated semantic index of research literature, doctoral theses, LIGO documents, and long-running detector electronic logbooks, with targeted web searches when appropriate. Because direct benchmarking against commercial LLMs cannot be performed on private data, we evaluated MARVEL on two publicly available surrogate datasets that capture comparable semantic and technical characteristics. On these benchmarks, MARVEL matches a GPT-4o mini baseline on literature-centric queries and substantially outperforms it on detector-operations content, where domain retrieval and guided reasoning are decisive. By making the complete framework and evaluation datasets openly available, we aim to provide a reproducible foundation for developing domain-specific scientific assistants.
Stage-IV dark energy wide-field surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), will observe an unprecedented number density of galaxies. As a result, the majority of imaged galaxies will visually overlap, a phenomenon known as blending. Blending is expected to be a leading source of systematic error in astronomical measurements. To mitigate this systematic, we propose a new probabilistic method for detecting, deblending, and measuring the properties of galaxies, called the Bayesian Light Source Separator (BLISS). Given an astronomical survey image, BLISS uses convolutional neural networks to produce a probabilistic astronomical catalog by approximating the posterior distribution over the number of light sources, their centroids' locations, and their types (galaxy vs. star). BLISS additionally includes a denoising autoencoder to reconstruct unblended galaxy profiles. As a first step towards demonstrating the feasibility of BLISS for cosmological applications, we apply our method to simulated single-band images whose properties are representative of year-10 LSST coadds. First, we study each BLISS component independently and examine its probabilistic output as a function of SNR and degree of blending. Then, by propagating the probabilistic detections from BLISS to its deblender, we produce per-object flux posteriors. Using these posteriors yields a substantial improvement in aperture flux residuals relative to deterministic detections alone, particularly for highly blended and faint objects. These results highlight the potential of BLISS as a scalable, uncertainty-aware tool for mitigating blending-induced systematics in next-generation cosmological surveys.
We present a new fiber assignment algorithm for a robotic fiber positioner system in multi-object spectroscopy. Modern fiber positioner systems typically have overlapping patrol regions, resulting in the number of observable targets being highly dependent on the fiber assignment scheme. To maximize observable targets without fiber collisions, the algorithm proceeds in three steps. First, it assigns the maximum number of targets for a given field of view without considering any collisions between fiber positioners. Second, the fibers in collision are grouped. Finally, the algorithm finds the optimal solution resolving the collision problem within each group. We compare the results from this new algorithm with those from a simple algorithm that assigns targets in descending order of their rank by considering collisions. As a result, we could increase the overall completeness of target assignments by 10% with this new algorithm in comparison with the case using the simple algorithm in a field with 150 fibers. Our new algorithm is designed for the All-sky SPECtroscopic survey of nearby galaxies (A-SPEC) based on the K-SPEC spectrograph system, but can also be applied to similar fiber-based systems with heavily overlapping fiber positioners.
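The three steps can be sketched in pure Python. This is a toy of the strategy only, not the A-SPEC implementation: the collision model is abstract, ties are broken arbitrarily, and conflicts are resolved only within each group (cross-group interactions are ignored here).

```python
import itertools

def assign_fibers(reach, collide):
    """Toy three-step fiber assignment.  reach[i] lists the targets fiber i
    can patrol, best rank first; collide(a, b) says whether assigning
    targets a and b puts their fibers in collision.
    Step 1: greedy best-ranked pick per fiber, ignoring conflicts.
    Step 2: group fibers whose tentative picks conflict (union-find).
    Step 3: brute-force each group over (target or None) choices, keeping
    the conflict-free combination assigning the most fibers."""
    pick = [r[0] if r else None for r in reach]                 # step 1

    n = len(reach)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(n), 2):            # step 2
        if pick[i] is not None and pick[j] is not None and (
                pick[i] == pick[j] or collide(pick[i], pick[j])):
            parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)

    for members in groups.values():                             # step 3
        if len(members) == 1:
            continue
        options = [reach[i] + [None] for i in members]
        best, best_score = None, -1
        for combo in itertools.product(*options):
            chosen = [c for c in combo if c is not None]
            if len(set(chosen)) != len(chosen):
                continue                                        # duplicate target
            if any(collide(a, b) for a, b in itertools.combinations(chosen, 2)):
                continue
            if len(chosen) > best_score:
                best, best_score = combo, len(chosen)
        for i, c in zip(members, best):
            pick[i] = c
    return pick

# Fibers 0 and 1 both want target 0; fibers 2 and 3 pick colliding targets.
picks = assign_fibers([[0, 1], [0], [2], [3]],
                      collide=lambda a, b: {a, b} == {2, 3})
```

Grouping first is what keeps the brute-force step tractable: the exponential search runs only over small conflict groups rather than all 150 fibers at once.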
We developed a SpaceWire-based data acquisition (DAQ) system for the FOXSI-4 and FOXSI-5 sounding rocket experiments, which aim to observe solar flares with high sensitivity and dynamic range using direct X-ray focusing optics. The FOXSI-4 mission, launched on April 17, 2024, achieved the first direct focusing observation of a GOES M1.6 class solar flare with imaging spectroscopy capabilities in the soft and hard X-ray energy ranges, using a suite of advanced detectors, including two CMOS sensors, four CdTe double-sided strip detectors (CdTe-DSDs), and a Quad-Timepix3 detector. To accommodate the high photon flux from a solar flare and these diverse detector types, a modular DAQ network architecture was implemented based on SpaceWire and the Remote Memory Access Protocol (RMAP). This modular architecture enabled fast, reliable, and scalable communication among various onboard components, including detectors, readout boards, onboard computers, and telemetry systems. In addition, by standardizing the communication interface and modularizing each detector unit and its associated electronics, the architecture also supported distributed development among collaborating institutions, simplifying integration and reducing overall complexity. To realize this architecture, we developed FPGA-based readout boards (SPMU-001 and SPMU-002) that support SpaceWire communication for high-speed data transfer and flexible instrument control. In addition, a real-time ground support system was developed to handle telemetry and command operations during flight, enabling live monitoring and adaptive configuration of onboard instruments in response to the properties of the observed solar flare. The same architecture is being adopted for the upcoming FOXSI-5 mission, scheduled for launch in 2026.
Experiments designed to detect ultra-high energy (UHE) neutrinos using radio techniques are also capable of detecting the radio signals from cosmic-ray (CR) induced air showers. These CR signals are important both as a background and as a tool for calibrating the detector. The Askaryan Radio Array (ARA), a radio detector array, is designed to detect UHE neutrinos. The array currently comprises five independent stations, each instrumented with antennas deployed at depths of up to 200 meters within the ice at the South Pole. In this study, we focus on a candidate event recorded by ARA Station 2 (A2) that shows features consistent with a downward-going CR-induced air shower. This includes distinctive double-pulse signals in multiple channels, interpreted as geomagnetic and Askaryan radio emissions arriving at the antennas in sequence. To investigate this event, we use detailed simulations that combine a modern ice-impacting CR shower simulation framework, FAERIE, with a realistic detector simulation package, AraSim. We will present results for an optimization of the event topology, identified through simulated CR showers, comparing the vertex reconstruction of both the geomagnetic and Askaryan signals of the event, as well as the observed time delays between the two signals in each antenna.
We present a proposal for a nanomechanical membrane resonator integrated into a moderate-finesse ($\mathcal{F}\sim 10$) optical cavity as a versatile platform for detecting high-frequency gravitational waves and vector dark matter. Gravitational-wave sensitivity arises from cavity-length modulation, which resonantly drives membrane motion via the radiation-pressure force. This force also enables in situ tuning of the membrane's resonance frequency by nearly a factor of two, allowing a frequency coverage from 0.5 to 40 kHz using six membranes. The detector achieves a peak strain sensitivity of $2\times 10^{-23}/\sqrt{\text{Hz}}$ at 40 kHz. Using a silicon membrane positioned near a gallium-arsenide input mirror additionally provides sensitivity to vector dark matter via differential acceleration from their differing atomic-to-mass number ratios. The projected reach surpasses the existing limits in the range of $2\times 10^{-12}$ to $2\times 10^{-10}$ $\text{eV}/c^2$ for a one-year measurement. Consequently, the proposed detector offers a unified approach to searching for physics beyond the Standard Model, probing both high-frequency gravitational waves and vector dark matter.
The Lazuli Space Observatory is a 3-meter aperture astronomical facility designed for rapid-response observations and precision astrophysics across visible to near-infrared wavelengths (400-1700 nm bandpass). An off-axis, freeform telescope delivers diffraction-limited image quality (Strehl $>$0.8 at 633 nm) to three instruments across a wide, flat focal plane. The three instruments provide complementary capabilities: a Wide-field Context Camera (WCC) delivers multi-band imaging over a 35' $\times$ 12' footprint with high-cadence photometry; an Integral Field Spectrograph (IFS) provides continuous 400-1700 nm spectroscopy at R $\sim$ 100-500 for stable spectrophotometry; and an ExtraSolar Coronagraph (ESC) enables high-contrast imaging expected to reach raw contrasts of $10^{-8}$ and post-processed contrasts approaching $10^{-9}$. Operating from a 3:1 lunar-resonant orbit, Lazuli will respond to targets of opportunity in under four hours--a programmatic requirement designed to enable routine temporal responsiveness that is unprecedented for a space telescope of this size. Lazuli's technical capabilities are shaped around three broad science areas: (1) time-domain and multi-messenger astronomy, (2) stars and planets, and (3) cosmology. These capabilities enable a potent mix of science spanning gravitational wave counterpart characterization, fast-evolving transients, Type Ia supernova cosmology, high-contrast exoplanet imaging, and spectroscopy of exoplanet atmospheres. While these areas guide the observatory design, Lazuli is conceived as a general-purpose facility capable of supporting a wide range of astrophysical investigations, with open time for the global community. We describe the observatory architecture and capabilities in the preliminary design phase, with science operations anticipated following a rapid development cycle from concept to launch.
We have used 23 years of Hubble Space Telescope ACS/SBC data to study what background levels are encountered in practice and how much they vary. The backgrounds vary considerably, with F115LP, F122M, F125LP, PR110L, and PR130L all showing over an order of magnitude of variation in background between observations, apparently due to changes in airglow. The F150LP and F165LP filters, which are dominated by dark rate, not airglow, exhibit a far smaller variation in backgrounds. For the filters where the background is generally dominated by airglow, the backgrounds measured from the data are significantly lower than what the ETC predicts (as of ETC v33.2). The ETC predictions for `average' airglow are greater than the median of our measured background values by factors of 2.51, 2.64, 105, and 3.64, for F115LP, F122M, F125LP, and F140LP, respectively. A preliminary analysis suggests this could be due to certain OI airglow lines usually being fainter than expected by the ETC. With reduced background levels, the shorter-wavelength SBC filters can conduct background-limited observations much more rapidly than had previously been expected. As of ETC v34.1, a new option will be included for SBC calculations, allowing users to employ empirical background percentiles to estimate required exposure times.
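The practical impact follows from a generic signal-to-noise scaling rather than any ETC internals: in the background-limited regime, required exposure time is proportional to the background rate, so a background overestimated by 2.51x implies exposure times overestimated by the same factor. A minimal sketch, assuming $\mathrm{SNR} = F t / \sqrt{B t}$ for source rate $F$ and background rate $B$:

```python
import numpy as np

def t_background_limited(snr, src_rate, bkg_rate):
    """Exposure time needed to reach a target SNR when background noise
    dominates: SNR = F t / sqrt(B t)  =>  t = SNR^2 B / F^2.
    A generic scaling argument, not the STScI ETC algorithm; the count
    rates below are illustrative."""
    return snr**2 * bkg_rate / src_rate**2

# Required time with the assumed vs. the (2.51x lower) measured background.
t_assumed = t_background_limited(5.0, 0.01, 0.10)
t_measured = t_background_limited(5.0, 0.01, 0.10 / 2.51)
```

The same proportionality explains why the empirical-percentile option matters most for the airglow-dominated filters, where the median backgrounds sit far below the old `average' prediction.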
Astronomical measurements are often integrated over finite exposures, which can obscure latent variability on comparable timescales. Correctly accounting for exposure integration with Gaussian Processes (GPs) in such scenarios is essential but computationally challenging: once exposure times vary or overlap across measurements, the covariance matrix forfeits any quasiseparability, forcing O($N^2$) memory and O($N^3$) runtime costs. Linear Gaussian state space models (SSMs) are equivalent to GPs and have well-known O($N$) solutions via the Kalman filter and RTS smoother. In this work, we extend the GP-SSM equivalence to handle integrated measurements while maintaining scalability by augmenting the SSM with an integral state that resets at exposure start times and is observed at exposure end times. This construction yields exactly the same posterior as a fully integrated GP but in O($N$) time on a CPU, and is parallelizable down to O($N/T + \log T$) on a GPU with $T$ parallel workers. We present smolgp (State space Model for O(Linear/log) GPs), an open-source Python/JAX package offering drop-in compatibility with tinygp while supporting both standard and exposure-aware GP modeling. As SSMs provide a framework for representing general GP kernels via their series expansion, smolgp also brings scalable performance to many commonly used covariance kernels in astronomy that lack quasiseparability, such as the quasiperiodic kernel. The substantial performance boosts at large $N$ will enable massive multi-instrument cross-comparisons where exposure overlap is ubiquitous, and unlocks the potential for analyses with more complex models and/or higher dimensional datasets.
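The integral-state construction can be demonstrated end to end in a discrete-time toy (smolgp works with continuous-time SSMs; this sketch only illustrates the idea). A latent AR(1) signal is observed through exposure averages; a Kalman filter on the state augmented with a running sum, reset at each exposure start and read out at the end, reproduces the fully integrated batch GP posterior exactly.

```python
import numpy as np

# Latent AR(1) signal, observed only through two 3-step exposure averages.
a, q, P0, r, m, N = 0.9, 1.0, 2.0, 0.05, 3, 6
rng = np.random.default_rng(1)
x = np.zeros(N)
x[0] = rng.normal(0.0, np.sqrt(P0))
for k in range(1, N):
    x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
y = np.array([x[:3].mean(), x[3:].mean()]) + rng.normal(0.0, np.sqrt(r), 2)

# --- O(N) Kalman filter on the augmented state z_k = [x_k, s_k] ---------
mean, cov = np.zeros(2), np.full((2, 2), P0)      # s_0 = x_0
for k in range(1, N):
    reset = 0.0 if k % m == 0 else 1.0            # new exposure: sum restarts
    F = np.array([[a, 0.0], [a, reset]])
    mean = F @ mean
    cov = F @ cov @ F.T + q * np.ones((2, 2))     # shared process noise
    if k % m == m - 1:                            # exposure end: observe s/m
        H = np.array([[0.0, 1.0 / m]])
        S = (H @ cov @ H.T + r).item()
        K = cov @ H.T / S
        mean = mean + (K[:, 0] * (y[k // m] - H @ mean)).ravel()
        cov = cov - K @ H @ cov
kalman_last = mean[0]

# --- Batch "integrated GP" posterior for the last state, for comparison --
C = np.zeros((N, N))                              # prior covariance of x
C[0, 0] = P0
for k in range(1, N):
    C[k, k] = a**2 * C[k - 1, k - 1] + q
    for j in range(k):
        C[k, j] = C[j, k] = a * C[k - 1, j]
W = np.zeros((2, N))                              # exposure-averaging matrix
W[0, :3] = W[1, 3:] = 1.0 / m
S_y = W @ C @ W.T + r * np.eye(2)
batch_last = C[-1] @ W.T @ np.linalg.solve(S_y, y)
```

The batch route costs O($N^3$) once exposures vary or overlap, while the augmented filter touches each time step once; the agreement of the two posteriors is the point of the construction.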
Physically motivated Gaussian process (GP) kernels for stellar variability, like the commonly used damped, driven simple harmonic oscillators that model stellar granulation and p-mode oscillations, quantify the instantaneous covariance between any two points. For kernels whose timescales are significantly longer than the typical exposure times, such GP kernels are sufficient. For time series where the exposure time is comparable to the kernel timescale, the observed signal represents an exposure-averaged version of the true underlying signal. This distinction is important in the context of recent data streams from Extreme Precision Radial Velocity (EPRV) spectrographs, such as fast-readout stellar data for asteroseismology targets and solar data monitoring the Sun's variability during daytime observations. Current solar EPRV facilities have significantly different exposure times per site, owing to the different design choices made. Consequently, each instrument traces a different binned version of the same "latent" signal. Here we present a GP framework that accounts for exposure times by computing integrated forms of the instantaneous kernels typically used. These functions allow one to predict the true latent oscillation signals and the exposure-binned version expected by each instrument. We extend the framework to work for instruments with significant time overlap (i.e., similar longitude) by including relative instrumental drift components that can be predicted and separated from the stellar variability components. We use Sun-as-a-star EPRV datasets as our primary example, but present these approaches in a generalized way for application to any dataset where exposure times are a relevant factor or where instruments with significant overlap are combined.
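The integrated-kernel idea reduces to a double average of the instantaneous kernel over the two exposure windows. The sketch below evaluates that average numerically for a squared-exponential kernel (a stand-in for the damped SHO kernels named above; the paper uses closed-form integrals of those kernels, not this midpoint rule).

```python
import numpy as np

def k_inst(t1, t2, amp=1.0, tau=1.0):
    """Instantaneous squared-exponential kernel, a simple stand-in for the
    damped SHO kernels used for granulation and p-modes."""
    return amp * np.exp(-0.5 * ((t1 - t2) / tau) ** 2)

def k_exposure(t1, t2, d1, d2, n=200, **kw):
    """Covariance between two exposure-AVERAGED measurements centred on t1
    and t2 with exposure lengths d1 and d2: the double average of the
    instantaneous kernel over both windows, here via a midpoint rule."""
    u = t1 - d1 / 2 + d1 * (np.arange(n) + 0.5) / n
    v = t2 - d2 / 2 + d2 * (np.arange(n) + 0.5) / n
    return float(np.mean(k_inst(u[:, None], v[None, :], **kw)))

# Two instruments with different exposure times trace different binned
# versions of the same latent signal: longer exposures suppress variance.
var_short = k_exposure(0.0, 0.0, 0.5, 0.5)    # exposure << kernel timescale
var_long = k_exposure(0.0, 0.0, 2.0, 2.0)     # exposure ~ kernel timescale
```

Building the full covariance matrix from `k_exposure`, with each instrument's own exposure length, is what lets a single latent GP jointly explain differently binned data streams.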