
2024 | Book

Dynamic Data Driven Applications Systems

4th International Conference, DDDAS 2022, Cambridge, MA, USA, October 6–10, 2022, Proceedings

About this Book

This book constitutes the refereed proceedings of the 4th International Conference on Dynamic Data Driven Applications Systems, DDDAS 2022, which took place in Cambridge, MA, USA, during October 6–10, 2022.

The 31 regular papers in the main track and 5 regular papers from the Wildfires panel, as well as one workshop paper, were carefully reviewed and selected for inclusion in the book. They were organized in the following topical sections: DDDAS2022 Main-Track Plenary Presentations; Keynotes; DDDAS2022 Main-Track: Wildfires Panel; Workshop on Climate, Life, Earth, Planets.

Table of Contents

Frontmatter

Introduction to the DDDAS2022 Conference

Frontmatter
Introduction to the DDDAS2022 Conference Infosymbiotics/Dynamic Data Driven Applications Systems

The 4th International DDDAS 2022 Conference, convened on October 6–10, featured presentations on Dynamic Data Driven Applications Systems (DDDAS)-based approaches and capabilities, in a wide set of areas, with an overarching theme of “InfoSymbiotics/DDDAS for human, environmental and engineering sustainment”. The topics included aerospace mechanics and space systems, networked communications and autonomy, and biomedical and environmental systems, and featured recent techniques in generative Artificial Intelligence, theoretical Machine Learning, and dynamic Digital Twins. Capturing the tenets of the DDDAS paradigm across these areas, solutions were presented to address challenges in systems-of-systems approaches, providing analysis, assessments, and enhanced capabilities in the presence of complex and big data. The conference comprised the main track, which featured 31 plenary presentations of peer-reviewed papers, five keynotes, an invited talk, and a panel on wildfires monitoring. In conjunction with the main track of the DDDAS conference, a Workshop on Climate and Life, Earth, Planets (CLEPs) was conducted, which featured 20 presentations on environmental challenges and a panel on Seismic and Nuclear Explosion monitoring. In addition to the papers of the plenary presentations in the main track of the conference, the DDDAS2022 Proceedings feature an overview of the conference, a synopsis of the main-track papers, and summaries of the keynotes and the wildfires panel, followed by corresponding papers contributed by the speakers in these sessions. Additional information and archival materials, including the presentations’ slides and recordings, are available on the DDDAS website: www.1dddas.org.

Erik Blasch, Frederica Darema

Main-Track Plenary Presentations - Aerospace

Frontmatter
Generalized Multifidelity Active Learning for Gaussian-process-based Reliability Analysis

Efficient methods for active learning in complex physical systems are essential for achieving the two-way interaction between data and models that underlies DDDAS. This work presents a two-stage multifidelity active learning method for Gaussian-process-based reliability analysis. In the first stage, the method allows for the flexibility of using any single-fidelity acquisition function for failure boundary identification when selecting the next sample location. We demonstrate the generalized multifidelity method using the existing expected feasibility, U-learning, and targeted integrated mean square error acquisition functions, or their a priori Monte Carlo sampled variants. The second stage uses a weighted information-gain-based criterion for fidelity model selection. The multifidelity method leads to significant computational savings over the single-fidelity versions for real-time reliability analysis involving expensive physical system simulations.

Anirban Chaudhuri, Karen Willcox
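As a companion to the abstract above, the following minimal numpy sketch illustrates the kind of single-fidelity acquisition the generalized method can wrap, here the U-learning function on a Gaussian-process surrogate of a limit-state function. The kernel, the limit-state function g, and all hyperparameters are illustrative assumptions; the paper's multifidelity stage and information-gain-based model selection are not reproduced.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.3, variance=1.0):
    """Squared-exponential kernel matrix."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-6):
    """GP posterior mean and standard deviation at the query points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(np.diag(rbf_kernel(X_query, X_query)) - np.sum(v**2, 0), 1e-12, None)
    return mu, np.sqrt(var)

# Illustrative limit-state function: failure when g(x) < 0.
g = lambda X: 0.6 - np.sin(3 * X[:, 0]) * X[:, 1]

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(8, 2))       # initial design
y_train = g(X_train)
X_cand = rng.uniform(-1, 1, size=(2000, 2))     # Monte Carlo candidate pool

for it in range(20):                             # active-learning loop
    mu, sigma = gp_posterior(X_train, y_train, X_cand)
    U = np.abs(mu) / sigma                       # U-learning acquisition
    x_next = X_cand[np.argmin(U)]                # most ambiguous point near g = 0
    X_train = np.vstack([X_train, x_next])
    y_train = np.append(y_train, g(x_next[None, :]))

pf = np.mean(gp_posterior(X_train, y_train, X_cand)[0] < 0.0)
print(f"estimated failure probability ~ {pf:.3f}")
```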
Essential Properties of a Multimodal Hypersonic Object Detection and Tracking System

Hypersonic object detection and tracking is a necessity for the future of the commercial aircraft, space exploration, and air defense sectors. However, hypersonic object detection and tracking in practice is a complex task that is limited by physical, geometrical, and sensor constraints. Atmospheric absorption and scattering, line-of-sight obstructions, and plasma sheaths around hypersonic objects are just a few of the reasons why an adaptive, multiplatform, multimodal system is required for hypersonic object detection and tracking. We review recent papers on the detection and communication of hypersonic objects, which model hypersonic objects with various solid body geometries, surface materials, and flight patterns to examine electromagnetic radiation interactions of the hypersonic object in the atmospheric medium as a function of velocity, altitude, and heading. The key findings from these research papers are combined with simple gas and thermal dynamics classical physics models to establish baselines for hypersonic object detection. In this paper, we make a case for the necessity of an adaptive multimodal low-earth orbit network consisting of a constellation of satellites communicating with each other in real time for hypersonic detection and tracking.

Zachary Mulhollan, Marco Gamarra, Anthony Vodacek, Matthew Hoffman
Dynamic Airspace Control via Spatial Network Morphing

In the coming years, a plethora of new and autonomous aircraft will fill the airspace, approaching a density similar to the ground traffic below. At the same time, human pilots who use the prevailing navigational tools and decision processes will continue to fly along flexible trajectories, now contending with inflexible non-human agents. As the density of the airspace increases, the number of potential conflicts also rises, leading to a possibly disastrous cascade effect that can fail even the most advanced tactical see-and-avoid algorithms. Any engineered solution that maintains safety in the airspace must satisfy both the computational requirements for effective airspace management and the political requirement that human pilots should maintain priority in the airspace. To this end, the research presented here expands on a concept of air traffic management called the Lane-Based Approach and describes a method for morphing the underlying spatial network to effectively deal with multiple potential conflicts. The spatial network, which represents a model of the airspace occupied by autonomous aircraft, is mutated with respect to extrapolated human-piloted trajectories, leading to a real-world execution that modifies the trajectories of multiple vehicles at once. This reduces the number of pairwise deconfliction operations that must occur to maintain safe separation and reduces the possibility of a cascade effect. An experiment using real Automatic Dependent Surveillance-Broadcast (ADS-B) data, representing human-piloted aircraft trajectories, and simulated autonomous aircraft demonstrates the proposed method.

David Sacharny, Thomas Henderson, Nicola Wernecke
On Formal Verification of Data-Driven Flight Awareness: Leveraging the Cramér-Rao Lower Bound of Stochastic Functional Time Series Models

This work investigates the application of the Cramér-Rao Lower Bound (CRLB) theorem, within the framework of Dynamic Data Driven Applications Systems (DDDAS), in view of the formal verification of state estimates via stochastic Vector-dependent Functionally Pooled Auto-Regressive (VFP-AR) models. The VFP-AR model is identified via data obtained from wind tunnel experiments on a “fly-by-feel” wing structure under multiple flight states (i.e., angle of attack, velocity). The VFP-based CRLB of the state estimates is derived for each true flight state, reflecting the state estimation capability of the model considering the data, model, and estimation assumptions. Apart from the CRLB obtained from pristine data and models, CRLBs are estimated using artificially corrupted testing data and/or sub-optimal models. Comparisons are made between CRLBs and state estimations from corrupted and pristine conditions. The obtained state estimates are verified against a mechanically verified formal proof of the CRLB theorem using Athena, which provides an irrefutable guarantee of soundness as long as the specified assumptions are followed. The results of the study indicate the potential of using a CRLB-based formal verification framework for state estimation via stochastic FP time series models.

Peiyuan Zhou, Saswata Paul, Airin Dutta, Carlos Varela, Fotis Kopsaftopoulos
Coupled Sensor Configuration and Path-Planning in a Multimodal Threat Field

A coupled path-planning and sensor configuration method is proposed. The path-planning objective is to minimize exposure to an unknown, spatially-varying, and temporally static scalar field called the threat field. The threat field is modeled as a weighted sum of several scalar fields, each representing a mode of threat. A heterogeneous sensor network takes noisy measurements of the threat field. Each sensor in the network observes one or more threat modes within a circular field of view (FoV). The sensors are configurable, i.e., parameters such as the location and size of the field of view can be changed. The measurement noise is assumed to be normally distributed with zero mean and a variance that monotonically increases with the size of the FoV, emulating the FoV vs. resolution trade-off in most sensors. Gaussian Process regression is used to estimate the threat field from these measurements. The main innovation of this work is that sensor configuration is performed by maximizing a so-called task-driven information gain (TDIG) metric, which quantifies uncertainty reduction in the cost of the planned path. Because the TDIG does not have any convenient structural properties, a surrogate function called the self-adaptive mutual information (SAMI) is considered. Sensor configuration based on the TDIG or SAMI introduces coupling with path-planning in accordance with the dynamic data-driven applications systems paradigm. The benefit of this approach is that near-optimal plans are found with a relatively small number of measurements. In comparison to decoupled path-planning and sensor configuration based on traditional information-driven metrics, the proposed coupled sensor configuration and path-planning (CSCP) method results in near-optimal plans with fewer measurements.

Chase L. St. Laurent, Raghvendra V. Cowlagi
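The following numpy sketch illustrates the estimation step the abstract describes: Gaussian Process regression of a scalar threat field from noisy sensor measurements whose variance grows with field-of-view size. The threat modes, noise model, and kernel are assumptions for illustration; the TDIG/SAMI-driven sensor configuration and the path planner are not shown.

```python
import numpy as np

def rbf(X1, X2, ell=0.25):
    """Squared-exponential kernel with unit prior variance."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * d2 / ell**2)

def threat(X):
    """Illustrative threat field: weighted sum of two Gaussian 'threat modes'."""
    m1 = np.exp(-np.sum((X - [0.3, 0.7])**2, 1) / 0.05)
    m2 = np.exp(-np.sum((X - [0.8, 0.2])**2, 1) / 0.08)
    return 1.5 * m1 + 1.0 * m2

rng = np.random.default_rng(1)
sensor_pos = rng.uniform(0, 1, size=(12, 2))          # configurable sensor locations
fov_radius = rng.uniform(0.05, 0.3, size=12)          # configurable FoV sizes
noise_var = 0.01 + 0.5 * fov_radius**2                # noise grows with FoV (assumed model)
y = threat(sensor_pos) + rng.normal(0, np.sqrt(noise_var))

# Heteroscedastic GP posterior on a grid of candidate path waypoints.
K = rbf(sensor_pos, sensor_pos) + np.diag(noise_var)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30)), -1).reshape(-1, 2)
Ks = rbf(sensor_pos, grid)
mean = Ks.T @ np.linalg.solve(K, y)                    # estimated threat along waypoints
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), 0)     # posterior variance (prior variance = 1)
print("max posterior std on grid:", np.sqrt(var.max()))
```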

Main-Track Plenary Presentations - Space Systems

Frontmatter
Geometric Solution to Probabilistic Admissible Region Based Track Initialization

Probabilistic Admissible Region (PAR) is a technique to initialize the probability density function (pdf) of the states of a Resident Space Object (RSO). It combines a priori information about some of the orbital elements and a single partial-state observation to initialize the pdf of the RSO. A unified, geometrical solution to the Probabilistic Admissible Region, G-PAR, is proposed. The proposed scheme gives a closed-form, clearly explainable solution for PAR particle mapping for the first time. It is shown that the G-PAR can be posed as a Bayesian measurement update of the very diffuse pdf of the states given by the postulated statistics. The effectiveness of the proposed G-PAR is shown on diverse combinations of sensors and a priori knowledge. Its unique advantages in resolving the data association problem inherent in initializing the pdf of the objects when tracking multiple objects are also presented.

Utkarsh Ranjan Mishra, Weston Faber, Suman Chakravorty, Islam Hussein, Benjamin Sunderland, Siamak Hesar
Radar Cross-Section Modeling of Space Debris

Space domain awareness (SDA) has become increasingly important as industry and society take further interest in occupying space for surveillance, communication, and environmental services. To maintain safe launch and orbit placement of future satellites, there is a need to reliably track the positions and trajectories of discarded launch components and other debris objects orbiting Earth. In particular, debris with sizes on the order of 20 cm or smaller travelling at high speeds maintains enough energy to pierce and permanently damage current, functional satellites. To monitor debris, the Dynamic Data Driven Applications Systems (DDDAS) paradigm can enhance accuracy with object modeling and observational updates. This paper presents a theoretical analysis of modeling the radar returns of space debris as simulated signatures for comparison to real measurements. For radar modeling, when the incident radiation wavelength is comparable to the radius of the debris object, Mie scattering is dominant. Mie scattering describes situations where the radiation scatter propagates predominantly, i.e., contains the greatest power density, along the same direction as the incident wave. Mie scatter modeling is especially useful when tracking objects with forward-scatter bistatic radar, as the transmitter, target, and receiver lie along the same geometrical trajectory. The Space Watch Observing Radar Debris Signatures (SWORDS) baseline method involves modeling the radar cross-sections (RCS) of space debris signatures in relation to the velocity and rotational motions of space debris. The results show the impact of the debris radii varying from 20 cm down to 1 cm when illuminated by radiation of comparable wavelength. The resulting nominal scattering relationships determine how debris size and motion affect the radar signature. The SWORDS method demonstrates that the RCS is proportional to linear size, and that the Doppler shift is predominantly influenced by translational motion.

Justin K. A. Henry, Ram M. Narayanan, Puneet Singla
High-Resolution Imaging Satellite Constellation

Large-scale Low-Earth Orbit (LEO) satellite constellations have wide applications in communications, surveillance, and remote sensing. Following the success of the SpaceX Starlink system, multiple LEO satellite constellations have been planned, and satellite communications is considered a major component of future sixth generation (6G) mobile communication systems. This paper presents a novel LEO satellite constellation imaging (LEOSCI) concept in which the large number of satellites is exploited to realize super-high-resolution imaging of ground objects. The resolution of conventional satellite imaging is largely limited to around one meter. In contrast, the LEOSCI method can achieve imaging resolution well below a centimeter. We first present the new imaging principle and show that it should be augmented with a Dynamic Data Driven Applications Systems (DDDAS) design. Then, based on practical Starlink satellite orbital and signal data, we conduct extensive simulations to demonstrate a high imaging resolution below one centimeter.

Xiaohua Li, Lhamo Dorje, Yezhan Wang, Yu Chen, Erika Ardiles-Cruz

Main-Track Plenary Presentations - Network Systems

Frontmatter
Reachability Analysis to Track Non-cooperative Satellite in Cislunar Regime

Space Domain Awareness (SDA) architectures must adapt to overcome the challenges present in cislunar space. Dynamical systems theory provides tools which may be leveraged to address some of the many challenges associated with cislunar space. The PSS is an analysis tool used to reduce dimensionality and help study the properties of the system flow. Invariant manifolds have been combined with the PSS by other researchers to prescribe trajectories through various cislunar regimes. In this work, the PSS and the invariant manifolds are used to pose a set of boundary value problems which define the Δv from a nominal L2 Lyapunov orbit through the PSS. By approximating the solutions through the PSS, the admissible controls onto these highways are approximated. One viable use of this formulation of a reduced reachable set is to allow an SDA operator to intelligently task sensors to regain custody of a maneuvering spacecraft. This paper uses admissible-region concepts to intelligently reduce the reachability set for maneuvering spacecraft and studies the efficacy for multiple maneuver windows and the effects of various user-set parameters.

David Schwab, Roshan Eapen, Puneet Singla
Physics-Aware Machine Learning for Dynamic, Data-Driven Radar Target Recognition

Despite advances in Artificial Intelligence and Machine Learning (AI/ML) for automatic target recognition (ATR) using surveillance radar, there remain significant challenges to robust and accurate perception in operational environments. Physics-aware ML is an emerging field that strives to integrate physics-based models with data-driven deep learning (DL) to reap the benefits of both approaches. Physics-based models allow for the prediction of the expected radar return given any sensor position, observation angle and environmental scene. However, no model is perfect, and the dynamic nature of the sensing environment ensures that there will always be some part of the signal that is unknown, which can be modeled as noise, bias or error uncertainty. Physics-aware machine learning combines the strengths of DL and physics-based modeling to optimize trade-offs between prior versus new knowledge, models versus data, uncertainty, complexity, and computation time, for greater accuracy and robustness. This paper addresses the challenge of designing physics-aware synthetic data generation techniques for training deep models for ATR. In particular, physics-based methods for data synthesis, the limitations of current generative adversarial network (GAN)-based methods, new ways domain knowledge may be integrated for new GAN architectures, and domain adaptation of signatures from different, but related, sources of RF data are presented. The use of a physics-aware loss term with a multi-branch GAN (MBGAN) resulted in a 9% improvement in classification accuracy over that attained with the use of real data alone, and a 6% improvement over that given using data generated by a Wasserstein GAN with gradient penalty. The implications for DL-based ATR in Dynamic Data-Driven Application Systems (DDDAS) due to fully-adaptive transmissions are discussed.

Sevgi Zubeyde Gurbuz
DDDAS for Optimized Design and Management of 5G and Beyond 5G (6G) Networks

The technologies vested by the introduction of fifth generation (5G) networks as well as the emerging 6G systems present opportunities for enhanced communication and computational capabilities that will advance many large-scale critical applications in the domains of manufacturing, extended reality, power generation and distribution, water, agriculture, transportation, healthcare, and defense and security, among many others. However, for these enhanced communication networks to take full effect, these networks, including wireless infrastructure, end-devices, edge/cloud servers, base stations, core network and satellite-based elements, should be equipped with real-time decision support capabilities, cognizant of multilevel and multimodal time-varying conditions, to enable self-sustainment of the networks and communications infrastructures, for optimal management and adaptive resource allocation with minimum possible intervention from operators. To meet the highly dynamic and extreme performance requirements of these heterogeneous multi-component, multilayer communication infrastructures on latency, data rate, reliability, and other user-defined metrics, these support methods will need to leverage the accuracy of full-scale models for multi-objective optimization, adaptive management, and control of time-varying and complex operations. This paper discusses how algorithmic, methodological, and instrumentation capabilities learned from Dynamic Data Driven Applications Systems (DDDAS)-based methodologies can be applied to enable optimized and resilient design and operational management of the complex and highly dynamic 5G/6G communication infrastructures. Such smart DDDAS capabilities have been proven over more than two decades in adaptive real-time control of various systems requiring the high accuracy of full-scale modeling for multi-objective real-time decision making with efficient computational resource utilization.

Nurcin Celik, Frederica Darema, Temitope Runsewe, Walid Saad, Abdurrahman Yavuz

Plenary Presentations - Systems Support Methods

Frontmatter
DDDAS-Based Learning for Edge Computing at 5G and Beyond 5G

The emerging and foreseen advancements in 5G and Beyond 5G (B5G) networking infrastructures enhance both communications capabilities and the flexibility of executing computational tasks in a distributed manner, including those encountered in edge computing (EC). Both 5G/B5G and EC environments present complexities and dynamicity in their multi-level and multimodal infrastructures. DDDAS-based design and adaptive and optimized management of the respective 5G, B5G, and EC infrastructures are needed to tackle the stochasticity inherent in these complex and dynamic systems and to provide quality solutions for the respective requirements. In fact, both emerging communication and computational technologies and infrastructure systems can benefit from their symbiotic relationship in their corresponding adaptive and optimized management. EC enabled by 5G (and future B5G) allows efficient distributed execution of computational tasks. DDDAS-based methods can support the adaptivity in bandwidth and energy efficiencies in 5G and B5G communications. On the other hand, EC has become a very attractive feature for critical infrastructure such as energy grids, as it allows for secure and efficient real-time data processing. In order to fully exploit the advantages of EC, the communication network should be able to tackle the changing requirements related to task management within edge servers. Thus, leveraging the DDDAS paradigm, we jointly optimize the scheduling and offloading of computational tasks in an EC-enabled microgrid, considering both the physical constraints of the microgrid and the network requirements. The results showcase the superiority of the proposed DDDAS-based approaches in terms of the network utilization and operational efficiencies achieved, with microgrids as a case example.

Temitope Runsewe, Abdurrahman Yavuz, Nurcin Celik, Walid Saad
Monitoring and Secure Communications for Small Modular Reactors

Autonomous, safe and reliable operations of Small Modular Reactors (SMR), and advanced reactors (AR) in general, emerge as distinct features of innovation flowing into the nuclear energy space. Digitalization brings to the fore an array of promising benefits including, but not limited to, increased safety, higher overall efficiency of operations, longer SMR operating cycles and lower operation and maintenance (O&M) costs. On-line continuous surveillance of sensor readings can identify incipient problems and act prognostically before process anomalies or even failures emerge. In principle, machine learning (ML) algorithms can anticipate key performance variables through self-made process models, based on sensor inputs or other self-made models of reactor processes, components and systems. However, any data obtained from sensors or through various ML models need to be securely transmitted under all possible conditions, including those of cyber-attacks. Quantum information processing offers promising solutions to these threats by establishing secure communications, due to the unique properties of entanglement and superposition in quantum physics. More specifically, quantum key distribution (QKD) algorithms can be used to generate and transmit keys between the reactor and a remote user. In one of the popular QKD communication protocols, BB84, the symmetric keys are paired with an advanced encryption standard (AES) protocol protecting the information. In this work, we use ML algorithms for time series forecasting of sensors installed in a liquid sodium experimental facility and examine through computer simulations the potential of secure real-time communication of monitoring information using the BB84 protocol.

Maria Pantopoulou, Stella Pantopoulou, Madeleine Roberts, Derek Kultgen, Lefteri Tsoukalas, Alexander Heifetz
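The classical simulation below sketches the sifting and error-rate check at the heart of the BB84 protocol referenced in the abstract. The eavesdropping fraction and key length are illustrative assumptions, and error correction, privacy amplification, and the AES pairing are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000                                            # raw qubits sent

alice_bits  = rng.integers(0, 2, n)                 # Alice's random bits
alice_basis = rng.integers(0, 2, n)                 # 0 = rectilinear, 1 = diagonal
bob_basis   = rng.integers(0, 2, n)

# Optional intercept-resend eavesdropper on a fraction of the channel.
eve_fraction = 0.25
eve_hits  = rng.random(n) < eve_fraction
eve_basis = rng.integers(0, 2, n)
# Eve measures; if her basis differs from Alice's, her result is random.
eve_bits = np.where(eve_basis == alice_basis, alice_bits, rng.integers(0, 2, n))
channel_bits  = np.where(eve_hits, eve_bits, alice_bits)
channel_basis = np.where(eve_hits, eve_basis, alice_basis)

# Bob measures: a matching basis yields the channel bit, otherwise a random bit.
bob_bits = np.where(bob_basis == channel_basis, channel_bits, rng.integers(0, 2, n))

# Sifting: keep positions where Alice's and Bob's bases agree.
keep = alice_basis == bob_basis
sifted_alice, sifted_bob = alice_bits[keep], bob_bits[keep]

qber = np.mean(sifted_alice != sifted_bob)          # quantum bit error rate estimate
print(f"sifted key length: {keep.sum()}, QBER: {qber:.3f}")
# A QBER well above the protocol's tolerance would indicate eavesdropping; otherwise the
# sifted key would be distilled and paired with AES for encrypting monitoring data.
```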
Data Augmentation of High-Rate Dynamic Testing via a Physics-Informed GAN Approach

High-rate impact tests are essential in predicting component/system behavior subject to unplanned mechanical impacts, which may lead to damage. However, significant challenges exist when identifying damage in such events given their complexity; these diagnostic challenges inspire the employment of data-driven approaches as a feasible solution. Most deep machine learning techniques require a large amount of data to support effective training and reach accurate results, while the data collected from each test are extremely limited, and performing multiple tests to collect data is oftentimes unrealistically expensive. Therefore, data augmentation is very important to enhance the learning quality. A Generative Adversarial Network (GAN) is a deep learning algorithm able to generate synthetic data under a recorded testing environment. A GAN uses random input as seeds to generate adversarial models, and with sufficient training it may produce synthetic data of good quality. This paper proposes a hybrid approach which employs the output from an oversimplified FE model as the seed to drive a GAN generator, such that a drastic amount of computation is saved and the GAN training converges faster and more accurately than with random noise seeds alone. A Variational Autoencoder (VAE) is combined with the approach to reduce the data dimension, and the extracted features are classified via a Support Vector Machine (SVM). Results show that the proposed physics-informed approach improves the accuracy of the damage classifier and reduces the classification uncertainty, compared to using the original small dataset without augmentation.

Celso T. do Cabo, Mark Todisco, Zhu Mao
Unsupervised Wave Physics-Informed Representation Learning for Guided Wavefield Reconstruction

Ultrasonic guided waves enable us to monitor large regions of a structure at one time. Characterizing damage through reflection-based and tomography-based analysis or by extracting information from wavefields measured across the structure is a complex dynamic data-driven applications system (DDDAS). As part of the measurement system, guided waves are often measured with in situ piezoelectric sensors or wavefield imaging systems, such as a scanning laser Doppler vibrometer. Adding sensors onto a structure is costly in terms of components, wiring, and processing and adds to the complexity of the DDDAS, while sampling points with a laser Doppler vibrometer requires substantial time since each spatial location is often averaged to minimize perturbations introduced by dynamic data. To reduce this burden, several approaches have been proposed to reconstruct full wavefields from a small amount of data. Many of these techniques are based on compressive sensing theory, which assumes the data is sparse in some domain. Among the existing methods, sparse wavenumber analysis achieves excellent reconstruction accuracy with a small amount of data (often 50 to 100 measurements) but assumes a simple geometry (e.g., a large plate) and assumes knowledge of the transmitter location. This is insufficient in many practical scenarios since most structures have many sources of reflection. Many other compressive sensing methods reconstruct wavefields from Fourier bases. These methods are geometry agnostic but require much more data (often more than 1000 measurements). This paper demonstrates a new DDDAS approach based on unsupervised wave physics-informed representation learning. Our method enables learning full wavefield representations of guided wave datasets. Unlike most compressive sensing methodologies that utilize sparsity in some domain, the approach we developed in our lab is based on injecting wave physics into a low rank minimization algorithm. Unlike many other learning algorithms, including deep learning methods, our approach has global convergence guarantees, and the low rank minimizer enables us to predict wavefield behavior in unmeasured regions of the structure. The algorithm can also enforce the wave equation across space, time, or both dimensions simultaneously. Injecting physics also provides the algorithm tolerance to data perturbations. We demonstrate the performance of our algorithm with experimental wavefield data from a 1 m by 1 m region of an aluminum plate with a half-thickness notch in its center.

Joel B. Harley, Benjamin Haeffele, Harsha Vardhan Tetali
Passive Radio Frequency-Based 3D Indoor Positioning System via Ensemble Learning

Passive radio frequency (PRF)-based indoor positioning systems (IPS) have attracted researchers’ attention due to their low price, easy and customizable configuration, and non-invasive design. This paper proposes a PRF-based three-dimensional (3D) indoor positioning system (PIPS), which is able to use signals of opportunity (SoOP) for positioning and also capture a scenario signature. PIPS passively monitors SoOPs containing scenario signatures through a single receiver. Moreover, PIPS leverages the Dynamic Data Driven Applications System (DDDAS) framework to devise and customize the sampling frequency, enabling the system to use the most impacted frequency band as the rated frequency band. Various regression methods within three ensemble learning strategies are used to train and predict the receiver position. The PRF spectrum of 60 positions is collected in the experimental scenario, and three criteria are applied to evaluate the performance of PIPS. Experimental results show that the proposed PIPS possesses the advantages of high accuracy, configurability, and robustness.

Liangqi Yuan, Houlin Chen, Robert Ewing, Jia Li
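As an illustration of ensemble-learning regression for position estimation, the scikit-learn sketch below trains bagging, boosting, and stacking regressors on synthetic spectra. The choice of these three strategies, the stand-in data, and the specific estimators are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
# Stand-in data: 60 receiver positions with 64-bin PRF spectra (the paper uses measured spectra).
positions = rng.uniform(0, 5, size=(60, 3))                     # x, y, z in metres
spectra = np.tanh(positions @ rng.normal(size=(3, 64))) + 0.05 * rng.normal(size=(60, 64))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, positions, test_size=0.25, random_state=0)

# Three ensemble strategies, assumed here to be bagging, boosting, and stacking.
models = {
    "bagging (random forest)": RandomForestRegressor(n_estimators=200, random_state=0),
    "boosting": MultiOutputRegressor(GradientBoostingRegressor(random_state=0)),
    "stacking": MultiOutputRegressor(StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                    ("ridge", Ridge())],
        final_estimator=Ridge())),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    err = np.linalg.norm(model.predict(X_te) - y_te, axis=1)    # 3D localization error per sample
    print(f"{name:24s} mean error: {err.mean():.2f} m")
```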

Plenary Presentations - Deep Learning

Frontmatter
Deep Learning Approach for Data and Computing Efficient Situational Assessment and Awareness in Human Assistance and Disaster Response and Battlefield Damage Assessment Applications

The importance of situational assessment and awareness (SAA) becomes increasingly evident for Human Assistance and Disaster Response (HADR) and military operations. During natural disasters in populated regions, proper HADR efforts can only be planned and deployed effectively when the damage levels can be resolved in a timely manner. In today’s warfare, such as battlefield and critical region monitoring and surveillance, prompt and accurate battlefield damage assessments (BDA) are of crucial importance to gain control and ensure robust operating conditions in highly dangerous and contested environments. To design an effective HADR and BDA approach, this paper utilizes the Dynamic Data Driven Applications System (DDDAS) approach within the growing utilization of Deep Learning (DL). DL can leverage DDDAS for near-real-time (NRT) situations in which the original DL-trained model is updated from continuous learning through the effective labeling of SAA updates. To accomplish the NRT DL with DDDAS, an image-based pre- and post-conditional probability learning (IP2CL) is developed for HADR and BDA SAA. Equipped with the IP2CL, the matching pre- and post-disaster/action images are effectively encoded into one image that is then learned using DL approaches to determine the damage levels. Two scenarios of crucial importance for practical uses are examined: pixel-wise semantic segmentation and patch-based global damage classification. Results achieved by our methods in both scenarios demonstrate promising performances, showing that our IP2CL-based methods can effectively achieve data and computational efficiency and NRT updates, which is of utmost importance for HADR and BDA missions.

Jie Wei, Weicong Feng, Erik Blasch, Philip Morrone, Erika Ardiles-Cruz, Alex Aved
SpecAL: Towards Active Learning for Semantic Segmentation of Hyperspectral Imagery

We investigate active learning towards applied hyperspectral image analysis for semantic segmentation. Active learning stems from initially training on a limited data budget and then gradually querying for additional sets of labeled examples to enrich the overall data distribution and help neural networks increase their task performance. This approach works in favor of remote sensing tasks, including hyperspectral imagery analysis, where labeling can be intensive and time-consuming as the sensor angle, configured parameters, and atmospheric conditions fluctuate. In this paper, we tackle active learning for semantic segmentation using the AeroRIT dataset on three fronts - data utilization, neural network design, and formulation of the cost function (also known as acquisition factor, uncertainty estimator). Specifically, we extend the batch ensembles method to semantic segmentation for creating efficient network ensembles to estimate the network’s uncertainty as the acquisition factor for querying new sets of images. Our approach reduces the data labeling requirement and achieves competitive performance on the AeroRIT dataset by using only 30% of the entire training data.

Aneesh Rangnekar, Emmett Ientilucci, Christopher Kanan, Matthew Hoffman
Multimodal IR and RF Based Sensor System for Real-Time Human Target Detection, Identification, and Geolocation

The Dynamic Data Driven Applications System (DDDAS) paradigm incorporates forward estimation with inverse modeling, augmented with contextual information. For cooperative infrared (IR) and radio-frequency (RF) based automatic target detection and recognition (ATR) systems, the advantages of multimodal sensing and machine learning (ML) enhance real-time object detection and geolocation from an unmanned aerial vehicle (UAV). Using an RF subsystem, including the linear frequency modulated continuous wave (LFMCW) ranging radar and the smart antenna, line-of-sight (LOS) and non-line-of-sight (NLOS) friendly objects are detected and located. The IR subsystem detects and locates all human objects in a LOS scenario, providing safety alerts to humans entering hazardous locations. By applying an ML-based object detection algorithm, i.e., the YOLO detector, which was specifically trained with IR images, the subsystem could detect humans that are 100 m away. Additionally, the DDDAS-inspired multimodal IR and RF (MIRRF) system discriminates LOS friendly and non-friendly objects. The whole MIRRF sensor system meets the size, weight, power, and cost (SWaP-C) requirement of being installed on UAVs. In ground testing integrated with an all-terrain robot, the MIRRF sensor system demonstrated the capability of fast detection of humans, discrimination of friendly and non-friendly objects, and continuous tracking and geolocation of the objects of interest.

Peng Cheng, Xinping Lin, Yunqi Zhang, Erik Blasch, Genshe Chen
Learning Interacting Dynamic Systems with Neural Ordinary Differential Equations

Interacting Dynamic Systems refer to a group of agents which interact with others in a complex and dynamic way. Modeling Interacting Dynamic Systems is a crucial topic with numerous applications, such as in time series forecasting and physical simulations. To accurately model these systems, it is necessary to learn the temporal and relational dimensions jointly. However, previous methods have struggled to learn the temporal dimension explicitly because they often overlook the physical properties of the system. Furthermore, they often ignore the distance information in the relational dimensions. To address these limitations, we propose a Dynamic Data Driven Application Systems (DDDAS) approach called Interacting System Ordinary Differential Equations (ISODE). Our approach leverages the latent space of Neural ODEs to model the temporal dimensions explicitly and incorporates the distance information in the relational dimensions. Moreover, we demonstrate how our approach can dynamically update an agent’s trajectory when obstacles are introduced, without requiring retraining. Our experimental studies reveal that our ISODE DDDAS approach outperforms existing methods in prediction accuracy. We also illustrate that our approach can dynamically adapt to changes in the environment by showing our agent can dynamically avoid obstacles. Overall, our approach provides a promising solution to modeling Interacting Dynamic Systems that can capture the temporal and relational dimensions accurately.

Song Wen, Hao Wang, Dimitris Metaxas
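The PyTorch sketch below shows one way to realize the abstract's core idea: latent agent states evolved by a learned ODE right-hand side whose interaction term is weighted by pairwise distance. The network sizes, the inverse-distance weighting, and the fixed-step RK4 integrator are assumptions; the ISODE training procedure and obstacle handling are not reproduced.

```python
import torch
import torch.nn as nn

class InteractionODEFunc(nn.Module):
    """dz/dt for all agents; pairwise messages are weighted by inverse distance
    (an illustrative stand-in for the relational term in ISODE)."""
    def __init__(self, dim):
        super().__init__()
        self.self_dyn = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
        self.inter_dyn = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, z):                                     # z: (n_agents, dim); dims 0:2 = position
        n, d = z.shape
        dz = self.self_dyn(z)
        dist = torch.cdist(z[:, :2], z[:, :2]) + torch.eye(n)  # keep the diagonal non-zero
        w = (1.0 / dist) * (1.0 - torch.eye(n))                 # inverse-distance weights, no self-edges
        zi = z.unsqueeze(1).expand(n, n, d)                     # receiver states
        zj = z.unsqueeze(0).expand(n, n, d)                     # sender states
        msg = self.inter_dyn(torch.cat([zi, zj], dim=-1))       # (n, n, d) pairwise messages
        return dz + (w.unsqueeze(-1) * msg).sum(dim=1)

def rk4_step(f, z, dt):
    """One fixed-step Runge-Kutta 4 integration step of dz/dt = f(z)."""
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Roll out four agents with 4-dimensional latent states (untrained weights, demonstration only).
torch.manual_seed(0)
func = InteractionODEFunc(dim=4)
z = torch.randn(4, 4)
trajectory = [z]
with torch.no_grad():
    for _ in range(50):
        z = rk4_step(func, z, dt=0.05)
        trajectory.append(z)
print(torch.stack(trajectory).shape)                           # (51, 4, 4) predicted latent trajectory
```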
Relational Active Feature Elicitation for DDDAS

Dynamic Data Driven Applications Systems (DDDAS) utilize data augmentation for system performance. To enhance DDDAS systems with domain experts, there is a need for interactive and explainable active feature elicitation in relational domains in which a small subset of data is fully observed while the rest of the data is minimally observed. The goal is to identify the most informative set of entities for whom acquiring the relations would yield a more robust model. Assuming the presence of a human expert who can interactively score the relations, there is a need for an explainable model designed using the Feature Acquisition via Interaction in Relational domains (FAIR) algorithm. FAIR employs a relational tree-based distance metric to identify the most diverse set of relational examples (entities) to obtain more relational feature information for user refinement. The model that is learned iteratively is usable, interpretable, and explainable.

Nandini Ramanan, Phillip Odom, Erik Blasch, Kristian Kersting, Sriraam Natarajan
Explainable Human-in-the-Loop Dynamic Data-Driven Digital Twins

Digital Twins (DT) are essentially dynamic data-driven models that serve as real-time symbiotic “virtual replicas” of real-world systems. DT can leverage the fundamentals of Dynamic Data-Driven Applications Systems (DDDAS) bidirectional symbiotic sensing feedback loops for their continuous updates. Sensing loops can consequently steer measurement, analysis and reconfiguration aimed at more accurate modelling and analysis in DT. The reconfiguration decisions can be autonomous or interactive, keeping the human-in-the-loop. The trustworthiness of these decisions can be hindered by inadequate explainability of the rationale and of the utility gained in implementing the decision for the given situation among alternatives. Additionally, different decision-making algorithms and models have varying complexity and quality and can result in different utility gained for the model. The inadequacy of explainability can limit the extent to which humans can evaluate the decisions, often leading to updates which are unfit for the given situation or erroneous, compromising the overall accuracy of the model. The novel contribution of this paper is an approach to harnessing explainability in human-in-the-loop DDDAS and DT systems, leveraging bidirectional symbiotic sensing feedback. The approach utilises interpretable machine learning and goal modelling for explainability, and considers trade-off analysis of the utility gained. We use examples from smart warehousing to demonstrate the approach.

Nan Zhang, Rami Bahsoon, Nikos Tziritas, Georgios Theodoropoulos

Plenary Presentations - Tracking

Frontmatter
Transmission Censoring and Information Fusion for Communication-Efficient Distributed Nonlinear Filtering

A transmission censoring and information fusion approach is proposed for distributed nonlinear system state estimation in Dynamic Data Driven Applications Systems (DDDAS). In this approach, to conserve communication resources, based on the Jeffreys divergence between the prior and posterior probability density functions (PDFs) of the system state, only local posterior PDFs that are sufficiently different from their corresponding prior PDFs will be transmitted to a fusion center. To further reduce the communication cost, the local posterior PDFs are approximated by Gaussian mixtures, whose parameters are learned by an expectation-maximization algorithm. At the fusion center, the received PDFs will be fused via a generalized covariance intersection algorithm to obtain a global PDF. Numerical results for a multi-sensor radar target tracking example are provided to demonstrate the effectiveness of the proposed censoring approach.

Ruixin Niu
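A minimal numpy sketch of the censoring rule described in the abstract: transmit a local posterior only when its Jeffreys divergence from the prior exceeds a threshold, here computed in closed form for Gaussian approximations. The example pdfs and threshold are illustrative; the Gaussian-mixture compression and covariance-intersection fusion steps are not shown.

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL divergence between multivariate Gaussians N(mu0, S0) and N(mu1, S1)."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def jeffreys(mu0, S0, mu1, S1):
    """Symmetrized KL (Jeffreys) divergence used as the censoring statistic."""
    return kl_gauss(mu0, S0, mu1, S1) + kl_gauss(mu1, S1, mu0, S0)

# Local sensor: Gaussian approximations of the prior and posterior state pdfs
# (values are illustrative; in the paper these come from a nonlinear filter).
prior_mu, prior_cov = np.array([10.0, 1.0]), np.diag([4.0, 1.0])
post_mu,  post_cov  = np.array([11.2, 0.8]), np.diag([1.5, 0.6])

threshold = 1.0                      # censoring threshold (design parameter)
J = jeffreys(prior_mu, prior_cov, post_mu, post_cov)
if J > threshold:
    print(f"J = {J:.2f} > {threshold}: transmit the posterior (as a Gaussian mixture) to the fusion center")
else:
    print(f"J = {J:.2f} <= {threshold}: censor the transmission, posterior too close to the prior")
```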
Distributed Estimation of the Pelagic Scattering Layer Using a Buoyancy Controlled Robotic System

This paper formulates a strategy for Driftcam, an ocean-going robot system, to observe and track the motion of an ocean biological phenomenon called the pelagic scattering layer, which consists of organisms that migrate vertically in the water column once per day. Driftcam’s horizontal motion is determined by the flow field and the vertical motion is regulated by onboard buoyancy control. In order to observe the evolution of the scattering layer, an ensemble Kalman filter is applied to estimate organism density; the density dynamics are propagated using the Perron-Frobenius operator. Multiple Driftcam are subject to depth regulation by open-loop and closed-loop controllers; a control strategy is proposed to track the peak of the density. Numerical simulations illustrate the efficacy of this strategy and motivate ongoing and future efforts to design a coordination formation algorithm for a multi-agent Driftcam system to track the motion of the scattering layer, with implications for ocean monitoring.

Cong Wei, Derek A. Paley
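The numpy sketch below illustrates a generic ensemble Kalman filter analysis step of the kind the abstract applies to organism density, with a synthetic depth profile observed at a few Driftcam depths. The forecast step here is a placeholder for the Perron-Frobenius propagation, and the depth controllers are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n_depth, n_ens = 50, 100                    # depth bins, ensemble size
depths = np.linspace(0, 500, n_depth)       # metres

# Ensemble of organism-density profiles (this crude forecast stands in for the
# Perron-Frobenius propagation used in the paper).
truth = np.exp(-0.5 * ((depths - 300) / 40) ** 2)
ensemble = truth[:, None] + 0.3 * rng.normal(size=(n_depth, n_ens))

# Driftcam observes density at a few depth bins as it profiles vertically.
obs_idx = np.array([20, 30, 40])
H = np.zeros((len(obs_idx), n_depth))
H[np.arange(len(obs_idx)), obs_idx] = 1.0
R = 0.05 ** 2 * np.eye(len(obs_idx))
y = H @ truth + 0.05 * rng.normal(size=len(obs_idx))

# EnKF analysis step with perturbed observations.
A = ensemble - ensemble.mean(axis=1, keepdims=True)       # anomalies
P = A @ A.T / (n_ens - 1)                                 # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)              # Kalman gain
y_pert = y[:, None] + 0.05 * rng.normal(size=(len(obs_idx), n_ens))
analysis = ensemble + K @ (y_pert - H @ ensemble)

peak_depth = depths[np.argmax(analysis.mean(axis=1))]
print(f"estimated scattering-layer peak depth: {peak_depth:.0f} m")  # would guide the depth setpoint
```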
Towards a Data-Driven Bilinear Koopman Operator for Controlled Nonlinear Systems and Sensitivity Analysis

A Koopman operator is a linear operator that can describe the evolution of the dynamical states of any arbitrary uncontrolled dynamical system in a lifting space of infinite dimension. In practice, analysts consider a lifting space of finite dimension with a guarantee of gaining accuracy in the state prediction as the order of the operator increases. For controlled systems, a bilinear description of the Koopman operator is necessary to account for the external input. Additionally, bilinear state-space model identification is of interest for two main reasons: some physical systems are inherently bilinear, and bilinear models of high dimension can approximate a broad class of nonlinear systems. Nevertheless, no well-established technique for bilinear system identification is available yet, even less so in the context of Koopman operators. This paper offers perspectives on identifying a bilinear Koopman operator from data only. Firstly, a bilinear Koopman operator is introduced using subspace identification methods for the accurate prediction of controlled nonlinear systems. Secondly, the method is employed for sensitivity analysis of nonlinear systems, where it is desired to estimate the variation of a measured output given the deviation of a constitutive parameter of the system. The efficacy of the methods developed in this paper is demonstrated on two nonlinear systems of varying complexity.

Damien Guého, Puneet Singla
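As a simplified stand-in for the subspace identification the abstract describes, the sketch below fits a bilinear model in a lifted space by ordinary least squares (an EDMD-style regression) on data from an assumed controlled nonlinear system. The observable dictionary, the test system, and the regression route are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lift(x):
    """Dictionary of observables (illustrative choice: monomials up to degree 2)."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x2, x1**2, x2**2])

def dynamics(x, u, dt=0.05):
    """A controlled Duffing-like nonlinear system used to generate data."""
    x1, x2 = x
    dx = np.array([x2, -0.5 * x2 - x1 - x1**3 + u])
    return x + dt * dx

# Collect trajectories with random inputs.
Z, Zu, Znext = [], [], []
for _ in range(200):
    x = rng.uniform(-1, 1, 2)
    for _ in range(50):
        u = rng.uniform(-1, 1)
        xn = dynamics(x, u)
        z, zn = lift(x), lift(xn)
        Z.append(z); Zu.append(np.concatenate([[u], u * z])); Znext.append(zn)
        x = xn

Z, Zu, Znext = np.array(Z), np.array(Zu), np.array(Znext)
# Bilinear model in the lifted space: z+ = A z + B u + N (u * z).
Phi = np.hstack([Z, Zu])                                  # regressors [z, u, u*z]
Theta, *_ = np.linalg.lstsq(Phi, Znext, rcond=None)
A, B, N = Theta[:5].T, Theta[5:6].T, Theta[6:].T
print("one-step lifted prediction RMSE:",
      np.sqrt(np.mean((Phi @ Theta - Znext) ** 2)))
```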

Main-Track Plenary Presentations - Security

Frontmatter
Tracking Dynamic Gaussian Density with a Theoretically Optimal Sliding Window Approach

Dynamic density estimation is ubiquitous in many applications, including computer vision and signal processing. One popular method to tackle this problem is the “sliding window” kernel density estimator. There exist various implementations of this method that use heuristically defined weight sequences for the observed data. The weight sequence, however, is a key aspect of the estimator affecting the tracking performance significantly. In this work, we study the exact mean integrated squared error (MISE) of “sliding window” Gaussian Kernel Density Estimators for evolving Gaussian densities. We provide a principled guide for choosing the optimal weight sequence by theoretically characterizing the exact MISE, which can be formulated as constrained quadratic programming. We present empirical evidence with synthetic datasets to show that our weighting scheme indeed improves the tracking performance compared to heuristic approaches.

Yinsong Wang, Yu Ding, Shahin Shahrampour
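The sketch below sets up the weighted "sliding window" Gaussian kernel density estimator the abstract analyzes and compares two heuristic weight sequences on a drifting Gaussian stream. The drift rate, bandwidth, and window length are assumptions, and the MISE-optimal weights the paper obtains by quadratic programming are not computed here.

```python
import numpy as np

def weighted_kde(x_grid, samples, weights, bandwidth):
    """Weighted 'sliding window' Gaussian kernel density estimate."""
    w = weights / weights.sum()
    diffs = (x_grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels @ w

rng = np.random.default_rng(7)
window, bandwidth = 200, 0.3
x_grid = np.linspace(0, 10, 400)
dx = x_grid[1] - x_grid[0]

# Streaming data from a slowly drifting Gaussian density.
stream = [rng.normal(loc=0.005 * t, scale=1.0) for t in range(1000)]
recent = np.array(stream[-window:])

# Two heuristic weight sequences over the window (newest sample last);
# the paper instead derives the MISE-optimal sequence via quadratic programming.
uniform_w = np.ones(window)
exp_w = 0.98 ** np.arange(window)[::-1]

true_mean = 0.005 * 999                                 # current mean of the drifting density
true = np.exp(-0.5 * (x_grid - true_mean)**2) / np.sqrt(2 * np.pi)
for name, w in [("uniform", uniform_w), ("exponential", exp_w)]:
    est = weighted_kde(x_grid, recent, w, bandwidth)
    ise = np.sum((est - true)**2) * dx                   # integrated squared error snapshot
    print(f"{name:11s} weights, ISE ~ {ise:.4f}")
```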
Dynamic Data-Driven Digital Twins for Blockchain Systems

In recent years, we have seen an increase in the adoption of blockchain-based systems in non-financial applications, looking to benefit from what the technology has to offer. Although many fields have managed to include blockchain in their core functionalities, the adoption of blockchain, in general, is constrained by the so-called trilemma trade-off between decentralization, scalability, and security. In our previous work, we have shown that using a digital twin for dynamically managing blockchain systems during runtime can be effective in managing the trilemma trade-off. Our Digital Twin leverages the DDDAS feedback loop, which is responsible for getting the data from the system to the digital twin, conducting optimisation, and updating the physical system. This paper examines how leveraging the DDDAS feedback loop can support the optimisation component of the Digital Twin, benefiting from a Reinforcement Learning agent and a simulation component to augment the quality of the learned model while reducing the computational overhead required for decision making.

Georgios Diamantopoulos, Nikos Tziritas, Rami Bahsoon, Georgios Theodoropoulos
Adversarial Forecasting Through Adversarial Risk Analysis Within a DDDAS Framework

Forecasting methods typically assume clean and legitimate data streams. However, adversaries’ manipulation of digital data streams could alter the performance of forecasting algorithms and impact decision quality. In order to address such challenges, we propose a dynamic data driven application systems (DDDAS) based decision-making framework that includes an adversarial forecasting component. Our framework utilizes adversarial risk analysis principles that allow considering incomplete information and uncertainty. It is demonstrated using a load forecasting example. We solve the adversary’s decision problem, in which data are poisoned to alter the output of an autoregressive forecasting algorithm, and discuss defender strategies addressing the attack impact.

Tahir Ekin, Roi Naveiro, Jose Manuel Camacho Rodriguez
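A minimal numpy sketch of the setting in the abstract: an autoregressive load forecaster fit by least squares, with and without an additive poisoning of the recent training window. The load series and the fixed attack are illustrative assumptions; the adversarial risk analysis that models the attacker's decision under uncertainty is not implemented.

```python
import numpy as np

def fit_ar(y, p=2):
    """Least-squares fit of an AR(p) model y_t = c + a1*y_{t-1} + ... + ap*y_{t-p}."""
    X = np.column_stack([np.ones(len(y) - p)] + [y[p - i - 1:len(y) - i - 1] for i in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast(y, coef, p=2):
    """One-step-ahead forecast from the last p observations."""
    return coef[0] + coef[1:] @ y[-1:-p - 1:-1]

rng = np.random.default_rng(11)
t = np.arange(400)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, len(t))   # illustrative hourly load

# Attacker poisons the tail of the training stream to bias the forecast upward.
poisoned = load.copy()
poisoned[-48:] += 8.0                        # fixed additive manipulation (assumed attack)

clean_coef = fit_ar(load)
pois_coef = fit_ar(poisoned)
print(f"clean forecast:    {forecast(load, clean_coef):.1f} MW")
print(f"poisoned forecast: {forecast(poisoned, pois_coef):.1f} MW")
# The ARA step in the paper models the attacker's choice of manipulation under
# uncertainty about the defender, rather than the fixed shift assumed here.
```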

Main-Track Plenary Presentations - Distributed Systems

Frontmatter
Power Grid Resilience: Data Gaps for Data-Driven Disruption Analysis

A resilient and reliable power grid is crucial for energy security, sustainability, and reducing service restoration time and economic burdens to society. Electric utility companies and power system regulatory bodies rely on metrics based on tracked data to assess the power grid’s reliability and plan for future needs. While reliability and resilience are often used interchangeably, their distinction should be recognized. Reliability assumes the power grid is operating under standard system conditions; resilience requires a disruption to occur to be evaluated and measured. Historically, reliability standards have been tracked with standardized metrics and enforced with great oversight to plan for predicted contingencies. However, power grid resilience has not been tracked and metrics remain elusive for utilities and regulatory authorities needing to ensure system performance in the changing context of power grid operations and development. In this paper, we evaluate existing power grid data sources (the Department of Energy’s Electric Emergency Incident and Disturbance Events form, DOE-417, a mandatory form for electric utilities) and identify data gaps that adversely impact data-driven resilience analysis. We consider different event types and their system-wide implications to evaluate how the missing data can impact power grid resilience assessments.

Maureen S. Golan, Javad Mohammadi, Erika Ardiles Cruz, David Ferris, Philip Morrone
Attack-Resilient Cyber-Physical System State Estimation for Smart Grid Digital Twin Design

Before implementing the microgrid testbed and SCADA electricity monitoring systems, computer-aided tools can be used to design and validate technical specifications and performance. In this way, the system and product can be implemented digitally, reducing cost, time, and effort while visualizing the expected quality. In real time, designing and implementing a smart grid incorporating renewable microgrids is also a critical and challenging task due to the random generation patterns of foreseeable green energy. To solve this impending problem, the microgrid digital twin incorporating renewable distributed energy resources is designed using physical and governing laws, such as Kirchhoff’s laws, and input-output relationships. After modeling the distribution grid as a set of first-order differential equations, the microgrid digital framework is transformed into a compact state-space representation. Using a set of IoT sensors, measurements are collected from the distribution grid at common coupling points. Indeed, the increased rate of cyber-attacks on the smart grid communication network calls for innovative solutions to ensure its resiliency and operations. When the IoT sensing information is under cyber attack, designing an optimal smart grid state estimation algorithm that can tolerate false data injection attacks is a crucial task for energy management systems. To address the aforementioned issue, this article proposes a physics-informed optimal grid state estimation approach. The simulation results demonstrate improved grid state estimation accuracy and computational efficiency compared to the traditional method. The availability of a smart grid digital twin model can assist in monitoring the grid status, which is a precursor for controller design to regulate grid voltage at common coupling points.

M. Rana, S. Shetty, Alex Aved, Erika Ardiles Cruz, David Ferris, Philip Morrone
Applying DDDAS Principles for Realizing Optimized and Robust Deep Learning Models at the Edge

Edge computing is an attractive avenue to support low-latency applications including those that leverage deep learning (DL)-based model inferencing. Due to constraints on compute, storage and power at the edge, however, these DL models must be quantized to reduce their footprint while minimizing loss of accuracy. However, DL models and their quantized equivalents are often prone to adversarial attacks requiring them to be made robust against such attacks. The resource constraints at the edge, however, preclude any quantization and robustness design operations directly at the edge. Moreover, the changing dynamics of edge-based computations and resulting concept drifts in the models require an iterative approach to meet the needs of robust DL models at the edge. To address these challenges, this paper presents initial results on an iterative procedure involving a DDDAS feedback loop. DDDAS is used to dynamically instrument the edge-deployed, quantized DL models for data on the effectiveness of their quantization and robustness abilities, which in turn is used to drive an automated, cloud-based process that uses tools, such as Apache TVM, to generate quantized, optimized and robust DL models suitable for the edge. These models subsequently are automatically deployed at the edge using orchestration tools. Preliminary studies using this approach have shown its effectiveness in image classification and object detection applications.

Robert Canady, Xingyu Zhou, Yogesh Barve, Daniel Balasubramanian, Aniruddha Gokhale
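As a small illustration of the edge-side quantization the abstract discusses, the sketch below applies post-training dynamic int8 quantization to a toy PyTorch model and measures output drift, one of the metrics a DDDAS feedback loop could monitor. The model, the use of PyTorch rather than the Apache TVM toolchain, and the drift metric are assumptions, not the paper's pipeline.

```python
import torch
import torch.nn as nn

# A small stand-in classifier; the paper's edge models are larger vision networks
# compiled with tools such as Apache TVM, which is not reproduced here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Post-training dynamic quantization of the linear layers to int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(16, 1, 28, 28)
with torch.no_grad():
    drift = (model(x) - quantized(x)).abs().max().item()

def size_mb(m):
    return sum(p.numel() * p.element_size() for p in m.parameters()) / 1e6

print(f"fp32 size: {size_mb(model):.2f} MB, max output drift after quantization: {drift:.4f}")
# In the DDDAS loop described above, such accuracy-drift and robustness metrics gathered at
# the edge would be fed back to the cloud to trigger re-quantization or re-training.
```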

Main-Track: Keynotes

Frontmatter
DDDAS2022 Keynotes - Overview

The DDDAS2022 Conference featured five keynote presentations and an invited talk, which addressed important science and technology topics and provided examples of advances in capabilities enabled or supported by DDDAS-based methods. The presentations covered a range of areas such as aerospace systems, cyber-security, bio-informatics and genomics, and adverse environmental events. Together with the present overview, papers contributed by the keynote speakers are included in these proceedings. In addition, the keynotes’ slides are available at www.1dddas.org.

Frederica Darema, Erik Blasch
DDDAS for Systems Analytics in Applied Mechanics

This contribution comprises two parts. In the first part we provide an overview of the Dynamically Data-Driven Applications Systems (DDDAS) concept, with particular emphasis on the analytics of systems coming from the field of Applied Mechanics and focusing on applications to aerospace structures. Aerospace composite materials and structures exhibit a strong multiscale behavior, which necessitates the development of a multiscale DDDAS framework wherein measurements and models interact at all the relevant spatial scales of the system of interest to maximize the resulting predictive power. We present a large-scale structural system example where the combination of dynamic data and advanced models is needed to be truly predictive. In the second part we examine Neural Network (NN)-based data-driven approaches for systems analytics in applied mechanics, in particular the Physics-Informed Neural Networks (PINNs) framework. The main idea of PINNs is to compensate for the lack of a sufficient volume of measured data by forcing the system to obey the laws of physics expressed in the form of boundary-value problems (BVPs) based on partial differential equations (PDEs). A distinguishing feature of PINNs is that the discretization of a BVP does not make use of traditional methods, but rather NNs themselves. We focus on the ability of the approaches, incorporating NNs (as a tool) into DDDAS, to model large-deformation elastoplastic behavior of solids and structures so that they can be seamlessly integrated into structural systems analytics and beyond.

A. Korobenko, S. Niu, X. Deng, E. Zhang, V. Srivastava, Y. Bazilevs
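The second part of the keynote concerns PINNs; the PyTorch sketch below shows the idea on a deliberately simple 1-D boundary-value problem with a known exact solution, using a physics residual plus a boundary penalty as the loss. The network size, penalty weight, and test problem are illustrative assumptions far removed from the large-deformation elastoplastic problems discussed in the talk.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# PINN for the 1-D BVP u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0, whose exact
# solution is u(x) = sin(pi x). The network itself discretizes the BVP.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_col = torch.rand(200, 1, requires_grad=True)          # collocation points in (0, 1)
x_bc = torch.tensor([[0.0], [1.0]])                     # boundary points

for step in range(3000):
    opt.zero_grad()
    u = net(x_col)
    du = torch.autograd.grad(u, x_col, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_col, torch.ones_like(du), create_graph=True)[0]
    pde_residual = d2u + torch.pi**2 * torch.sin(torch.pi * x_col)
    loss = (pde_residual**2).mean() + 10.0 * (net(x_bc)**2).mean()   # physics loss + BC penalty
    loss.backward()
    opt.step()

x_test = torch.linspace(0, 1, 101).reshape(-1, 1)
err = (net(x_test) - torch.sin(torch.pi * x_test)).abs().max()
print(f"max abs error vs. exact solution: {err.item():.3e}")
```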
Towards Continual Unsupervised Data Driven Adaptive Learning

Domain Adaptation (DA) techniques are important for overcoming the domain shift between the training dataset (called source domain) and the testing dataset (called target domain). Standard DA methods assume that the entire target domain is available during adaptation, but this assumption is often violated in practice. We consider DA in a data constrained scenario, where target data become available in small batches over time, and adaptation takes place continually. Hence, continual DA is a framework to instantiate the Dynamic Data Driven Applications Systems (DDDAS) paradigm, wherein a model is developed from the data available to discern the relevant features, and subsequently when the model is deployed, it needs to be adapted (i.e., through a learning process) from the new real-world data. We discuss a novel source-free method for Continual Domain Adaptation (ConDA) that utilizes a buffer for selective replay of previously seen samples. In our unsupervised adaptation framework, we selectively mix samples from incoming batches with data stored in a buffer and use them to adapt our model as new batches are received. Our results using ConDA demonstrate the benefits of our framework when operating in data constrained environments.

Andreas Savakis

Main-Track: Wildfires Panel

Frontmatter
Overview of DDDAS2022 Panel on Wildfires

The DDDAS2022 Conference featured a Panel on Wildfires, which addressed DDDAS-based modeling and instrumentation methods for the detection and prediction of the onset and propagation of wildfires and of smoke generation and spread, as well as the use of such information for containment of the wildfire; the DDDAS-based wildfire methods also support other emergency response actions, infrastructure safety, and the evacuation of humans out of harm's way from the fire and smoke. Together with the present overview, papers contributed by the panelists are included in this part of the proceedings. In addition, the panelists’ slides and presentation recordings are available at www.1dddas.org.

Frederica Darema
Using Dynamic Data Driven Cyberinfrastructure for Next Generation Disaster Intelligence

Wildfires and related disasters are increasing globally, making highly destructive megafires a part of our lives more frequently. A common observation across these large events is that fire behavior is changing, making applied data-driven fire research more important and time critical. Significant improvements towards modeling wildland fires and the dynamics of fire-related environmental hazards and socio-economic impacts can be made through intelligent integration of modern data and computing technologies with techniques for data management, machine learning and artificial intelligence. However, there are many challenges and opportunities in the integration of the scientific discoveries and data-driven methods for hazards with the advances in technology and computing in a way that provides and enables different modalities of sensing and computing. The WIFIRE cyberinfrastructure took the first steps to tackle this problem with a goal to create an integrated infrastructure, data and visualization services, and workflows for wildfire mitigation, monitoring, simulation, and response. Today, WIFIRE provides an end-to-end management infrastructure from data sensing and collection to artificial intelligence and dynamic data-driven modeling efforts using a continuum of computing methods that integrate edge, cloud, and high-performance computing. Through this cyberinfrastructure, the WIFIRE project provides data-driven knowledge for a wide range of public and private sector users, enabling scientific, municipal, and educational use. This paper summarizes the talk reviewing our recent work on building this dynamic data driven cyberinfrastructure and impactful application solution architectures that showcase the integration of a variety of existing technologies and collaborative expertise.

Ilkay Altintas
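
The hallmark pattern such a cyberinfrastructure orchestrates is a closed loop in which incoming observations update a running model and the model, in turn, steers where to sense next. The sketch below illustrates that loop in a few lines; the grid model, the uncertainty heuristic, and all function names are invented for illustration and are not part of the WIFIRE software stack.

```python
# Toy sensing-to-modeling feedback loop in the DDDAS style: observations
# refine a running fire-spread estimate, and model uncertainty decides
# where to task the next sensor. Entirely illustrative, not WIFIRE code.
import numpy as np

rng = np.random.default_rng(42)
GRID = (20, 20)

fire_estimate = np.zeros(GRID)          # model's current belief of fire intensity
uncertainty = np.ones(GRID)             # higher = less confident, worth sensing

def acquire_observation(cell):
    """Stand-in for tasking an edge sensor or camera at a grid cell."""
    return rng.uniform(0.0, 1.0)

def assimilate(cell, value, gain=0.6):
    """Fold the observation into the estimate and shrink local uncertainty."""
    fire_estimate[cell] += gain * (value - fire_estimate[cell])
    uncertainty[cell] *= (1.0 - gain)

def propagate():
    """Cheap surrogate for one fire-spread step: diffuse the estimate to
    neighboring cells and let uncertainty grow slightly."""
    global fire_estimate, uncertainty
    padded = np.pad(fire_estimate, 1, mode="edge")
    neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    fire_estimate = 0.9 * fire_estimate + 0.1 * neighbors
    uncertainty = np.minimum(uncertainty * 1.05, 1.0)

for step in range(50):
    propagate()
    # Feedback: sense where the model is least certain (simple DDDAS steering).
    target = np.unravel_index(np.argmax(uncertainty), GRID)
    assimilate(target, acquire_observation(target))
```
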
Autonomous Unmanned Aerial Vehicle Systems in Wildfire Detection and Management-Challenges and Opportunities

Wildfires are among the costliest and deadliest natural disasters in the United States, particularly in the Western USA. Wildfire frequency and severity are increasing due to climate change and urban sprawl. Detecting forest fires at early stages enhances the chance of timely intervention and efficient fire management and evacuation strategies. Forest fires are commonly detected by sensor systems that are not widely available across the nation, or by using satellite images that offer global coverage but suffer from low temporal and spatial resolution, resulting in forest fires being missed in their early stages. Unmanned aerial systems (UAS) have recently been utilized for this purpose, given their accessibility and low deployment cost. In this paper, we present some recent advances in drone-based fire detection and discuss current challenges toward wide-scale deployment.

Fatemeh Afghah
Role of Autonomous Unmanned Aerial Systems in Prescribed Burn Projects

This short paper describes the potential for use of autonomous multi-UAS teams during all stages of a prescribed burn. The current state of practice of prescribed burning is labor intensive and based on numerous simplifying assumptions. UAS teams promise to increase efficiency and effectiveness, while creating the opportunity to develop new science related to fire behavior. Ingrained in the proposed UAS mission profiles is bi-directional feedback between sensing and computational components at multiple timescales, which is a hallmark of the Dynamic Data Driven Applications Systems (DDDAS) framework.

Mrinal Kumar, Roger Williams, Amit Sanyal
Towards a Dynamic Data Driven Wildfire Digital Twin (WDT): Impacts on Deforestation, Air Quality and Cardiopulmonary Disease

Recent persistent droughts and extreme heatwave events over the Western states of the US and Canada are creating highly favorable conditions for mega wildfires. The Intergovernmental Panel on Climate Change (IPCC) AR6 report suggests that such extreme events will continue occurring with increasing frequency and intensity over forested regions globally. While human-generated fires for farming in the Amazon are at a potential tipping point, wildfires in the Northern Hemisphere are comparably generating broad regions of deforestation. The smoke from recent mega wildfires in California, driven by the atmospheric and fuel conditions controlling their intensity, has been observed to penetrate the planetary boundary layer, stay in the atmosphere for a long time, and travel long distances. The wildfire smoke from such events has the potential to reach distant cities and towns over the Eastern US, significantly reducing the air quality of these distant communities, and to adversely impact human health by increasing COVID-19 morbidity as well as the number of respiratory and smoke-related heart diseases.

In this paper, we apply the concepts of a dynamic data-driven wildfire system to implement a real-time Wildfire Digital Twin (WDT) simulation at sub-km resolution, enabling the study of mega wildfire smoke impact scenarios at various locations distant from the occurring wildfires over western N. America. The WDT provides a valuable planning tool for implementing parameter impact scenarios by season, location, intensity, and atmospheric state. We augment the NASA Unified WRF (NUWRF) model with a dynamic fire spread parameterization (SFIRE) coupled to GOCART, CHEM, and HRRR5 physics. We implement a data-driven, near-time continuous assimilation scheme for ingesting and assimilating observations from the NOAA satellite instruments VIIRS and ABI, and from a streaming sensor web of radars, ceilometers, and satellite lidar observational systems, into the nested regional NUWRF model. We accelerate the high-resolution nested NUWRF model performance to make it suitable for forecasting applications by emulating the WRF microphysics and GOCART parameterizations with a deep dense transform machine learning neural net architecture, FourCastNet, that can maintain a simulated hourly atmospheric forecast in seconds. The WDT can also model the development of data-driven smoke plumes and track the smoke across the US as it penetrates the planetary boundary layer, subsequently increasing surface PM2.5. The SFIRE fire-spread and plume interaction with the atmosphere is a unique contribution of the WDT, fully enabling the interaction of smoke aerosols with observed clouds, microphysics precipitation, convection, and GOCART chemistry, which is currently unavailable in other fire and smoke forecasting models.

M. Halem, A. K. Kochanski, J. Mandel, J. Sleeman, B. Demoz, A. Bargteil, S. Patil, S. Shivadekar, A. Iorga, J. Dorband, J. Mackinnon, S. Chiao, Z. Yang, Ya. Yesha, J. Sorkin, E. Kalnay, S. Safa, C. Da
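
One pattern at the core of this abstract, running a learned emulator in place of an expensive parameterization inside the forecast loop while continually folding in observations, can be sketched compactly. The linear emulator and the nudging-style assimilation below are deliberately simplistic placeholders; they are not the NUWRF, SFIRE, or FourCastNet components themselves.

```python
# Toy surrogate-in-the-loop forecast with crude observation nudging.
# All names and the nudging scheme are illustrative assumptions.
import numpy as np

def expensive_physics(state):
    """Stand-in for a costly microphysics/chemistry parameterization."""
    return 0.1 * np.tanh(state)

class LinearEmulator:
    """A trivially simple learned surrogate, fit offline by least squares."""
    def __init__(self, states, tendencies):
        X = np.hstack([states, np.ones((states.shape[0], 1))])  # add bias term
        self.W, *_ = np.linalg.lstsq(X, tendencies, rcond=None)

    def __call__(self, state):
        return np.append(state, 1.0) @ self.W

def nudge(state, observation, weight=0.3):
    """Relax the model state toward an observation (crude assimilation)."""
    return (1 - weight) * state + weight * observation

# Offline: train the emulator on sampled (state, tendency) pairs.
rng = np.random.default_rng(0)
train_states = rng.normal(size=(2000, 8))
train_tendencies = np.array([expensive_physics(s) for s in train_states])
emulator = LinearEmulator(train_states, train_tendencies)

# Online: cheap forecast loop with periodic assimilation of "observations".
state = rng.normal(size=8)
for step in range(24):
    state = state + emulator(state)          # surrogate replaces the physics call
    if step % 6 == 0:                        # observations arrive every few steps
        obs = state + rng.normal(scale=0.05, size=8)
        state = nudge(state, obs)
```
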
An Earth System Digital Twin for Air Quality Analysis

Severe drought and wildfires have become grim indicators of the severity of the weather and climate extremes our planet is increasingly facing. The latest Intergovernmental Panel on Climate Change (IPCC) report describes the unprecedented rate at which the global climate has warmed in the last 200 years (UN News 2021), resulting in increasing ocean temperatures, rising sea levels, intensifying rains and floods, new records for heatwaves and droughts, and ever-growing stress on freshwater availability. The growing collections of multi-disciplinary, high-resolution spatiotemporal data require us to be smarter and more automated about scalable analytic frameworks and about what data to incorporate into an analysis. This paper presents an open-source Earth System Digital Twin (ESDT) architecture being developed at NASA's Jet Propulsion Laboratory (JPL) for integrated air quality analysis during wildfires.

Thomas Huang, Sina Hasheminassab, Olga Kalashnikova, Kyo Lee, Joe Roberts

Workshop on Climate, Life, Earth, Planets

Frontmatter
Dynamic Data-Driven Downscaling to Quantify Extreme Rainfall and Flood Loss Risk

The adverse socio-economic effects of natural hazards will likely worsen under climate change. Modeling their risk is essential to developing effective adaptation and mitigation strategies. However, climate models typically do not resolve the detailed information that risk quantification demands. Here, we propose a dynamic data-driven approach to estimate extreme rainfall-induced flood-loss risk. In this approach, coarse-resolution climate model outputs ($$0.25^{\circ} \times 0.25^{\circ}$$) are downscaled to high-resolution ($$0.01^{\circ} \times 0.01^{\circ}$$) rainfall. The downscaled rainfall, historical insurance losses provided by the Federal Emergency Management Agency (FEMA), and other geographic data then train a flood-loss model. Our approach shows promise for quantifying flood-loss risk, achieving a weighted average of $$R^2 = 0.917$$ for Cook County, Illinois, USA.

Anamitra Saha, Joaquin Salas, Sai Ravela
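
The two-stage pipeline outlined above (downscale coarse rainfall to a fine grid, then regress losses on rainfall plus geographic covariates) can be sketched as follows. The bilinear downscaler, the gradient-boosting loss model, and the synthetic data are placeholder choices for illustration, not the paper's actual models or the FEMA dataset.

```python
# Schematic two-stage pipeline: (1) downscale coarse rainfall, (2) fit a
# flood-loss regressor on rainfall plus geographic covariates.
# Downscaler, regressor, and data are illustrative stand-ins.
import numpy as np
from scipy.ndimage import zoom
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Stage 1: downscale a coarse rainfall field. A 0.25-degree to 0.01-degree
# grid is a 25x refinement; bilinear interpolation stands in here for the
# learned downscaling model.
coarse_rain = rng.gamma(shape=2.0, scale=5.0, size=(4, 4))     # coarse grid cells
fine_rain = zoom(coarse_rain, zoom=25, order=1)                # (100, 100) fine grid

# Stage 2: train a flood-loss model on per-cell features.
n_cells = fine_rain.size
elevation = rng.normal(loc=180.0, scale=20.0, size=n_cells)    # synthetic covariates
imperviousness = rng.uniform(0.0, 1.0, size=n_cells)
features = np.column_stack([fine_rain.ravel(), elevation, imperviousness])

# Synthetic "historical loss" target standing in for insurance claims data.
losses = (0.8 * fine_rain.ravel() + 30.0 * imperviousness
          - 0.05 * (elevation - 180.0) + rng.normal(scale=2.0, size=n_cells))

model = GradientBoostingRegressor().fit(features, losses)
print("in-sample R^2:", model.score(features, losses))
```
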
Backmatter
Metadata
Title
Dynamic Data Driven Applications Systems
Edited by
Erik Blasch
Frederica Darema
Alex Aved
Copyright year
2024
Electronic ISBN
978-3-031-52670-1
Print ISBN
978-3-031-52669-5
DOI
https://doi.org/10.1007/978-3-031-52670-1
