Research

I spend a significant portion of my time on research in data-informed soil mechanics. This work is often multi-disciplinary, combining computational physics, machine learning, statistics and soil mechanics. Most of my current work is targeted at improving the predictions for the engineering department at DEME. My published research can be found at ResearchGate.

Geotechnical data structures, cleaning and exposing

Data engineering is a major challenge in any effort to valorize the available data in a business. This is certainly the case for civil and geotechnical engineering. One major factor is that the sector is relatively slow to adopt new technologies. The data is often not available in a structured way, and it is often not clean. A significant part of my work is to set up an ingestion and cleaning pipeline for geotechnical data, and to create a scalable and uniform data structure in a geospatial database, ready to share on a platform and use for analysis and predictions.

The challenge starts at the point of data acquisition. Geotechnical data is often very heterogeneous and varies across projects and scopes. There is a wide (and growing) range of soil investigation techniques, both in the field and in the lab, very valuable monitoring data, and an increasing amount of operational data logged during execution of a project. In addition, there is a lot of context that can be very relevant to interpret data and model the conditions correctly; in other words, a lot of valuable metadata. Significant efforts have been made to set up a unified structure for soil investigation results with the AGS data format, but in practice we observe that the format is often not used properly, or not at all. Still, the format is a major step forward and keeps improving, including scripts to process the data and run quality checks. However, this only covers part of the data sources. We are working on ways to ingest and map all high-value geotechnical data and to set up automated quality control and orchestration of the cleaning process.
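
To make this concrete, the sketch below shows the kind of automated quality check that can run during ingestion. It assumes cone penetration test (CPT) data already parsed into a pandas DataFrame (e.g. from an AGS file); the column names and thresholds are hypothetical, not our actual schema.

```python
import pandas as pd

def check_cpt_quality(cpt: pd.DataFrame) -> list[str]:
    """Run basic quality checks on a parsed CPT table.

    Assumes (hypothetical) columns: 'depth' [m] and 'qc' [MPa].
    Returns a list of human-readable issues; empty means the record passes.
    """
    issues = []
    if cpt["depth"].isna().any() or cpt["qc"].isna().any():
        issues.append("missing depth or cone resistance values")
    if not cpt["depth"].is_monotonic_increasing:
        issues.append("depth is not monotonically increasing")
    if (cpt["qc"] < 0).any():
        issues.append("negative cone resistance")
    if (cpt["qc"] > 120).any():  # implausibly high qc, likely a unit error
        issues.append("cone resistance above plausible range (unit error?)")
    return issues

# Example: a record that should be flagged for its negative qc value
cpt = pd.DataFrame({"depth": [0.0, 0.02, 0.04], "qc": [0.1, -0.2, 0.3]})
print(check_cpt_quality(cpt))  # ['negative cone resistance']
```

Checks like these are cheap to run on every ingested file and make the cleaning process auditable.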

The final goal is to create a geotechnical data platform that enables advanced analytics and better geotechnical modelling and prediction. In terms of data engineering, this means not only a good ingestion and cleaning pipeline, but also user-friendly and reproducible tools for engineers. I am currently working with containerised solutions in web applications to lower the barrier to entry for civil and geotechnical engineers.
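
As an illustration of the geospatial side, the snippet below stores a test location as a point geometry in PostGIS. The table and column names are hypothetical, but ST_SetSRID and ST_MakePoint are standard PostGIS functions.

```python
import psycopg2

conn = psycopg2.connect("dbname=geotech user=engineer")  # hypothetical connection
with conn, conn.cursor() as cur:
    # One row per soil investigation, with a geographic point (WGS84)
    cur.execute(
        """
        INSERT INTO soil_tests (test_id, test_type, geom)
        VALUES (%s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
        """,
        ("CPT-001", "CPT", 4.40, 51.22),  # lon, lat (illustrative values)
    )
conn.close()
```

Storing geometries natively means spatial queries, such as selecting all tests within a project polygon, come for free.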

DevOps

Building models and data processing tools in code, and moving away from ad-hoc Excel-based work, requires more than good coding alone. You also need proper governance of the development process: version control, CI/CD pipelines, modular and reusable code in repositories, and a deployment strategy; in short, DevOps. I learned this the hard way, as I started out with ad-hoc Jupyter notebooks that I had to migrate along the way, with some unnecessary cleaning and re-organisation. A great inspiration for me is The Phoenix Project and its associated resource, The DevOps Handbook.
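
A small example of what this shift looks like in practice: logic that once lived in a notebook cell becomes a tested function in a package, so the CI pipeline can verify it on every commit. The function below is a hypothetical illustration, not code from my actual repositories.

```python
# soilmech/stress.py -- a notebook cell promoted to a reusable, tested module
def vertical_effective_stress(depth: float, gamma_sat: float = 18.0,
                              gamma_w: float = 9.81) -> float:
    """Effective vertical stress [kPa] at `depth` [m] below a water table
    at the surface, for a homogeneous saturated soil column."""
    if depth < 0:
        raise ValueError("depth must be non-negative")
    return (gamma_sat - gamma_w) * depth

# tests/test_stress.py -- run by pytest in the CI/CD pipeline
def test_vertical_effective_stress():
    assert vertical_effective_stress(0.0) == 0.0
    assert abs(vertical_effective_stress(10.0) - 81.9) < 1e-9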

Visco-plasticity and large strain modelling

I spent most of my time during the last decade working on predictions of the behaviour of land reclamations realised with soft soils. Reusing soft soils can contribute significantly to the sustainability of the newly created land and reduce the overall cost of the project through more efficient use of the dredging effort. On the other hand, building on soft soils comes with challenges related to stability and safe working conditions, long-term changes in the soil due to slow consolidation, and higher uncertainty about the long-term performance of the land. Some of these challenges were the main focus of my PhD research: modelling the soil from a very low-density slurry to an improved soil with better creep properties.

Large strain modelling

One technique to reclaim land is to hydraulically dredge and pump soft soils into a containment. The soil is fully disintegrated and mixed with water to a very low density. Over time the soil particles settle, quickly hindered by other particles, up to the point where the particles come fully into contact and densify further by expelling water; i.e. consolidation. Very large deformations occur at this stage. To model this process numerically, a large-strain formulation is required and specific lab tests have to be conducted. In my research I set up a large 1D compression test to evaluate the viscoplastic response at the early stages of consolidation. The viscous behaviour at that stage is observed to be significantly larger than at higher densities. By combining viscoplasticity with large-strain consolidation, more reliable simulations of slurry behaviour under laboratory conditions were obtained.
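
For reference, a commonly used governing equation for this regime is the large-strain consolidation equation of Gibson, England & Hussey (1967), written here in terms of the void ratio e and the reduced (material) coordinate z; sign conventions vary between authors.

```latex
\pm\left(\frac{\gamma_s}{\gamma_w}-1\right)
\frac{\mathrm{d}}{\mathrm{d}e}\!\left[\frac{k(e)}{1+e}\right]\frac{\partial e}{\partial z}
+\frac{\partial}{\partial z}\!\left[\frac{k(e)}{\gamma_w\,(1+e)}\,
\frac{\mathrm{d}\sigma'}{\mathrm{d}e}\,\frac{\partial e}{\partial z}\right]
+\frac{\partial e}{\partial t}=0
```

The constitutive inputs k(e) and σ′(e) have to be characterised experimentally at these low densities; adding viscoplasticity makes the effective stress relation rate-dependent as well.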

To create reliable predictions in the field, some additional features are necessary in the simulations. In lab conditions, we typically control the total mass of soil up front to evaluate how it responds to different stresses. In the field, the soil mass changes all the time, as the system is not fully closed during dredging and reclamation: a soil-water mixture enters the system and consolidation water is drained off. A reliable model needs to account for these changes by updating the mesh over time. To this end, I developed an extension of the large-strain model that considers these conditions and is amenable to calibration with field monitoring data.
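
A minimal sketch of the bookkeeping involved, assuming a material (Lagrangian) mesh where filling adds solid mass at the top; the function name and the simple linear fill schedule are hypothetical.

```python
import numpy as np

def grow_mesh(solid_coords: np.ndarray, fill_rate: float, dt: float) -> np.ndarray:
    """Append newly deposited solids to a 1D material mesh.

    `solid_coords` holds cumulative solid height per node [m of solids];
    `fill_rate` is the solid deposition rate [m/day], `dt` the step [days].
    """
    new_top = solid_coords[-1] + fill_rate * dt
    return np.append(solid_coords, new_top)

# Filling phase: the domain itself grows, unlike in a closed lab test
mesh = np.linspace(0.0, 1.0, 11)          # initial solid column of 1 m
for _ in range(30):                        # 30 days of filling
    mesh = grow_mesh(mesh, fill_rate=0.05, dt=1.0)
print(f"solid height after filling: {mesh[-1]:.2f} m")  # 2.50 m
```

In the full model, the consolidation solver runs on this growing mesh, while drained consolidation water shows up as a change in void ratio rather than in solid mass.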

Long term creep after preloading

When sufficiently consolidated soft soil is loaded and, after consolidation, unloaded again, the soil swells. The swelling has a primary component due to changes in effective stress (i.e. an elastic response), followed by viscous swelling under constant effective stress. After a period of time, creep (i.e. viscous compression) reappears, depending on the amount of unloading relative to the initial load (the overconsolidation ratio). Most models focus on the elasto-plastic behaviour and on the long-term creep, while viscous swelling is often ignored. Some models do enable viscous swelling, but treat viscous swelling and viscous compression as mutually exclusive under a given stress state. I have developed a new model that allows the simulation of subsequent swelling and creep after preloading.
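
As a point of reference for models of this kind, one widely used isotache-type relation is the volumetric creep rate of the Soft Soil Creep model (Vermeer & Neher, 1999), which illustrates why creep only reappears slowly after unloading:

```latex
\dot{\varepsilon}^{\,c}_{v} \;=\; \frac{\mu^{*}}{\tau}
\left(\frac{p^{\mathrm{eq}}}{p_{p}}\right)^{\frac{\lambda^{*}-\kappa^{*}}{\mu^{*}}}
\;=\; \frac{\mu^{*}}{\tau}\,\mathrm{OCR}^{-\frac{\lambda^{*}-\kappa^{*}}{\mu^{*}}}
```

Here λ*, κ* and μ* are the modified compression, swelling and creep indices, τ is a reference time, and OCR = p_p / p^eq. Because the exponent (λ* − κ*)/μ* is typically large, even modest unloading suppresses creep for a long time, consistent with the delayed reappearance of creep described above; relations of this type do not, however, capture the intermediate viscous swelling phase.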

Probabilistic modelling

Traditionally, geotechnical design is done in a deterministic fashion. Starting from a set of characteristic design parameters, the design is performed by iteratively simulating the soil response, with application of (partial or global) safety factors. There is, however, a strong trend towards explicitly considering soil variability.

The goal is to evolve to a probabilistic approach in soil modelling, where the design is performed by sampling from the probability distribution of the soil characteristics. This starts with mapping all the uncertainties. While that is relatively straightforward for some measured variables, quantifying the (significant) uncertainty of the model itself and the reliability of empirical parameters is an ongoing challenge. Ideally, we would be able to quantify that model uncertainty for a specific geology and application.
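
A minimal sketch of what sampling-based design looks like, assuming a lognormal distribution for undrained shear strength and a deliberately simplified bearing-capacity check; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain soil parameter: undrained shear strength s_u [kPa], lognormal
mean_su, cov_su = 40.0, 0.3                  # mean and coefficient of variation
sigma_ln = np.sqrt(np.log(1 + cov_su**2))
mu_ln = np.log(mean_su) - 0.5 * sigma_ln**2
s_u = rng.lognormal(mu_ln, sigma_ln, n)

# Simplified undrained bearing capacity vs. applied pressure (illustrative)
q_ult = (2 + np.pi) * s_u                    # Prandtl: q_ult = N_c * s_u, N_c = 5.14
q_applied = 120.0                            # [kPa]
p_failure = np.mean(q_ult < q_applied)
print(f"probability of failure: {p_failure:.4f}")
```

Instead of a single factored check, the design decision is then based on the full distribution of outcomes; the hard part, as noted above, is assigning distributions to model and empirical-parameter uncertainty, not just to measured variables.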

The fields that I mostly work on are:

Improving physics-based models through feedback

In civil, geotechnical and marine engineering works, there is great potential to collect operational logging data, combine it with site investigations and monitoring, and use it to validate and improve predictive models. It is helpful to think of this in a Bayesian framework: we can generate a priori predictions of a certain physical process based on the initial data available (boundary conditions, some site investigation, a priori assumptions, etc.), possibly improve this prediction by updating with additional testing, and finally create a posterior prediction with feedback measurements.
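
The sketch below illustrates this prior-to-posterior loop for a single model parameter, using a simple grid-based Bayes update with Gaussian measurement error; the parameter (a creep coefficient), the forward model and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

# Prior on a creep coefficient C_alpha, from experience with similar geologies
grid = np.linspace(0.001, 0.05, 500)
prior = stats.norm.pdf(grid, loc=0.02, scale=0.008)

def predicted_settlement(c_alpha):
    """A priori model: settlement [m] after 1 year (deliberately simplified)."""
    return 1.5 * c_alpha / 0.02

# Feedback measurement from field monitoring, with measurement noise
measured, noise_sd = 1.8, 0.15               # [m]
likelihood = stats.norm.pdf(measured, loc=predicted_settlement(grid), scale=noise_sd)

posterior = prior * likelihood
posterior /= posterior.sum() * (grid[1] - grid[0])   # normalise over the grid
map_estimate = grid[np.argmax(posterior)]
print(f"posterior mode for C_alpha: {map_estimate:.4f}")
```

The same structure scales up to real models: the forward model becomes more expensive and the inference moves to MCMC or ensemble methods, but the prior-likelihood-posterior loop is unchanged.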

This process can inform us about the informative value of the available data and the reliability of the model. With sufficient feedback data, we can iterate over models, adding more flexibility to create a better fit. There is a continuum between white-box models with clear rules (i.e. encoding physical laws) and black-box models that are fully data-driven. For geotechnical problems, I believe most success can be achieved by working on data-informed, physics-based models that are continuously updated and improved by data, while maintaining a steady physical basis. This kind of hybrid modelling helps to avoid overfitting and extends the model's applicability beyond the trained feature space.

One way to achieve gradually increasing data-driven modelling is to start from a set of known physical relations and then relax some restrictions that are incompatible with the feedback data. For example, to model the viscoplastic unloading response of soil, you can start off from well-known hydro-mechanical relations and the elastic behaviour, and then add viscous components that are calibrated by Bayesian inference. Physics-informed neural networks (PINNs) take this one step further, leaving the physical relations in place and approximating the rest with neural networks as universal function approximators.
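
A minimal PINN sketch, assuming PyTorch and the classic small-strain 1D consolidation (Terzaghi) equation du/dt = c_v * d2u/dz2 as the retained physical relation; the network architecture, coefficient and training loop are all illustrative.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                   # u(z, t) approximated by a small MLP
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
c_v = 1e-3                                   # consolidation coefficient (illustrative)

def physics_residual(z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Residual of du/dt - c_v * d2u/dz2; zero where the PDE is satisfied."""
    u = net(torch.cat([z, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_z = torch.autograd.grad(u.sum(), z, create_graph=True)[0]
    u_zz = torch.autograd.grad(u_z.sum(), z, create_graph=True)[0]
    return u_t - c_v * u_zz

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    z = torch.rand(256, 1, requires_grad=True)   # collocation points in the domain
    t = torch.rand(256, 1, requires_grad=True)
    loss = physics_residual(z, t).pow(2).mean()
    # In practice: add data misfit on monitoring points and boundary/initial terms
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The essential point is in the loss: the PDE residual keeps the physics in place, while additional data terms let the network absorb what the fixed relations cannot explain.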
