
Aojie Hong: The simulations are still running. Wait and be patient!

I am waiting and trying to be patient as I write this blog. Doing something else helps me take my eyes off the green progress bar. It is "terrible" that I have to wait for days before I can get the results I want.

PhD student Aojie Hong, patiently waiting during simulations. "Maybe I should have bought a million-dollar supercomputer for my PhD."

If you are planning to kill some time like I am doing now, please keep reading my story.

Many reservoir engineers tend to obtain as much data as possible to calibrate their reservoir simulation models. They believe that the more calibration they perform, the more accurate the model becomes and the lower the uncertainty is. In the decision analysis community, however, people argue that there is no value in uncertainty reduction itself: the value of obtaining more data to further reduce uncertainty must be tied to a concrete decision-making context. The decision analysis community uses a concept called the value of information (VOI) to quantify the value created by data.
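To make this concrete, here is a small Python sketch of a toy VOI calculation. The two-state reservoir, the prior probability, and the payoffs are made-up numbers for illustration only; they are not taken from my study.

```python
# Toy VOI calculation (hypothetical numbers, for illustration only):
# VOI = E[value with information] - E[value without information].

# Prior belief about the reservoir and payoffs of a develop/skip decision
p_good = 0.4                      # assumed prior probability the reservoir is "good"
payoff = {                        # assumed profits, in arbitrary money units
    ("develop", "good"): 100.0,
    ("develop", "poor"): -60.0,
    ("skip", "good"): 0.0,
    ("skip", "poor"): 0.0,
}

def expected_value(action, p):
    """Expected payoff of an action under a given belief about the reservoir."""
    return p * payoff[(action, "good")] + (1 - p) * payoff[(action, "poor")]

# Without more data: choose the single best action under the prior belief
value_without = max(expected_value(a, p_good) for a in ("develop", "skip"))

# With perfect information: learn the true state first, pick the best action
# for each state, and average over the prior probabilities of the states
value_with = (p_good * max(payoff[("develop", "good")], payoff[("skip", "good")])
              + (1 - p_good) * max(payoff[("develop", "poor")], payoff[("skip", "poor")]))

print(f"Expected value without data: {value_without:.1f}")            # 4.0
print(f"Expected value with perfect information: {value_with:.1f}")   # 40.0
print(f"VOI (perfect information): {value_with - value_without:.1f}") # 36.0
```

If the payoffs were such that the same action is best in both states, the VOI would be zero: the data could never change the decision, so reducing uncertainty would create no value on its own.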

I have spent several months figuring out how to assess the VOI in a reservoir simulation model calibration context. Although it is theoretically doable, there is a serious practical issue: calculating the VOI is very computationally expensive when reservoir simulations are involved. To calculate the VOI, I first have to run simulations to generate the possible observation data, then calibrate the reservoir simulation models to each possible data set, and finally perform production optimization over all the calibrated models. The whole process requires thousands of reservoir simulation runs. Even though I am using a simple 2D model that takes only seconds to run and a workstation that is more powerful than a regular PC, the total computational time is 2,000 minutes = 33 hours = 1.4 days. What is "worse" is that I am not done after a single VOI calculation; I need several calculations for different types of data! It is essential to find a way to shorten the computational time. Maybe I should have bought a million-dollar supercomputer for my PhD.
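As a rough illustration of why the run count multiplies out so quickly, here is a small Python sketch. The number of data realizations, the calibration and optimization run counts, and the 30-second runtime per simulation are placeholders I made up, not the exact figures from the study; they are only chosen to land in the same 2,000-minute ballpark.

```python
# Back-of-envelope count of simulation runs in the VOI workflow
# (all counts and the per-run time below are assumed placeholders).

n_data_realizations = 10     # possible observation data sets to generate
n_calibration_runs = 100     # simulation runs to calibrate the models to one data set
n_optimization_runs = 300    # simulation runs for production optimization per data set
seconds_per_run = 30         # "only seconds" per run for a simple 2D model

# The stages multiply: each possible data set needs its own
# calibration and its own production optimization.
total_runs = n_data_realizations * (n_calibration_runs + n_optimization_runs)
total_minutes = total_runs * seconds_per_run / 60

print(f"Total simulation runs: {total_runs}")   # 4000
print(f"Total time: {total_minutes:.0f} min = "
      f"{total_minutes / 60:.0f} h = {total_minutes / 60 / 24:.1f} days")
```

The point of the sketch is simply that the stages multiply, which is why even a fast 2D model ends up taking days.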

The simulations are still running. Wait and be patient!