Cockrell School of Engineering
The University of Texas at Austin


Associate Professor Michael Pyrcz

Challenge: In the energy industry we have been working with “big data” for a long time. Our seismic surveys rival the data volumes acquired by Google and NASA, yet the subsurface they image is highly variable in both time and space. We also have a wide variety of well, seismic, and production-based subsurface measurements spanning many scales, each limited in veracity and coverage. The question we face is: how do we use all of this data to support billion-dollar investment decisions?

Solutions: In the presence of big data, the following approaches are essential for the best possible integration to support development decision-making. (1) We must retain geoscience and engineering expert knowledge in all steps of subsurface modeling, and avoid the temptation of purely data-driven methods. (2) We should use established methods to de-bias data and impute missing values to maximize the value of our available data. (3) We must use robust spatial statistical methods to integrate information from all data sources, accounting for data location, scale, and accuracy, because the spatial context matters. (4) With appropriate geoscience and engineering context, we may leverage novel statistical learning or machine learning methods to infer and predict salient subsurface features and responses.
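Steps (2) and (3) can be sketched in a few lines of Python. This is a minimal, illustrative example only: the well coordinates and porosity values are hypothetical, mean imputation stands in for the established imputation methods mentioned above, and inverse-distance weighting stands in for the more robust spatial methods (such as kriging) that also account for data scale and accuracy.

```python
import math

# Hypothetical well measurements: (x_km, y_km, porosity); None marks a failed log reading.
wells = [
    (0.0, 0.0, 0.12),
    (1.0, 0.0, 0.15),
    (0.0, 1.0, None),
    (2.0, 2.0, 0.10),
]

# Step (2): impute the missing porosity with the mean of the observed values,
# a deliberately simple stand-in for established imputation methods.
observed = [p for _, _, p in wells if p is not None]
mean_p = sum(observed) / len(observed)
wells = [(x, y, p if p is not None else mean_p) for x, y, p in wells]

# Step (3): estimate porosity at an unsampled location with inverse-distance
# weighting, a basic spatial method that honors data location.
def idw(x0, y0, data, power=2.0):
    num = den = 0.0
    for x, y, p in data:
        d = math.hypot(x - x0, y - y0)
        if d == 0.0:
            return p  # exact hit on a sample: return the sample itself
        w = 1.0 / d ** power
        num += w * p
        den += w
    return num / den

estimate = idw(0.5, 0.5, wells)
```

The point of the sketch is that even the simplest spatial estimate weights nearby data more heavily than distant data, which purely tabular (non-spatial) methods ignore.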

Professor Eric van Oort

Challenge: The question for oil and gas companies is whether to embrace big, messy data analytics, which is radically changing the industry, or to ignore it at the risk of being left behind and outcompeted by companies that champion the sophisticated use of data in all aspects of their business. There are excellent opportunities to use data to improve the time, cost, profitability, and safety of well construction. There will be a clear shift in the skill sets required of petroleum engineers, who in the future will need data analysis and programming skills to compete effectively for industry jobs.

Solutions: To help oil and gas companies in their goal to get (more) value from data, there needs to be facilitation of easy access to information effectively extracted from all types and sources of data (static vs. dynamic, structured vs. unstructured, old vs. new sensors, surface vs. downhole sensors, etc.). We can look to other industries and borrow tools such as spider bots, natural language recognition, machine learning, and artificial intelligence to make sense of the data and turn it into valuable information. This information then needs to be visualized and made easily digestible, so that companies can use it in their business control processes to help reduce costs and achieve a completely safe work environment for their employees.
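As a toy illustration of extracting information from unstructured data, the sketch below tags hypothetical daily drilling reports with known trouble indicators. The report texts and keyword list are invented for illustration; a real pipeline would use the natural language and machine learning tools mentioned above rather than fixed keywords.

```python
from collections import Counter

# Hypothetical unstructured daily drilling reports (free text).
reports = [
    "Stuck pipe at 9500 ft; worked free after 3 hrs. Mud losses observed.",
    "Drilled ahead 10200-11050 ft, no issues.",
    "Mud losses again; lost circulation material pumped. Stuck pipe avoided.",
]

# Illustrative trouble indicators to search for in each report.
TROUBLE_TERMS = ["stuck pipe", "mud losses", "lost circulation", "kick"]

def tag_report(text):
    """Return the trouble terms that appear in one free-text report."""
    text = text.lower()
    return [term for term in TROUBLE_TERMS if term in text]

tags = [tag_report(r) for r in reports]
# Aggregate across reports into a digestible summary for decision-makers.
trouble_counts = Counter(t for ts in tags for t in ts)
```

Even this crude keyword tagging turns free text into structured counts that can be visualized and tracked over time, which is the essence of making information "easily digestible."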

Academia has an important role to play in this facilitation, with the added benefits that (1) there are great R&D discoveries to be made in analyzing industry data thoroughly; (2) data analysis allows academia to teach and train students in an entirely novel and superior way, while at the same time providing them with the essential skills needed for their future data-centric careers.