Ironically, in an interview planned to discuss advanced mining technologies, technology was not on our side when I called the DataCloud team in early September.
DataCloud’s director of software solutions, Steve Putt, was on hand to answer my questions, albeit offline: an unscheduled summer snowfall and power outage in the Denver area had brought down his Wi-Fi connection, and I was having issues with the microphone on my laptop. But, despite the off-putting echo and the lack of a video link, we soldiered on.
Mine optimisation waits for no man… or woman for that matter.

We began the conversation by examining challenges in data collection.
“The IoT has slowly been coming up, but the big challenge has been the ruggedness of sensors,” Putt explained. “At mines they often get broken or end up full of dust, or they vibrate apart. There are some blast sensors, for instance, that are fully encased in metal and they’re extremely rugged but they’re also very expensive. Something interesting that we’re seeing now is the use of cheap, disposable sensors.
“In any case, there’s more data now from areas that we did not previously have data from. It’s valuable, but it’s also adding to the problem of an industry that’s already overwhelmed with data and is not that well equipped to handle it.
“I’m a mining engineer. My training included two or three classes that used Microsoft Excel, more to manage the business side of things. It had nothing to do with databases or equipment data. If that’s the case, what do you do with billions of data points from multiple sources?”
It’s a good question. Even for skilled data engineers, handling terabytes of data is a huge challenge: not just managing it, but learning something from it and acting on those lessons in a timely fashion.
And this will become even more important as the industry moves from optimisation at an operational level to an enterprise level. How can you use data from multiple mines to drive predictability and performance across an entire commodity portfolio?
Evolving mining education
There are two ways to look at this predicament: you can’t manage what you can’t measure but, at the same time, if you can’t manage the data that comes from those measurements then you’re caught between a rock and a hard place.
Which is precisely why the mining industry, together with academia, needs to up its game in educating engineers.
“You’d think that education would be changing, but it’s not moving very quickly,” Putt said. “It’s pretty clear that mining engineers, or even civil engineers, are not going to be able to do as good a job with that data as somebody with a specific skill in data science. So that’s where AI – machine learning – is starting to add value.”
Good data in, useful information out
Is the data that we’re gathering today good enough, not just for our current needs but for future needs as well? I asked Putt.
“From what we’ve seen, the data is adequate,” he said. “The older stuff, maybe not. We deal with a lot of production drilling data. There’s always some junk… sometimes GPS goes out so you get points that are not even anywhere close to where they should be, or the depths are wrong, but those anomalies are pretty obvious. And that’s maybe 2% of the data.
“It’s usually just a case of calibrating or cleaning the data and then you’ve got a pretty good data source, and that seems to be true for the mill data, fleet management and truck-shovel data…
“Right now, there’s so much of it, even if we have to filter out 10%, it’s still pretty good. Using that, you’re still going to make much better decisions than somebody who just went out and looked at the pit for a couple of hours.
“I think the focus now should be reviewing what data sources we have, cleaning the data and contextualising it; which is definitely the hardest part. Also looking at what data you need to make a specific decision and ignoring the rest.
“Getting to that point is a lot of work and it requires a special team of people.
“Once we’re at the point where everything has been analysed and reviewed, then we can start going down the path of your question, which is: do we need more sensors or additional hardware? Those types of things.”
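The kind of anomaly filtering Putt describes is often simple to express in code. The sketch below is purely illustrative, not DataCloud’s method: the record fields, thresholds and units are my own invention. It rejects blasthole records whose GPS position sits implausibly far from the pattern’s median collar position, or whose depth is outside a plausible range.

```python
# Hypothetical sketch: flag blasthole records whose GPS coordinates fall
# implausibly far from the rest of the pattern, plus depths outside the
# designed range. Field names and thresholds are illustrative only.
from statistics import median

def clean_blastholes(holes, max_offset_m=50.0, min_depth_m=1.0, max_depth_m=20.0):
    """Return (kept, rejected) lists of hole records.

    Each record is a dict with 'x', 'y' (metres, local grid) and 'depth'.
    A hole is rejected if it sits more than max_offset_m from the median
    collar position, or if its depth is outside the plausible range.
    """
    mx = median(h["x"] for h in holes)
    my = median(h["y"] for h in holes)
    kept, rejected = [], []
    for h in holes:
        offset = ((h["x"] - mx) ** 2 + (h["y"] - my) ** 2) ** 0.5
        if offset > max_offset_m or not (min_depth_m <= h["depth"] <= max_depth_m):
            rejected.append(h)
        else:
            kept.append(h)
    return kept, rejected

# A GPS dropout can put a hole kilometres from the pattern; as Putt says,
# those anomalies are pretty obvious.
pattern = [{"x": 100.0 + i, "y": 200.0, "depth": 12.0} for i in range(10)]
pattern.append({"x": 5000.0, "y": 200.0, "depth": 12.0})  # GPS glitch
kept, rejected = clean_blastholes(pattern)
```

In this toy example the single glitched hole lands in the rejected list, much like the roughly 2% of junk Putt mentions filtering out.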

Make the most of machine learning
Aside from bridging the skills gap and helping to manage the tidal wave of data, machine learning is also being applied to identify optimisation opportunities within the mine value chain.
“When it comes to optimisation, there are tonnes of different variables from each operation that need to be considered,” explained Putt. “Drill and blast will have millions of data points, fleet management will have data based on the truck and excavator performance, and then there’s plant data on top of that.
“Traditionally, performance in these areas was looked at in daily or weekly averages. For instance, the daily average of the material blend going to the mill. That then allows you to look at historical averages.
“But, if you really want to dig into the intricacies of why the mill had a hiccup for an hour, it becomes very challenging for someone to grab all that data, process it, put it into one source and then find a pattern that relates to the drilling, the trucks, or maybe even the weather.
“No person can do that quickly enough for the outcomes to be useful, so that’s where machine learning tools come in. They basically allow us to do math a lot faster with a little bit of set up work.”
Traditional computing in this sense would involve a person running equations and trying to find an average or regression line, but machine learning tools allow operations to take their data and add some historical context regarding the impact each variable has had on, let’s say, the mill.
They can train the models using that information and then let them run autonomously and generate answers.
“Then, when you stack that information with data from blasting and digging, it takes that insight to a new level,” Putt explained. “That’s just not possible without machine learning.”
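As a toy illustration of that train-then-predict loop, here is an ordinary least-squares line fit in plain Python. Everything here, the hardness proxy, the throughput figures, the variable names, is invented for illustration; real mill models involve many variables and far richer methods.

```python
# Minimal sketch of the idea Putt describes: fit a line to historical
# pairs of (drilling hardness proxy, mill throughput), then predict
# throughput for a new block. All numbers are made up.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Historical data: harder rock tends to mean lower mill throughput.
hardness = [1.0, 2.0, 3.0, 4.0, 5.0]
throughput = [950.0, 900.0, 850.0, 800.0, 750.0]  # tonnes per hour, illustrative

a, b = fit_line(hardness, throughput)
predicted = a * 2.5 + b  # expected throughput for a block of hardness 2.5
```

The “set-up work” Putt mentions is deciding which variables to feed in and how to contextualise them; the fitting and prediction themselves are just fast maths.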
Isn’t machine learning just a subset of artificial intelligence, I asked?
“I tend to shy away from the term ‘AI’, because that implies that the machine has come up with a new insight on its own and I don’t think that is happening today,” Putt said firmly. “What we’re doing in mining is training computers with some inputs and letting them decide, based on more inputs, what the missing variables are and predict future patterns in performance based on what we know.”
Leverage the tools available
Fortunately, today, there are plenty of pre-programmed tools available for use in mine optimisation.
“You can just grab the code, train the models and you’re away,” said Putt. “It’s getting a lot easier as people create better tools.
“For the most part, we’re taking tools from other industries and modifying them to work in geology or geophysics. Adding, say, production drilling data to a tool that was maybe built for exploration drilling data.
“It’s pretty cool how we can do that. We’ve used systems from signal processing, like noise cancellation, and applied them to mills. We use the same kind of math and tweak it a little to detect the signature before and after a specific rock type has gone through the mill, for example.
“My university professor always said: ‘the lazy way is usually the better way’ and he’s right. Don’t try to redo stuff that already works well. Modify existing tools to do what you need.”
I wouldn’t call that lazy, I’d call that smart! I noted.
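To make the signal-processing idea a little more concrete, here is a deliberately crude sketch, not DataCloud’s actual method: smooth a synthetic mill power-draw series with a moving average, then flag the first step change, which in this toy example stands in for a different rock type entering the mill. The signal, window and threshold are all invented.

```python
# Illustrative only: smooth a noisy mill power-draw signal with a simple
# moving average, then flag where the smoothed level steps up, which
# might indicate a harder rock type entering the mill. Real systems use
# far more sophisticated filtering; the numbers here are invented.

def moving_average(signal, window):
    return [
        sum(signal[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(signal))
    ]

def first_step_change(smoothed, threshold):
    """Index of the first sample that jumps by more than threshold."""
    for i in range(1, len(smoothed)):
        if smoothed[i] - smoothed[i - 1] > threshold:
            return i
    return None

# Synthetic power draw: soft ore around 10 MW, then harder ore around 12 MW.
power = [10.0, 10.1, 9.9, 10.0, 10.1, 12.0, 12.1, 11.9, 12.0, 12.1]
smoothed = moving_average(power, window=3)
change_at = first_step_change(smoothed, threshold=0.5)
```

The same borrow-and-tweak spirit applies here: a moving average is about the oldest filter in signal processing, repurposed for a mill.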

Integrating data, integrating teams
What possibilities can simulation and modelling offer in mine design and planning? I wanted to know. Is it possible to model the full mine-to-mill value chain in real time at present?
“That’s exactly what we’re trying to do, and we’re close,” said Putt. “The one thing that’s missing is definitive information about the orebody. Unfortunately, you can’t just Google what’s in the rock. That’s why blasthole data is so useful because it fills that gap for us.
“By doing chemical signature processing on blasthole data to get things like gold grades we can create a digital model of the rock before we dig and process it. It’s not quite real time, but we can get that information pretty much within the hour.
“A lot of work goes into cleaning and getting the datasets ready to be merged. The biggest challenge is that a drill designed by, say, Epiroc was not built to be integrated with a Caterpillar hauling fleet. And the mill was made by a completely different company. The machines all have different sensors and they weren’t designed to work together.
“It would be nice, and I don’t like using this example, but if we had an Apple of mining that just had all the data ready to be used by anybody. I think that’s the race right now. It will require a lot of money and time, but it’s certainly possible.”
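For a flavour of how blasthole assays become a “digital model of the rock”, here is a bare-bones inverse-distance-weighting estimate of a single block’s grade. This is a textbook interpolation sketch with made-up coordinates and grades, not the chemical-signature processing Putt describes.

```python
# Hedged sketch: estimate a grade at an unsampled block centre from
# nearby blasthole assays using inverse-distance weighting. All
# coordinates and grades are invented for illustration.

def idw_estimate(target, samples, power=2.0):
    """Inverse-distance-weighted grade at target from (x, y, grade) samples."""
    num = den = 0.0
    for x, y, grade in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return grade  # target sits exactly on a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * grade
        den += w
    return num / den

# Blasthole assays (metres, g/t gold) around a block centred at (5, 5).
assays = [(0.0, 0.0, 1.0), (10.0, 0.0, 2.0), (0.0, 10.0, 2.0), (10.0, 10.0, 3.0)]
block_grade = idw_estimate((5.0, 5.0), assays)
```

Run block by block across the bench, estimates like this are what let the model be ready within the hour rather than waiting on laboratory turnaround.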
Enabling collaborative working
The value in these capabilities lies in getting mines to move away from siloed ways of working.
“Across mines, all the different departments are working to maximise production at the lowest cost, and you can’t do that effectively until everybody’s looking at the same information, at the same time with the same goal,” explained Putt.
“Once everyone’s working together in making the mine better and they have the information and tools to do so, that’s really where we need to be and we are not there yet!”
To address this, DataCloud has developed a platform called MinePortal. The software pulls in datasets from the drill to the mill to identify bottlenecks and build a digital twin of production. It uses 3D modelling technology to allow mines to zoom in (or out) on specific areas of the operation and analyse performance holistically.
“It’s available in the cloud,” Putt explained. “You can just log into a Chrome browser and all the data, all of the models are there ready. If there are updates, they come automatically. It’s just a quick, easy way to view all of the data in one place and to share it with colleagues.”
So, it’s an environment for collaborative working then? I asked.
“Right,” said Putt. “If you can make a rapid prototype, you can improve faster and this is, essentially, a way to rapid prototype a mine.”
A new way of mining
I asked Putt where he sees mine modelling heading.
“Automation is going to get even more relevant,” he said. “And that will improve a lot of things, including the quality of the data we get from mining operations. With remote operations, really anyone can become a mining operator; it’s almost like a video game at that point.

“And when you automate equipment, it requires more sensors… better sensors. So again, the data generated is much cleaner and there’s more of it. That could add to the overwhelm, but I actually think it’s going to become easier to manage as we employ more data engineers and scientists from different backgrounds.
“Hopefully, it will open up mining to people who never would have thought about a career in this sector. I mean, who wouldn’t want to run a haul truck or an excavator from their kitchen?”
That would indeed bring a whole new meaning to working from home.
Putt left me with some food for thought…
“Now that companies like Amazon, Google and Microsoft are all jumping into the game because of the data processing opportunities, I think there are going to be a lot of interesting changes,” he said.
“Let’s say, for example, Google wants to start a mine. You have to wonder how that mine would operate. Or, if Elon Musk wants to disrupt the mining industry because he’s sick of lithium supply disruptions, what would that mine look like?
“It probably wouldn’t look like the way mines do today.”