IIoT / Big Data Analytics

Simple data analytics, better beer: Cheers!

In this Big Picture Interview, Deschutes discovers that you can get actionable info without analyzing every bit of data.

By Christine LaFave Grace, managing editor

Tim Alexander is brewery operations technology manager at Deschutes Brewery in Bend, OR. After nabbing a couple of engineering degrees, he joined Deschutes as an intern in 2006 and never left. In 2014, Deschutes recognized that it was missing some of its fermentation points – acting too soon or too late in moving brews to the next step in the fermentation process – which can waste time and affect beer quality. The brewer turned to OSIsoft’s PI Integrator for Microsoft Azure to try to predict fermentation more precisely. Alexander shared with managing editor Christine LaFave Grace what the brewer has learned.

PS: In your presentation at the ARC Industry Forum in February, you said that one of the key lessons you’ve taken from Deschutes’ foray into data analytics is, “Don’t overcomplicate things.” Can you elaborate?

TA: When we first went into this, we were all really excited – OSIsoft, Microsoft, and us. Big Data, right? (It was) “We’re going to do something really cool here; we’re going to be able to model our fermentation perfectly.” So we started sending all of our temperatures for all of the tanks up into the cloud, all of the pressures on the tanks that we had pressure (monitoring) on, all of our outputs for temperature and pressure ... We thought, why not? Let’s just send it all up there. At one point it was going to be multivariate analysis on all of these things, (but) the Microsoft data scientists were starting to look at the data, and they said, OK, what are definitely the most important things (to look at)? We said, we think maybe the vessel could have an effect (on fermentation), and brand absolutely, because different brands ferment in different ways, and then we’re trying to predict a density curve, so the density measurement.

They started looking at everything, and they said: “The temperatures are relatively stable, because they’re being controlled, so there’s really no correlation between fermentation time and temperature. There might be a little correlation with how open that microvalve is, but it’s really low. And even the vessel is not that big of an effect. You might be good enough just looking at the density and the brand.”

That’s what we did, and it ended up being this very simple model. We’re looking at literally one variable, and we’re contextualizing it with the PI Integrator, so we can group it by brand. (The data scientists) said, “Let’s start here, and then we can build if it’s not accurate enough.” We’ve been within a few percent, generally speaking, of our predictions, which is plenty of accuracy for what we’re trying to do.
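The approach Alexander describes — one variable, contextualized by brand — can be sketched roughly as follows. This is a hypothetical illustration, not Deschutes’ actual model; the brand name, density readings, and sampling interval are all invented, and the real system runs on PI Integrator and Azure rather than a script like this.

```python
# Hypothetical sketch of a one-variable fermentation model: group
# historical density readings by brand, average them into a reference
# curve, and estimate when a batch will reach its target density.
# All names and numbers below are invented for illustration.
from statistics import mean

# Historical density readings (degrees Plato) per brand, one list per
# past batch, sampled once per day of fermentation.
history = {
    "Black Butte": [
        [14.0, 10.5, 7.0, 4.5, 3.2, 3.0],
        [14.2, 10.8, 7.3, 4.6, 3.3, 3.0],
    ],
}

def brand_curve(brand):
    """Average a brand's historical batches into one reference curve."""
    batches = history[brand]
    return [mean(day) for day in zip(*batches)]

def predict_days_to_target(brand, target):
    """First day the brand's reference curve reaches the target density."""
    for day, density in enumerate(brand_curve(brand)):
        if density <= target:
            return day
    return None

print(predict_days_to_target("Black Butte", 3.5))  # → 4
```

Grouping by brand does the contextualizing work here: each brand gets its own curve, so no multivariate machinery is needed.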

It’s easier to make it work if you start simple. It’s daunting, right? “Oh, we have to look at all of that?” No. You can start as small as you want. You can look at one little process, one little piece of equipment. And maybe once you look at that, you find, “Oh, OK, it’s this really simple thing, and we can easily translate this to other processes.”


PS: What benefits have you seen from data analytics in the cloud that you weren’t getting when you were manually plotting data in a spreadsheet and looking for trends?

TA: As the fermentation might change over time, now we’re able to adjust (for that). Also, you can get an early warning if something (isn’t) going right just by seeing that brand curve vs. your measurements and saying, “OK, this batch curve is not lining up.” Long before you could see it in the fermentation itself, you can see that, oh, yeah, this (batch curve) is not normal.
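The early-warning check Alexander describes — a running batch compared against its brand’s reference curve — might look something like this. Again, this is a hypothetical sketch; the function name, readings, and tolerance are invented, not Deschutes’ implementation.

```python
# Hypothetical early-warning check: flag a running batch if its density
# readings drift from the brand's reference curve by more than a
# tolerance. The 0.5 °P tolerance and all readings are invented.

def batch_deviates(batch_readings, reference_curve, tolerance=0.5):
    """True if any reading strays more than `tolerance` degrees Plato
    from the reference curve at the same point in fermentation."""
    return any(
        abs(measured - expected) > tolerance
        for measured, expected in zip(batch_readings, reference_curve)
    )

reference = [14.1, 10.6, 7.1, 4.5, 3.2]   # brand's typical curve
healthy = [14.0, 10.5, 7.2]               # tracking the curve
sluggish = [14.1, 12.0, 9.8]              # fermenting slowly

print(batch_deviates(healthy, reference))   # → False
print(batch_deviates(sluggish, reference))  # → True
```

Because the check runs against partial data (`zip` stops at the shorter list), a batch can be flagged mid-fermentation, which is the whole point of the early warning.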

If something is going wrong with a fermentation, a lot of times it's something to do with the yeast not being happy. That might mean there's something in the wort, but that's extremely rare, and we would know if something had gone wrong. More typically it's that the yeast is having a bad day, or we didn't get the right amount of oxygen into the fermentation. Generally, oxygen helps yeast start (at the beginning of fermentation) so it can respire instead of ferment. Typically once a beer is fermented, you would never add oxygen to it, because then it's going to start aging the beer. You can add oxygen at the beginning, but you don't want to add it late in fermentation.

That's the kind of thing where if we see a slow fermentation, we can say, “All right, let’s maybe move to the next step early and raise the temperature set point, what we call free rise; that can help the yeast.” Or if it was early enough, we might say, “Let’s add some oxygen to it and help the yeast get started.” That kind of thing is good to have early warning of. The sooner you know something is wrong, the more options you have to fix it.
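The intervention choice Alexander outlines reduces to a simple branch: early enough in fermentation, oxygenate to help the yeast get started; later, free rise instead. A minimal sketch, assuming an invented 24-hour oxygenation window (the real cutoff would depend on the brand and the brewers’ judgment):

```python
# Hypothetical decision sketch for a batch flagged as fermenting slowly.
# The 24-hour oxygen window is an invented threshold for illustration.

def slow_batch_action(hours_elapsed, oxygen_window_hours=24):
    """Suggest an intervention for a slow fermentation."""
    if hours_elapsed <= oxygen_window_hours:
        return "add oxygen"   # early enough to help the yeast start
    return "free rise"        # raise the temperature set point instead

print(slow_batch_action(12))   # → add oxygen
print(slow_batch_action(60))   # → free rise
```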

Our technology lead was looking at this the other day, and he was saying, since we’ve put this system in, we’ve cut 6% off our average fermentation time, and we’ve cut 4% off our average time in diacetyl rest (a critical stage of the fermentation process). It’s pretty cool; we’re definitely seeing a marked improvement. It’s a small amount of time per batch, but over the year, it adds up to a couple of extra fermentations.

PS: How does this help brewers use their time better?

TA: They are looking at those trends and saying, “We have a weird fermentation here.” They’ll talk to us, and we’ll come up with a plan: this is what we’re going to do.

(Data analysis in the cloud) is a tool that helps them say, "OK, something's wrong," and then they might take a yeast count that they wouldn't normally take. Effectively, they’re spending their time on more-useful things and focusing on the fermentations that aren’t going well rather than walking around to all of the fermenters no matter what and getting samples. It allows for that focus on abnormal batches, checking everything at the right time rather than just checking every batch all the time.

PS: What lessons have you gleaned that you're going to apply when Deschutes opens its Roanoke, VA, brewery in the next couple of years?

TA: I think a big thing that we've learned from this is that this data is extremely useful. It's a great thing to take to senior management, to be able to say, "We've saved this much time off fermentation, we've saved a bunch of money on purchasing equipment, and we've saved time." We'll definitely set these models up in Roanoke.

The other thing here is when you get that…you're getting more consistent beer out. It tends to taste more consistent, to have the same flavors every time, and that's going to be key for Roanoke. Having that flavor match is going to be really important. So having the data is great, because the more data we have, the more we can compare, so if we're not getting flavor match, we can say, "OK, why not?"

It is going to be the next level; there will be more instrumentation there getting that data out, and we will be using that to help us quickly open that plant and be able to produce the same beer that we produce here.