The key maintenance challenges manufacturers are facing - Part One - With Andy Gailey

Welcome to the Trend Detection podcast, powered by Senseye, an industry leader in using AI to drive scalable and sustainable asset performance and reliability. This is a new publication designed to help you come away with ideas on how to achieve maintenance efficiencies.


For this three-part series, we're joined by Andy Gailey, founder of UPTIME Consultant Ltd, which provides a holistic approach to asset care and maintenance management.

In the first episode of this series, we discuss how much maintenance and unplanned downtime have changed over the past 20 years, and the major challenges that manufacturers face today.

Please subscribe via your favorite podcast provider (Apple, Google and Spotify) if you'd like to be notified about future episodes and let us know your feedback by leaving us a review.


Transcript

Key topics covered

  1. The impact of unplanned downtime
  2. Culture and predictive maintenance
  3. The role of the IT team
  4. The key maintenance challenges
  5. Subscribe to our podcast

Niall Sullivan, Senseye: Welcome to the Trend Detection podcast, or this series of the Trend Detection podcast. For this series I'm really pleased to invite and talk to Andy Gailey, who knows Senseye very well. Really good to have you today, Andy. Andy is a consultant and his company is UPTIME Consultant. He'll explain what he does just to set the scene, and then we'll dive into the conversation, which is going to be all about maintenance. All those people who love maintenance are going to love this conversation, I think. So I'll hand over to you, Andy, just to briefly introduce yourself and your experience.

Andy Gailey: Thanks, Niall. Niall and I have known each other for about a year now, but I've had a relationship with Senseye for probably about five years, I think. We met through a client we were both interacting with to do with predictive maintenance. So I'm Andy Gailey, I'm the owner and founder of UPTIME Consultant Limited, based here in Coventry in the United Kingdom. We tend to do most of our business within the UK, but we have done collaborative work as far afield as Mexico for my previous employer, PepsiCo.

I formed UPTIME Consultant seven years ago, and the idea was to bring together all of the condition monitoring aspects, maintenance strategy, lubrication, looking into the future with connected technology, under one umbrella, so that when I go to a client we can start with lubrication and looking at criticality, but then move into what critical spares we need to bring in and what the failure modes look like on these particular assets.

And is there a predictive technology we can use to put the planned maintenance to one side and go on condition? My background is as a mechanical production engineer by trade. I started when I was a teenager, with an apprenticeship, and I spent my formative years in aerospace with a turbine manufacturer. Then I moved into subcontract machining, again aerospace and prototypes. Then I found myself in motorsport for about six years, working on rally championship cars and four-wheel-drive systems; I was involved with the Jaguar XJ220 and the McLaren F1. And then in the mid-1990s I applied for a job and ended up in the food industry working for PepsiCo International. The brand I worked for is Snack Foods, which is based in the UK. I worked in that company for just over 20 years, making salty snack food, including Doritos, Squares, Wotsits, anything that was considered a snack, not a potato chip. And in the last nine years of working there, I was asked to go to one side and formulate a condition monitoring and lubrication strategy for the business. So that's me.

Niall Sullivan, Senseye: Fantastic. Very good intro, and lots of interesting use cases, I'm sure, as well. I was going to start by talking about unplanned downtime, which, funnily enough, we're releasing a big report on next week, so that's a little bit of a plug there, and ask how unplanned downtime has changed in the last 20 years. But I think I'd prefer to start, if it's okay, by looking at how maintenance itself has changed, and I'm sure unplanned downtime features in there quite heavily. Could you give a guide to how that's changed throughout your career, Andy?

Andy Gailey: Like most maintenance engineers or people involved in reliability, I fell into it by accident. Up until I joined PepsiCo, I was in manufacturing, as in making things, so removing metal off aluminum or magnesium casings to make transaxles and things like that. It was only when I went into the food industry that I worked in production, on process and packaging lines. And that was an all-encompassing role as a technician. We weren't called maintenance engineers, we were called technicians, because we were expected to do everything. So that included the chemical boil-outs and any other low-level maintenance work. For anything specialist, we would bring in specialists to do it, so anything that involved regulations, like gas work. So I transferred my skills from making things in production engineering into becoming somebody that looked after assets.

And the main thrust was, and I still see this in a lot of companies, reacting to things that you didn't expect were going to happen. So the unplanned downtime aspect is still massive. It is never going to go away, because it's an endemic thing. Things fail over time through different failure modes. And we can attack everything with either a planned, a predictive, or a proactive form of maintenance. Most companies have reacting as the system they work to, so they employ engineers to go and fix these things as they go wrong. Then the secondary step might be to bring in some planned inspections that say, "Let's go and look at this and tick a box every 30 days." And that does just become a tick-box exercise if there's nothing in there specific for somebody to action when they go and do it.

If you go to the next level, the people you're going to see that are kind of advanced will have some sort of predictive technology they're using. It will normally be something like ultrasound meters or vibration detectors, and if they're heavily electrical, they might have thermal imagers. But often I find that these pieces of equipment are kept in a cabinet because they cost a lot of money, and people are denied access to them, or aren't trained on them to get the best value. So they end up in a situation where they're still reacting to the everyday. The thing is, that is a vicious cycle, and people are employed as maintenance engineers basically to turn up for 10 or 12 hours on a shift and go from one failure to the next. It's a soul-destroying place to be. I've been in that situation on shifts, and the time goes quickly, but you find yourself day after day, week after week, revisiting the same old nightmare that you've had before. And that's basically the story where I worked, until it was decided it wasn't fit for purpose.

It's still a big thing, it doesn't change. People think everything's bright and rosy and everybody works predictively, but it hasn't really changed in my experience.

Niall Sullivan, Senseye: I was going to say, in terms of unplanned downtime itself, what impact have you seen with companies you've worked with in the past?


Andy Gailey: Well, the big thing about unplanned downtime is that it affects lots and lots of things: waste streams, KPIs, operations, turnaround. And people are drawn towards blaming the engineering team, the maintenance team, because they believe, and this again is a big misconception, that reliability can be maintained into an asset, and it can't.

So take anything you buy as an asset, whether it's the kind of thing I worked with, fryers and ovens and conveyors and bagging machines, or even your car. You go and buy a car, and whatever's going to go wrong with that car, you can't maintain it to be better. It's already been built in. The thing that you can do is preempt that unplanned event by using predictive technology. You can even start just with people using their eyes and ears. It's amazing how many people work with assets.

And when you go and talk to the operators operating those assets, they've got masses of material and anecdotes they can tell you about what happens with their assets, and nobody's ever asked them. This is the really strange thing. I go into companies that have got problems with double-digit percentages of unplanned downtime, and I'll get the same from both sides. I'll get the operators and maintainers saying "they don't listen to us", and I'll have the managers, so when I speak to the production manager, operations manager, engineering manager, and even the plant manager, they'll say "they don't listen to us" too. The way you've got to do this, you've got to break down these barriers, get people in the room together, and sometimes it requires a third party... Hopefully I can do that sometimes, a third party to say, "We're all in the game together. Let's decide what the plan of action is for your business and put some overarching things in place. How many tons of X have you got to make per day? What's the tonnage per hour? What line capabilities have you got? What do you expect? What's your planned downtime? What's your changeover time? How can you affect that as well?"

So one impacts the other, and as soon as unplanned downtime happens, waste starts to form. Especially in the food industry, you use lots of heavy ovens, and they use masses of gas; minutes of wasted gas is throwing hundreds of pounds on the floor. You can't stop them, you have to keep them on low fire. And then you may have hours of repair, and it all pulls downstream, to get the line running again. So you generate waste, your yield goes away on whatever ingredients you're using, and your labor stands idle as well. There are multiple waste streams going on.

Niall Sullivan, Senseye: Just to touch on your earlier point about getting the different parties in a room together, because I think that's something we see at Senseye as well; there's always a disconnect across departments. From your point of view, do you feel there needs to be a cultural shift for these efforts and projects to be successful?


Andy Gailey: Say we talk about a platform like Senseye, or any other predictive technology: at the end of the day, what it's aimed at is making business sense for the client. So the client, whether they've got to make a certain number of cars or a certain tonnage of product, it's aimed at making that better, making it a better outcome for everybody. And it's amazing, when you go into some companies, how you find the different aspects of the company are siloed, so that operations will work and expect things of maintenance, and maintenance don't understand, and vice versa. So it's very important; there's a massive human aspect to it. Even with Senseye, it's the human interaction, it's the people that support the platform, the back-office people at Senseye and then obviously the guys that go and support the product on people's sites.

But it's also the feedback from the technician, the engineering group, to say, "Actually, that insight we had from the Senseye application gave us a really good outcome." And the thing is to go and measure that, put a monetary value or a time value on it, and be able to report that as a benefit of using the platform. So the experience I had at PepsiCo, when we realized maintenance wasn't providing the value and operations weren't getting the availability, was that we completely changed the system. There was a day one, back in 2005, when everybody, from the operators, the maintenance team, the managers of maintenance and engineering, the operations group, health and safety, the utilities engineer, to the plant manager, was all in the same room and all got on the same page. So we all agreed that we had an aim and that we could provide benefits to other people.

So in other words, maintenance really is the arbiter of availability for operations to hit their targets. And in our case, it was to produce pallets of goods that Tesco, Sainsbury's or Asda had to have, not wanted, had to have. There was a commitment, shelf space, and there's a whole stream that runs behind that. And the thing people don't realize is, once a product doesn't appear on a shelf because of unplanned downtime or an unplanned outage, that shelf space can be lost, and it is real estate; it costs companies a lot of money to get. So it's a big joined-up piece. And the best thing that I ever saw, in my experience, came once we were on board with where we were going to go: we decided where we were, and it was in a bad place, and we had some external help looking at reliability-centered maintenance.

Some of us went on training courses; I did. And we got to understand the mindset of working proactively and predictively, and that reliability is something that can be affected, it can be reduced over time. And we understood that there were short-, medium- and long-term goals that we were all going to work towards. A few months into that, the next announcement was that the Operations Manager was going to swap roles, as a secondment for one year, with the Engineering Manager, and the Engineering Manager was going to pick up the Operations Manager's role. So basically these two guys swapped roles, and they knew they had one year, so they knew they were going back. They knew they relied on each other to look after their seat while they went into an aspect they maybe didn't fully understand. And that made these two people like twins, basically; they stuck together over that year because they understood they were both relying on each other.

And when they went back into their roles, straight away, obviously, these people now understood what operations require from engineering. And operations understood what engineering requires in order to make the availability for them to hit their goals. That was the best thing I ever saw happen. And I think it's only a company like PepsiCo that could actually have the temerity to go and do something like that, to just say, "Let's reinvent it." I was there for 20 years, but it's like I worked for 10 different companies, because every two years you revisited what you were doing, you reinvented the model, you didn't stay static, your goals were stretched, and you stretched your goals for yourself as well. You were encouraged to almost try things just to fail fast and see: is there any value in this, or can we now pull it to one side?

Niall Sullivan, Senseye: And in terms of technology specifically, we've talked about operations and maintenance; what role does IT play, obviously on the technology side, from your experience? Because I've heard differing accounts, depending on which companies you speak to: either IT are heavily involved in decisions, or they're just there to be told, "We want to buy this product, procure this product for us." So I was just wondering what you've seen from that point of view.


Andy Gailey: I can probably only talk about it from the experience that I had, where IT was one of those things that were... Actually, other than the systems engineers that were on site supporting the operation with the assets, the IT department was siloed, as in it was offsite and it was a third party, in a central location, looking after various locations. Their modus operandi was that they would like to have nothing new come in, at all, that would affect their IT setup, because it's either something they have to take a risk with or something they haven't got any bandwidth for. So those things had an effect. In the case of an application like Senseye, IT have to be very involved with what's going to go on. And with some companies, when I talk about, say, Senseye or other companies that are cloud-based, they put their hands up and go, "Well, we don't want to do cloud-based." But then when you go and look at their operation, you look at the legacy equipment that's hanging around and the USBs lying around in drawers, and you question their knowledge about the security of their infrastructure when they're running an old Windows XP PC and they've got uncontrolled USBs lying all over the place.

Again, I'm not very au fait with IT, but IT, that kind of party, has to be at the table so that they understand what's going on.

Niall Sullivan, Senseye: That's interesting. I was at an event this week and there was still a lot of talk about that; when you mention cloud, straight away it's, "Can we do on-premise?" It's coming back into the conversation. Because from the people I speak to at events and things, it seems like there's been a shift, but in some aspects it seems to be reversing back again.

Andy Gailey: It's just like maintenance. I think there is still some old-school thinking that sees on-premise as more secure. It doesn't really matter: they've got all their other systems, SCADA systems, networked and open to attack from outside, yet for some reason they see cloud, which is now ubiquitous, as being some sort of risk. Well, everything's a risk if you don't take mitigation. Obviously, you have to speak to your supplier and make sure mitigation's in there and that it is a very low risk. And again, when I've spoken to people, clients, thinking about this, I've said, "How important are the outputs and packages you're going to take out of a predictive maintenance tool going to be to a competitor or to somebody that's a malicious actor?" It's probably worth next to nothing. Even if they had a whole trend base of a particular asset in a particular location, it's probably going to be worth very little. If there's a control aspect to it, obviously that's where the concern needs to be; if somebody can turn your line off, or disconnect all the power to your forklifts, or make all the satnavs in your trucks go blank-screen, then that's the kind of thing they should be looking at.

Niall Sullivan, Senseye: Exactly. Absolutely. And actually, speaking of that, you could call that a challenge, getting over that cloud hump as it were. But we've also talked about unplanned downtime, and we've talked about the culture shift, or disconnect, let's say, within organizations. Are there other major challenges, fast-forwarding to today, that organizations are really struggling with, whether it's maintenance or whether it's to do with assets? In the current climate, I have a feeling where you might start with that, but I won't put words into your mouth.


Andy Gailey: Looking at what's happened over the last six months or so, it's very, very hard to call what's going to happen even in 12 months, never mind two or five years. Power, the use of power, is going to become more expensive. And I do know there are people in the food sector that are struggling with that type of problem. They've probably hedged their power supply, but that only works for so long; you can only hedge for three to six months and then you'll be back to spot price. Things like compressed air have always been expensive; it's the most expensive utility you can buy. So understanding where your power losses are, from that point of view, matters. On transport, it's looking like diesel is going to go through the roof. There are people now talking about $300 a barrel for fuel oil, so that's three times the cost. Transportation was disrupted by the events of two years ago and it hasn't recovered, and personally I can't see it recovering. I think the supply chains were already on the edge back in 2019. They were ready to break and collapse, and it just took that nudge to push them over the top.

Making sure that your planned production gets over 95% availability, with product out the other end of the line, is going to be very, very important. A lot of the slack has probably already gone; companies have probably already weaned out their skilled staff and put in lower-paid staff, and they may be struggling with operating and maintaining equipment. There are lots and lots of challenges.

Niall Sullivan, Senseye: Absolutely. And it's actually further down the supply chain as well, for all those issues you mentioned. In terms of solving those challenges, there are lots of different maintenance approaches, which maybe you could detail a little bit. Which one is the most effective? Because it's not just a case of only doing predictive maintenance and that's it, unless you think differently of course. Which maintenance approach provides the right blend to overcome some of those challenges?

Andy Gailey: For me, there are three approaches you can take to maintenance. You can take the very, very traditional one of: we will run this equipment until it starts complaining or breaks, and then we'll fix it. So a fully reactive workforce that only looks at basic cleaning and fixing things when they break. Then, if companies are a bit switched on, they might bring in some planned maintenance inspections. These usually migrate from an OEM manual. The OEM doesn't really know a lot about the assets in production, they just make the assets. So the maintenance that's included in those manuals usually isn't fit for purpose; they don't know how hard you're going to run your plant. A lot of people try to run the plant above the operating envelope anyway, from day one; they try to stress it.

So that's the next thing. You see lots of planned maintenance, but probably not providing all the value that you expect. So you still have companies where maybe 30% of their input is reactive work, another 60 to 70% would be planned, and very little is proactive or predictive. The proactive work they do might be a bit of lubrication as part of the planned aspect. The ones that get it, and actually move ahead, not only in their business but against their competitors, are the ones that go and study things like RCM, reliability-centered maintenance. They understand that reactive has a place; it is part of an overall maintenance strategy. There are pieces of equipment that are either mitigated by duplication, or they're such low level that they can be acted upon within 30 or 45 minutes and brought back online.

And then, with that, you would go and look at, I would say, a criticality study of what you've got. So you have to understand the plant you've got and understand the failure modes and effects on that plant. And then you aim your predictive and proactive tools, so vibration, ultrasound, thermography, oil sampling or acoustic emission, at those failure modes. If there are hidden failure modes, as it says within reliability-centered maintenance, then you come up with a planned preventive operation where you can go in and either check for that failure mode or make it good, so in other words, preemptive. There are a lot of people who still believe that everything fails on a bathtub curve, so you get lots of early-life failure at the start and lots of wear-out failure at the end, but everything's hunky-dory in the middle of that bathtub curve with only sporadic failures. But there are many patterns of failure; there have been six identified, I think, within RCM. The majority of them are actually random in the timeframe, I think it's 82%. 82% of all failures that take place within rotating and moving assets are random events. So if you've got a random event and you put a timed inspection against it, you're wasting your time. You can't go and look every 30 days if it's a random event.

So you've got to put some strategy in, and that is: you look at it with a predictive tool.
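
As a side note to Andy's point, one common way to describe these failure patterns is the Weibull hazard rate, where the shape parameter separates early-life, random, and wear-out behaviour; a constant (random) hazard is exactly why a fixed 30-day inspection adds little value. The sketch below is illustrative only and is not from the conversation; the shape and characteristic-life values are assumed for the example.

```python
# Illustrative sketch: Weibull hazard rates for three failure patterns.
# beta < 1 -> infant mortality (hazard falls with age)
# beta = 1 -> random failures (constant hazard, no "right" inspection interval)
# beta > 1 -> wear-out (hazard rises with age)
# The classic bathtub curve is these regimes stitched together.
import math

def weibull_hazard(t: float, beta: float, eta: float) -> float:
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

eta = 1000.0  # assumed characteristic life in operating hours (illustrative)
for label, beta in [("infant mortality", 0.5), ("random", 1.0), ("wear-out", 3.0)]:
    rates = [weibull_hazard(t, beta, eta) for t in (100, 500, 900)]
    trend = "falls" if rates[0] > rates[-1] else "flat" if rates[0] == rates[-1] else "rises"
    print(f"{label:>16}: hazard {trend} with age  " + ", ".join(f"{r:.2e}" for r in rates))
```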

Niall Sullivan, Senseye: Exactly. I like your analogy there, the bathtub curve. I quite like that. That's interesting.

Andy Gailey: I was taught that as an engineer back in the seventies. In the little bit of maintenance training I did have, they were talking about the bathtub curve: "it looks like this". Every engineer who's been through an apprenticeship has notes about the bathtub curve. Things fail early, then they decline to a random failure mode, and then things wear out. That's what all engineers were taught in the seventies and eighties, that this is how things work. Prior to that, if you went back to the Victorian era, they just thought that things wear out over time, and most things in the Victorian era did. They were very simple systems: they started new, they were over-engineered, like steam trains, and then over time the surfaces wore and they trended towards a failure at the end.

Very few things follow that wear-out failure curve. Most follow either a straight line or, the most prevalent one, infant mortality, failing very, very early in the life cycle. So if you're going to go and change a bearing out when it's not failed, the chances are you've now put in a failure, because there's more chance of the bearing failing through something you've done as you've put it on. You may have assembled it incorrectly, you may have installed it incorrectly, you may have put the wrong lubrication or no lubrication in. Purely by taking an action through a planned event, that you're going to change a bearing every 60 days, you could actually have a detrimental effect on the whole asset.
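
To put a rough number on the point Andy makes about the 60-day bearing change, here is a back-of-envelope sketch: if in-service failures are random (a constant hazard, i.e. exponentially distributed), scheduled replacement of a healthy bearing does not reduce the chance of failure, and every replacement adds its own installation risk. All rates and probabilities below are made-up assumptions for illustration, not figures from the podcast.

```python
# Illustrative comparison: run-on-condition vs. fixed-interval bearing replacement
# when the in-service failure mode is random (memoryless exponential model).
import math

HAZARD_PER_DAY = 1 / 1000   # assumed random failure rate: roughly 1 failure per 1000 days
P_BAD_INSTALL = 0.03        # assumed chance each replacement introduces an early-life failure
HORIZON_DAYS = 360
REPLACE_EVERY = 60          # planned change interval from Andy's example

def p_random_failure(days: float) -> float:
    """Probability of at least one random failure over a period (exponential model)."""
    return 1 - math.exp(-HAZARD_PER_DAY * days)

# No planned changes: only the random hazard applies over the horizon.
p_no_planned = p_random_failure(HORIZON_DAYS)

# Planned replacement every 60 days: the random hazard is memoryless, so each
# interval carries the same risk as before, plus the risk of a bad installation.
intervals = HORIZON_DAYS // REPLACE_EVERY
p_survive_interval = (1 - p_random_failure(REPLACE_EVERY)) * (1 - P_BAD_INSTALL)
p_with_planned = 1 - p_survive_interval ** intervals

print(f"P(failure in {HORIZON_DAYS} days), run on condition:        {p_no_planned:.1%}")
print(f"P(failure in {HORIZON_DAYS} days), change every {REPLACE_EVERY} days:    {p_with_planned:.1%}")
```

Under these assumed numbers the scheduled-replacement policy comes out worse, which is the detrimental effect Andy describes; the conclusion obviously depends on the rates you plug in.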


Subscribe to our Trend Detection podcast

We post weekly episodes on a wide range of different subjects including predictive maintenance, Industry 4.0, digital transformation and much more.

Here are all the places you can access/discover the latest episodes of the Trend Detection podcast: