The past few months have taught us several lessons about resilience, and about new models of collaboration that are leading to innovation across many areas.
I had the opportunity to read the book “The Unicorn Project” last December. As I collected my thoughts for an outline for this blog, I felt it would be a great idea to reflect on how closely the analogies echo between agile manufacturing and software development, considering some of the software we have toyed with over the past year as part of managing our software-as-a-service offering. In my mind, these endeavors carry very similar motivations, and so I wanted to share a bit of perspective from both areas.
Typically, manufacturing companies have expanded into multiple geographies to get the best advantage of raw/input materials and talent. Software houses have operated in a similar manner: cluster around the supply of raw material and talent. The gain in efficiency and quality is great in manufacturing (we still hold in awe the kinds of procedures used by certain manufacturers), and it is quite similar in software. And last but not least, as the industry matures, we see a lot of effort and time being spent in areas such as continuous learning: it has been a very popular concept in IT, one that acknowledges change as a “constant”! IT continues to take inspiration from manufacturing here to continuously improve operations.
A few areas I’d like to draw out as discussion points: (1) Proof of concept, (2) Routing, (3) Digital twin / monitoring, and finally, (4) Continuous improvements.
(1) Proof of Concept – For anyone in the IT world, it is a common scenario that before embarking on a real development initiative, a smaller-scale proof of concept is carried out. In the manufacturing world, the adoption of technologies like 3D printing has made it possible to validate several characteristics of the end product on a smaller scale before embarking on the journey to build the real thing!
Additive manufacturing plays a critical role in shrinking time to market, validating some of the key performance parameters at a better cost. It is not just the capabilities and range of materials usable in additive manufacturing that are constantly improving; there is also active work ongoing to define the standards that govern additive manufacturing processes.
There is a similarity between such an evolving standard and, let’s say, the standardization we apply to the way our microservices are deployed, much as UML did for software modeling and CAD did for design. In quite a few instances, 3D printing allows the compression of several steps, and if the POC is a success, it enables a manufacturer to create a new offering in a disruptive manner with a shorter time to market. Several industries are using 3D printing as an effective mechanism to manufacture parts required in small batches. One of the more innovative uses of 3D printing was seen in PPE (Personal Protective Equipment) face masks in recent times, helping work around a disrupted supply chain in a time of crisis.
(2) Routing forms the basis of all planning activity: from scheduling to capacity balancing to the assignment of material needs and any document/information-exchange steps. There is a pretty strong parallel between the microservice design paradigm and its evolution in IT and routing optimization in manufacturing.
How do we build it? Who/what are our bottleneck resources? What kind of scheduling do we carry out when planning critical steps like acceptance validation or performance benchmarking? When do we integrate with the next component? These are critical steps that bake the success criteria into IT projects. In a similar manner, capabilities that support innovative ways of managing your routing, including simultaneous order groups and any-order groups, provide shopfloor supervisors with the ability to plan their workload and balance priorities in the most optimal manner. In the middle of all of this, if a new, higher-priority order arrives, how best do we handle our production so that the existing as well as the “new” item can be manufactured and shipped in an optimal manner?
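The “higher-priority order arrives mid-stream” situation can be sketched with a simple priority queue. This is a hypothetical illustration (names like `RoutingQueue` are mine, not from any product): orders are pulled in priority order, so a rush order submitted later still jumps ahead of waiting work, while ties fall back to arrival order.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Order:
    priority: int            # lower number = more urgent
    sequence: int            # tie-breaker preserves arrival order
    name: str = field(compare=False)

class RoutingQueue:
    """Toy dispatcher: always hands out the most urgent waiting order."""
    def __init__(self):
        self._heap = []
        self._seq = 0

    def submit(self, name, priority):
        heapq.heappush(self._heap, Order(priority, self._seq, name))
        self._seq += 1

    def next_order(self):
        return heapq.heappop(self._heap).name

q = RoutingQueue()
q.submit("batch-A", priority=5)
q.submit("batch-B", priority=5)
q.submit("rush-order", priority=1)   # arrives last but jumps the queue
print(q.next_order())  # rush-order
print(q.next_order())  # batch-A
```

A real scheduler would also weigh setup times, material availability, and machine capacity; the point here is only that priority handling can be made explicit and testable.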
In the same manner as microservice architecture allows independent teams to collaborate even through changes and upgrades, so long as a contract binds them together, it has been quite a clear strategy for manufacturing customers to segregate the duties carried out in their ERP from those on the shopfloor of every plant. The activities at each plant are driven by an independent Manufacturing Execution System that provides the right autonomy to follow the priorities/contract coming in from ERP while, at the same time, offering the right flexibility of operation on the shopfloor.
Over the past year, we have improved several of our microservices, and the resilience baked into them, by following the principles of “hexagonal” architecture, segregating the concerns of database management to the individual services, and promoting a culture of development against a contract.
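A minimal sketch of that “hexagonal” (ports-and-adapters) idea, with illustrative names of my own choosing: the domain logic depends only on an abstract port, which is the contract, and each service owns its persistence behind an adapter, so the storage technology can change without touching the business code.

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """The port: the contract the domain logic develops against."""
    @abstractmethod
    def save(self, order_id: str, payload: dict) -> None: ...
    @abstractmethod
    def load(self, order_id: str) -> dict: ...

class InMemoryOrderRepository(OrderRepository):
    """One adapter; a real service might plug in Postgres or Redis here."""
    def __init__(self):
        self._rows = {}
    def save(self, order_id, payload):
        self._rows[order_id] = payload
    def load(self, order_id):
        return self._rows[order_id]

class OrderService:
    """Domain logic: storage-agnostic, bound only by the contract."""
    def __init__(self, repo: OrderRepository):
        self.repo = repo
    def place(self, order_id, qty):
        self.repo.save(order_id, {"qty": qty, "status": "placed"})
        return self.repo.load(order_id)

svc = OrderService(InMemoryOrderRepository())
print(svc.place("PO-1001", qty=3))
```

Because the adapter is swappable, each team can evolve its database independently, exactly the autonomy-within-a-contract pattern described above.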
Despite the best architecture patterns we followed, we found a need for several robust monitoring tools to really help run our solutions as a smooth 24/7 operation.
Right from being able to identify the health of a database instance, to microservice CPU/memory stats, to our integration glue like Kafka/Redis and the patterns of load that create a negative impact, having a good “digital twin” of our components has been a big differentiator.
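A hypothetical sketch of such a component health view: each component registers a probe, and one snapshot call rolls the results up into a single status. Real probes would ping the database, Kafka, or Redis, or read CPU/memory stats; here they are stubbed callables, and all names are illustrative.

```python
from typing import Callable, Dict

class HealthMonitor:
    """Aggregates per-component probes into one overall status."""
    def __init__(self):
        self._probes: Dict[str, Callable[[], bool]] = {}

    def register(self, name: str, probe: Callable[[], bool]) -> None:
        self._probes[name] = probe

    def snapshot(self) -> dict:
        results = {}
        for name, probe in self._probes.items():
            try:
                results[name] = "up" if probe() else "down"
            except Exception:
                results[name] = "down"   # a crashing probe is also a signal
        overall = "healthy" if all(v == "up" for v in results.values()) else "degraded"
        return {"overall": overall, "components": results}

mon = HealthMonitor()
mon.register("database", lambda: True)
mon.register("kafka", lambda: True)
mon.register("redis", lambda: False)     # simulate a degraded cache
print(mon.snapshot()["overall"])         # degraded
```

The value of the rolled-up snapshot is that load patterns and failures become visible as data, which is what makes the “digital twin” actionable rather than decorative.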
(3) Monitoring – This is exactly what we see from our customers too. Having data from the shopfloor available to monitor (and, more importantly, to make the right data-driven decisions from) is a critical enabler of higher availability and better quality.
From each device, several data points are accessible: if the past was about visual interpretation of such attributes on the shopfloor, we see a future where this data can be further analyzed in a data lake in the cloud, helping customers draw inferences that were not possible when such information was analyzed in isolation. The basic tenet we felt gave us the best bang for the buck: measure what matters, act on data, and constantly improve based on the measurement. The minute we have a “measurable” operation, ongoing changes and tweaks become much more objective. It may also happen that we start off measuring an “incorrect” attribute; at such a point, it is important to change the baseline so we restart the continuous improvement curve.
Also, it is critical to invest in “small steps” that bring a beneficial transformation. In IT, these may be small architectural or design changes; in manufacturing, small scheduling or optimization changes. It is important to measure the value of these small changes and validate them in a smaller cycle prior to embarking on transformative steps. Whether it is a change in database sizing or a query optimization, both contribute to the same overall goal of getting more done. However, it is important to check whether we are getting more done with “less”!
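The measure-validate-improve loop above can be made concrete with a toy sketch (the `Metric` class and the latency figures are purely illustrative): record a baseline for the metric you care about, apply a small change, and keep it only if the measurement actually improves; and if you discover you were measuring the wrong attribute, reset the baseline and start the curve again.

```python
class Metric:
    """Tracks one measurable attribute against a baseline."""
    def __init__(self, name, baseline):
        self.name = name
        self.baseline = baseline

    def improved(self, new_value, lower_is_better=True):
        # accept a small change only when the measurement beats the baseline
        if lower_is_better:
            return new_value < self.baseline
        return new_value > self.baseline

    def rebaseline(self, value):
        # if we started off measuring the "incorrect" attribute or level,
        # reset so continuous improvement restarts from honest ground
        self.baseline = value

p95_latency = Metric("order-api p95 latency (ms)", baseline=480)
print(p95_latency.improved(430))   # True: keep the small change
print(p95_latency.improved(510))   # False: roll it back
```

The discipline, not the code, is the point: every small step gets a number attached to it before the next step is taken.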
Digital transformation of shopfloor data allows us to perform analysis at such a scale and also to validate best practices across plants.
(4) Continuous improvements vs big-bang changes: in the past, transformations were expected to be big-bang changes with long gestation cycles and a typical waterfall model of adoption with an ominous, all-or-nothing kind of evolution model. As agility reaches the next level, we see a much stronger preference to adopt innovations and changes in bite-sized chunks. Rather than change the entire manufacturing solution overnight, customers show a preference for adopting simpler enhancements that do not need a large workforce to implement.
This is exactly how our solutions have started to shape themselves too: enabling hybrid adoption at a pace that is convenient for our customers, and safeguarding critical changes with “feature toggles” that allow customers to continue with less disruptive solution usage for a certain time before bringing such changes to their end users.
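A minimal feature-toggle sketch, with purely illustrative names: a critical change ships dark, and each customer or plant opts in when ready, so the stable path keeps running undisturbed for everyone else in the meantime.

```python
class FeatureToggles:
    """Per-tenant on/off switches for features; everything is off by default."""
    def __init__(self):
        self._enabled = {}                 # (feature, tenant) -> bool

    def enable(self, feature, tenant):
        self._enabled[(feature, tenant)] = True

    def is_enabled(self, feature, tenant):
        return self._enabled.get((feature, tenant), False)

toggles = FeatureToggles()
toggles.enable("new-scheduler", tenant="plant-042")

def schedule(tenant):
    if toggles.is_enabled("new-scheduler", tenant):
        return "new scheduling path"
    return "stable scheduling path"        # default until the tenant opts in

print(schedule("plant-042"))   # new scheduling path
print(schedule("plant-007"))   # stable scheduling path
```

The same mechanism also gives an escape hatch: if the new path misbehaves for one tenant, it can be switched off for that tenant alone, without a redeployment.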
It is not just about feature toggles; we see a lot of value in shaping our solutions to offer a more seamless adoption capability: on-premise components can work with the latest and greatest innovations on the cloud, and the cloud works with multiple ever-changing components offered from ERP. But the ability to consume these capabilities rests firmly in the hands of our customers. What is often underestimated is the need to “continuously learn” along with “continuously improve”. It is not realistic for a central team to out-innovate the collective intelligence of every factory floor. Similarly, a central architecture team may not always be able to tackle the nuances of every microservice. However, as each team (central architecture or individual service team) starts to continuously learn, the results can be dramatically better.
In essence: decentralization with the right rules is good; it allows faster innovation as well as better end results in the form of quality. Continuous improvements in smaller sizes make it easier to adopt the latest innovations, with tighter feedback cycles enabling us to minimize waste. And the digital twin: this is a game changer!
As I share this blog, I am quite glad to say we expanded our presence from 2 data centers to 4, changed the architecture of over 8 microservices, and improved our availability by a few more decimal points from where we began at the start of the year.
The views expressed in this article are those of the author and may not reflect those of SAP.
About the Author: KG Chandrsekhar is a Vice President with Digital Manufacturing at SAP Labs India. Follow on Twitter: @kgc_tweet
The post Manufacturing and agility appeared first on NASSCOM Community |The Official Community of Indian IT Industry.