Operations in Year 2022
I am motivated by the vision of how the manufacturing operations industry and procurement will evolve by the year 2022. To me, the new foundation for the role of an effective CPO and COO in 2022 has already been laid. Having core domain knowledge of procurement and supply chain operations is a prerequisite, but a hands-on understanding of data science and automation in business scenarios is where the strategic direction lies for CXOs.
VALUE STREAM MAP
The diagram below shows a value stream map of a manufacturing organization's operation process: an arrangement of six major verticals, running from Sales Forecast to Order Fulfillment.
Each vertical has its own individual business environment variables and multiple stakeholders. For example, the Sales Forecast is generally a consolidated demand summary statement of each individual sales office or unit in the distribution network.
Sales and Operations Planning (S&OP) is an optimization activity in which SKU production cost, lead time, and similar variables are the objectives to be optimized.
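To make this concrete, here is a minimal sketch of how one S&OP decision might be framed as a linear program. The SKU costs, machine-hour figures, and demand numbers are illustrative assumptions, not data from any real operation.

```python
# A minimal sketch of S&OP as a linear program: minimize total production
# cost across SKUs, subject to forecast demand and plant capacity.
# All costs, hours, and demand figures below are hypothetical.
from scipy.optimize import linprog

costs = [4.0, 6.5, 5.2]          # per-unit production cost of three SKUs
machine_hours = [0.5, 0.8, 0.6]  # hours consumed per unit of each SKU
capacity = 10_000                # total machine hours available this cycle
demand = [3_000, 2_500, 4_000]   # forecast demand per SKU (minimum to produce)

result = linprog(
    c=costs,                              # objective: minimize total cost
    A_ub=[machine_hours],                 # capacity constraint
    b_ub=[capacity],
    bounds=[(d, None) for d in demand],   # produce at least forecast demand
    method="highs",
)
print(result.x)    # optimal production quantity per SKU
print(result.fun)  # minimized total production cost
```

Real S&OP runs juggle many more constraints (changeovers, batch sizes, multiple plants), but the structure stays the same: objectives and constraints in, a production plan out.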
It is important to understand that efficient interaction between these verticals is critical for the bottom line of a company. An efficient interaction means a seamless transfer of clear, specific, and accurate requirements. How I wish this were the case. In my view, demand forecasting and S&OP continue to be the starting point of the "Leaky Buckets" phenomenon. "Leaky Buckets" is my metaphor for an inefficient operation in which costs leak away due to poor quality of execution.
A COO must develop a set of performance indicators to measure the Leaky Buckets. Prima facie, here is what I think the problems look like from a data science perspective. Although the diagram is self-explanatory, the three key problems from a data science perspective are data forecasting, data availability, and data optimization.
PRESENT PRACTICE
Before I propose a solution, I think it is important to explain the "as-is" scenario, i.e. the present practice.
Sales Forecasting
Let us take the Sales Forecast as an example. We are all aware of the bullwhip effect in the supply chain. How does it start in the first place? The answer is inaccurate sales forecasts.
Each sales unit in the distribution network forecasts sales of each SKU it sells every month. The data from all the sales units are then consolidated into one Excel file. Then, an average scaling/normalization function is applied across all SKUs in the direction proposed by management. Fair enough? No, it isn't. I find this approach lazy.
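For concreteness, here is roughly what that Excel workflow computes, sketched in pandas. The file paths, column names, and the uniform 1.05 scaling factor are all hypothetical.

```python
# A rough pandas equivalent of the Excel practice described above:
# consolidate per-unit forecasts, then apply one blanket scaling factor
# across every SKU. File names and the 1.05 factor are hypothetical.
import glob
import pandas as pd

# Each sales unit submits a file with columns: sku, forecast_units
frames = [pd.read_excel(path) for path in glob.glob("forecasts/unit_*.xlsx")]
consolidated = (
    pd.concat(frames)
    .groupby("sku", as_index=False)["forecast_units"]
    .sum()
)

# "Direction proposed by management": one uniform adjustment for all SKUs
MANAGEMENT_SCALING = 1.05
consolidated["adjusted_forecast"] = (
    consolidated["forecast_units"] * MANAGEMENT_SCALING
)
```

A single multiplier treats every SKU, region, and season identically, which is exactly why the approach falls short.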
The process has inherent problems. Past data on a standalone basis is not sufficient to generate an accurate sales forecast. A good sales forecast should factor in the following:
1.) Correlation between different SKU sales, to identify cannibalization (this can be done for each region)
2.) Sales seasonality
3.) Sales boost due to promotions and discounts
My favourite parameter to factor in is sales lost due to non-availability of stock. This is where most ERP software and Excel forecasts struggle. How do you model cases where the sale was nil because of low demand versus a stock-out?!
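One simple way to see the difference is to treat sales on stock-out days as censored observations rather than true zeros. The sketch below assumes an in_stock flag is recorded alongside daily sales; a production model would go further (e.g. a Tobit-style censored regression), but even this crude correction exposes the bias.

```python
# A minimal sketch of separating "no demand" from "no stock": zeros on
# stock-out days are censored, not real. Column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "units_sold": [12, 0, 0, 15, 9, 0],
    "in_stock":   [True, True, False, True, True, False],
})

# Naive Excel/ERP view: every zero counts as zero demand
naive_rate = df["units_sold"].mean()                          # 6.0 units/day

# Censoring-aware view: estimate demand only from on-shelf days,
# then impute that rate for the stock-out days
on_shelf_rate = df.loc[df["in_stock"], "units_sold"].mean()   # 9.0 units/day
corrected = df["units_sold"].where(df["in_stock"], on_shelf_rate)

print(naive_rate, corrected.mean())  # the naive view underestimates demand
```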
S&OP depends upon these forecasts to make production plans. A typical cycle from sales forecast to production plan is 3–4 weeks long. That means any correction to the forecast cannot be factored in for 3–4 weeks. Good luck with those unsold inventories!
If a poor sales forecast affects the top line of the company, a reactive approach to procurement affects the bottom line. I have studied the procurement methodologies of some leading manufacturers in great detail. Two words I would use to describe the present state of procurement are reactive and laborious.
Most CPOs converge on the understanding that online platforms are the answer to the laborious aspect. I disagree; most online platforms are mere data-entry user interfaces. Also, during the development phase of such platforms, ownership becomes an issue: it sits either with an internal IT team or with an external consulting agency. Outsourcing your problem to a different team doesn't solve the issue; that is being lazy. ERP enthusiasts will present a counter-narrative of today's MRP/MRP II modules, but I am skeptical about the garbage-in, garbage-out of today's "one size fits all" ERP suites.
The world is quickly moving from statistics to calculus. The majority of current ERP software implementations are statistics-based and will become irrelevant in 2022 if a transition is not made. Most tools and analyses in the procurement function are still built in Excel with short-term measurement tools such as bar charts, weighted averages, cost indices, and shares, which is reactive.
THE WAY FORWARD
The future of operations in 2022 looks radically different from that of today. Data science and machine learning will become prerequisite skills for purchasing and operations managers. And yes, everybody will need to learn the basics of computer programming.
Operations in 2022 will be shaped into an autonomous, data-driven, and, most importantly, reliable activity. Only the definition of objectives will be determined by managers; everything else will run autonomously. That's OPS 2.0 for me!
How to Start the Journey?
In my view, the manufacturing operations industry can learn a lot from internet B2C companies, which have already started the journey toward OPS 2.0. The answer? Machine learning.
In OPS 2.0, the "Leaky Buckets" structure will be replaced with three simple verticals: pre-production, production, and post-production. Linking these three verticals will be the service layer of data engines.
Three principal areas required for the data management journey are:
1.) Data Organization
2.) Data Analytics
3.) Data Availability
Data Organization refers to the consolidation of all the different data sources into one database. With the diversification of teams, this is the most time-intensive operation.
Multiple Excel files, the same information in different formats, different assumption sets on the same problems, and multiple stakeholders are just a few factors that put high stress on taking up the OPS 2.0 journey.
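As a sketch of what that consolidation step can look like in practice, the snippet below normalizes scattered Excel files into a single SQLite table that every stakeholder reads from. The file paths, column aliases, and schema are illustrative assumptions.

```python
# A minimal sketch of Data Organization: many Excel files with inconsistent
# column names are normalized into one shared database table.
# Paths, aliases, and the schema are hypothetical.
import glob
import sqlite3
import pandas as pd

# The same information arrives under different headers from different teams
COLUMN_ALIASES = {
    "SKU Code": "sku", "Item": "sku",
    "Qty": "quantity", "Units": "quantity",
}

frames = []
for path in glob.glob("data/*.xlsx"):
    df = pd.read_excel(path).rename(columns=COLUMN_ALIASES)
    df["source_file"] = path            # keep lineage for later audits
    frames.append(df[["sku", "quantity", "source_file"]])

with sqlite3.connect("operations.db") as conn:
    pd.concat(frames).to_sql("inventory", conn,
                             if_exists="replace", index=False)
```

The point is less the specific tools than the outcome: one table, one set of assumptions, visible lineage.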
Step 1 is collaboration. Online platforms are a good step forward, but collaboration is the step to begin with. "One-time correct information accessible to every stakeholder" is the motto to adopt.
Step 2 is where the core domain knowledge of CPOs, business managers, procurement managers, and operations managers is required. Business analytics will be powered by machine learning algorithms. No algorithm is perfect, and even when using a standard algorithm, one has to choose the hyperparameters according to the business environment.
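Here is a minimal sketch of what that choice looks like in code, assuming a demand-forecasting regressor and synthetic features (seasonality, promotions, region); the model, grid, and data are illustrative, not a prescription.

```python
# A minimal sketch of tuning hyperparameters to the business environment
# rather than accepting library defaults. Features and grid are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Synthetic stand-in for monthly sales history with three features
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))                    # e.g. season, promo, region
y = X @ np.array([5.0, 2.0, 1.0]) + rng.normal(size=120)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [2, 3],          # shallow trees suit small, noisy sales data
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid,
    cv=TimeSeriesSplit(n_splits=5),      # respect time order of sales history
    scoring="neg_mean_absolute_error",   # error in units, a business metric
)
search.fit(X, y)
print(search.best_params_)
```

The business judgment lives in the grid, the validation scheme, and the scoring metric; none of those can be outsourced to defaults.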
Machine learning is all about optimization, but it has to be guided by an objective or, in the term data scientists use, a cost function, which has to be optimized whether the task is regression or classification. If you are counting on your IT team to implement a one-size-fits-all machine learning algorithm, prepare to be disappointed.
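As a toy illustration of "guided by a cost function", the sketch below fits a per-unit cost rate by gradient descent on a mean-squared-error objective. The invoice numbers are made up; the point is that the objective, chosen by the business, defines what "optimal" means.

```python
# A toy cost-function example: learn a cost-per-unit rate by gradient
# descent on mean-squared error. All figures are hypothetical.
import numpy as np

quantities = np.array([10.0, 25.0, 40.0, 55.0])     # units purchased
spend      = np.array([52.0, 123.0, 205.0, 271.0])  # invoiced spend

rate = 0.0          # parameter to learn: cost per unit
lr = 0.0005         # learning rate
for _ in range(2000):
    residual = rate * quantities - spend          # prediction error
    grad = 2 * np.mean(residual * quantities)     # d(MSE)/d(rate)
    rate -= lr * grad

print(rate)   # converges near the least-squares rate (~4.99 per unit)
```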
In each of the three verticals, there will be dedicated teams: data scientists for algorithm development, plus business managers who are hands-on with Python and minimal coding, plus an IT team for making the information accessible. This is where OPS 2.0 will be driven by data-layer-as-a-service deployment. I wish you all the best for the OPS 2.0 journey!