Abstract
Existing approaches to the economic design of control charts assume specific quality distributions, restrict parameter choices, and over-rely on historical samples, hindering companies from accurately determining the most economical parameters. Leveraging industrial big data, we propose a data-driven, mixed-integer linear programming model for the economic design of adaptive control charts. Control limits are designed dynamically as a function of features to minimise quality costs. Considering the trade-off between false alarms and penalty costs, we develop three models: a basic model incorporating big data, a model with cost-penalised features, and a model that uses regularisation to manage overfitting. We simulate the model using new performance measures. Our findings demonstrate the economic value of adaptive control limit strategies that incorporate feature data compared with benchmarks. We extend the model to a framework with endogenous sample size and sampling interval, further demonstrating the superiority of our approach. A case study using real-world data from a casting company reveals that our approach yields a 24.6% reduction in costs relative to the company's existing quality control protocols. By operationalising big data, our approach enables manufacturers to make strategic decisions about quality control and thereby reduce quality costs.
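To make the core idea concrete, the sketch below illustrates, in a heavily simplified form, how a feature-dependent control limit can be fitted as a mixed-integer linear programme that trades off false-alarm cost against the penalty for undetected out-of-control samples. This is an assumption-laden toy example, not the paper's actual formulation: the data, cost values, big-M linearisation, and all variable names are hypothetical and chosen only to show the general mechanism.

```python
# Illustrative sketch only: a toy MILP in the spirit of the abstract, where the
# upper control limit is an affine function of process features and its
# coefficients are chosen to balance false-alarm cost against the penalty for
# missed out-of-control signals. All data and cost values are hypothetical.
import random
import pulp

random.seed(0)

# Hypothetical historical sample: features, monitored statistic, true state.
T, K = 60, 2                                                  # samples, features
X = [[random.uniform(0, 1) for _ in range(K)] for _ in range(T)]
state = [1 if random.random() < 0.2 else 0 for _ in range(T)]  # 1 = out of control
y = [sum(x) + (1.5 if s else 0.0) + random.gauss(0, 0.3) for x, s in zip(X, state)]

c_fa, c_pen, M = 1.0, 5.0, 100.0   # assumed false-alarm cost, penalty cost, big-M

prob = pulp.LpProblem("adaptive_control_limit", pulp.LpMinimize)
b0 = pulp.LpVariable("b0", lowBound=-10, upBound=10)
b = [pulp.LpVariable(f"b_{j}", lowBound=-10, upBound=10) for j in range(K)]
z = [pulp.LpVariable(f"alarm_{t}", cat="Binary") for t in range(T)]

for t in range(T):
    # Feature-dependent upper control limit for sample t.
    ucl_t = b0 + pulp.lpSum(b[j] * X[t][j] for j in range(K))
    prob += y[t] - ucl_t <= M * z[t]          # statistic above the limit forces an alarm
    prob += ucl_t - y[t] <= M * (1 - z[t])    # an alarm requires the statistic above the limit

# False-alarm cost on in-control samples, penalty on undetected out-of-control samples.
prob += (pulp.lpSum(c_fa * z[t] for t in range(T) if state[t] == 0)
         + pulp.lpSum(c_pen * (1 - z[t]) for t in range(T) if state[t] == 1))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("limit coefficients:", pulp.value(b0), [pulp.value(bj) for bj in b])
```

In this toy setting the solver picks the limit coefficients that minimise total expected quality cost on the historical data; the paper's models additionally handle cost-penalised features, regularisation against overfitting, and endogenous sample size and sampling interval, none of which are represented here.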