
Classification - Tool Failure Prediction

Predicting machine failure using operational sensor data and machine characteristics.

Dataset Source: Industrial Equipment Monitoring Dataset
Problem Type: Binary Classification
Target Variable: Machine failure (0 = Normal operation, 1 = Failure)
Use Case: Predictive maintenance for industrial equipment to prevent unexpected downtime and optimize maintenance schedules

Package Imports

!pip install xplainable
!pip install xplainable-client
import pandas as pd
import xplainable as xp
from xplainable.core.models import XClassifier
from xplainable.core.optimisation.bayesian import XParamOptimiser
from xplainable.preprocessing.pipeline import XPipeline
from xplainable.preprocessing import transformers as xtf
from sklearn.model_selection import train_test_split
import requests
import json

from xplainable_client.client.client import XplainableClient
from xplainable_client.client.base import XplainableAPIError

Xplainable Cloud Setup

# Initialize Xplainable Cloud client
client = XplainableClient(
    api_key="",  # Create an API key in Xplainable Cloud - https://platform.xplainable.io/
    hostname="https://platform.xplainable.io"
)
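Rather than pasting the key directly into the notebook, it can be read from the environment. This is a minimal sketch, assuming the key is stored in a hypothetical `XPLAINABLE_API_KEY` environment variable (the variable name is an illustration, not part of the library):

```python
import os

# Assumption: the API key lives in an XPLAINABLE_API_KEY environment
# variable, so the notebook can be shared without leaking credentials.
api_key = os.environ.get("XPLAINABLE_API_KEY", "")

# The key can then be passed to the client as above:
# client = XplainableClient(api_key=api_key,
#                           hostname="https://platform.xplainable.io")
print("API key loaded:", bool(api_key))
```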

Data Loading and Exploration

# Load dataset
df = pd.read_csv("https://xplainable-public-storage.syd1.digitaloceanspaces.com/example_data/asset_failure.csv")

# Display basic information
print(f"Dataset shape: {df.shape}")
print(f"Target distribution:\n{df['Machine failure'].value_counts()}")
Out:

Dataset shape: (10000, 14)
Target distribution:
Machine failure
0    9661
1     339
Name: count, dtype: int64
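The target is heavily imbalanced: failures make up only a few percent of the records. A quick check of the imbalance, computed from the class counts printed above:

```python
import pandas as pd

# Class counts reported above: 9,661 normal runs vs 339 failures
counts = pd.Series({0: 9661, 1: 339}, name="Machine failure")

failure_rate = counts[1] / counts.sum()
imbalance_ratio = counts[0] / counts[1]

print(f"Failure rate: {failure_rate:.2%}")        # Failure rate: 3.39%
print(f"Imbalance ratio: {imbalance_ratio:.1f} : 1")  # Imbalance ratio: 28.5 : 1
```

With roughly 28 negatives per positive, accuracy alone would be a misleading metric, which is worth keeping in mind when evaluating the model later.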

df.head()
   UDI Product ID Type  Air temperature [K]  Process temperature [K]  Rotational speed [rpm]  Torque [Nm]  Tool wear [min]  Machine failure  TWF  HDF  PWF  OSF  RNF
0    1     M14860    M                298.1                    308.6                    1551         42.8                0                0    0    0    0    0    0
1    2     L47181    L                298.2                    308.7                    1408         46.3                3                0    0    0    0    0    0
2    3     L47182    L                298.1                    308.5                    1498         49.4                5                0    0    0    0    0    0
3    4     L47183    L                298.2                    308.6                    1433         39.5                7                0    0    0    0    0    0
4    5     L47184    L                298.2                    308.7                    1408         40.0                9                0    0    0    0    0    0

Dataset Overview: Machine Failure Prediction

This dataset is designed for predictive maintenance, focusing on machine failure prediction. Below is an overview of its structure and the data it contains:

  1. UDI (Unique Identifier): A column for unique identification numbers for each record.

  2. Product ID: Identifier for the product being produced or involved in the process.

  3. Type: Indicates the type or category of the product or process, with different types represented by different letters (e.g., 'M', 'L').

  4. Air temperature [K] (Kelvin): The temperature of the air in the environment where the machine operates, measured in Kelvin.

  5. Process temperature [K] (Kelvin): The operational temperature of the process or machine, also measured in Kelvin.

  6. Rotational speed [rpm] (Revolutions per Minute): This column shows the speed at which a component of the machine is rotating.

  7. Torque [Nm] (Newton Meters): The torque being applied in the process, measured in Newton meters.

  8. Tool wear [min]: Indicates the amount of wear on the tools used in the machine, measured in minutes of operation.

  9. Machine failure: A binary indicator (0 or 1) showing whether a machine failure occurred.

  10. TWF (Tool Wear Failure): Specific indicator of failure due to tool wear.

  11. HDF (Heat Dissipation Failure): Indicates failure due to ineffective heat dissipation.

  12. PWF (Power Failure): Shows whether a failure was due to power issues.

  13. OSF (Overstrain Failure): Indicates if the failure was due to overstraining of the machine components.

  14. RNF (Random Failure): A column for failures that don't fit into the other specified categories and are considered random.

Each row of the dataset represents a unique instance or record of the production process, with the corresponding measurements and failure indicators. This data can be used to train machine learning models to predict machine failures based on these parameters.
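Note that the five failure-mode columns (TWF, HDF, PWF, OSF, RNF) decompose the target itself: if any of them is set, a machine failure was recorded. Keeping them as features would leak the label into the model, which is why they are dropped in the next cell. A small sketch on hypothetical rows mirroring the schema illustrates the leakage:

```python
import pandas as pd

# Hypothetical mini-frame mirroring the schema above: the mode flags
# decompose the target, so keeping them as features would let the
# model read the label directly.
mini = pd.DataFrame({
    "Machine failure": [0, 1, 1],
    "TWF": [0, 1, 0], "HDF": [0, 0, 1],
    "PWF": [0, 0, 0], "OSF": [0, 0, 0], "RNF": [0, 0, 0],
})

modes = ["TWF", "HDF", "PWF", "OSF", "RNF"]
leaks = (mini[modes].max(axis=1) == mini["Machine failure"]).all()
print(leaks)  # True on this toy sample: the flags reconstruct the target
```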

df = df.drop(columns=["Product ID", "UDI", "TWF", "HDF", "PWF", "OSF", "RNF"])
df
     Type  Air temperature [K]  Process temperature [K]  Rotational speed [rpm]  Torque [Nm]  Tool wear [min]  Machine failure
0       M                298.1                    308.6                    1551         42.8                0                0
1       L                298.2                    308.7                    1408         46.3                3                0
2       L                298.1                    308.5                    1498         49.4                5                0
3       L                298.2                    308.6                    1433         39.5                7                0
4       L                298.2                    308.7                    1408         40.0                9                0
...   ...                  ...                      ...                     ...          ...              ...              ...
9995    M                298.8                    308.4                    1604         29.5               14                0
9996    H                298.9                    308.4                    1632         31.8               17                0
9997    M                299.0                    308.6                    1645         33.4               22                0
9998    H                299.0                    308.7                    1408         48.5               25                0
9999    M                299.0                    308.7                    1500         40.2               30                0

[10000 rows x 7 columns]
df["Machine failure"].value_counts()
Out:

Machine failure
0    9661
1     339
Name: count, dtype: int64

X, y = df.drop(columns=['Machine failure']), df['Machine failure']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)
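Given the class imbalance noted earlier, an optional variation is to pass `stratify=y` so the failure rate stays consistent between the train and test folds. A self-contained sketch on toy stand-ins for `X` and `y` (~3% positives, like this dataset):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-ins for X and y with ~3% positives; stratify=y keeps the
# positive rate nearly identical across the two folds.
y = pd.Series([1] * 30 + [0] * 970)
X = pd.DataFrame({"Torque [Nm]": range(1000)})

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y)

print(f"train positives: {y_tr.mean():.3f}, test positives: {y_te.mean():.3f}")
```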

1. Data Preprocessing

No further preprocessing is needed here beyond the column drops above, so we move straight to hyperparameter optimisation.

2. Model Optimization

opt = XParamOptimiser()
params = opt.optimise(X_train, y_train)
Out:

100%|██████████| 30/30 [00:02<00:00, 14.47trial/s, best loss: -0.9315957452699021]

3. Model Training

model = XClassifier(**params)
model.fit(X_train, y_train)
Out:

<xplainable.core.ml.classification.XClassifier at 0x2a0d61de0>

4. Model Interpretability and Explainability

model.explain()
params = {
    "max_depth": 4,
    "min_info_gain": 0.05,
}

model.update_feature_params(
    features=['Rotational speed [rpm]', 'Tool wear [min]', 'Air temperature [K]',
              'Process temperature [K]', 'Torque [Nm]'],
    **params
)
Out:

<xplainable.core.ml.classification.XClassifier at 0x2a0d61de0>

model.explain()

This snapshot demonstrates the impact of hyperparameter tuning on model interpretability. Adjusting max_depth and min_info_gain changes the maximum depth of each feature's decision structure and the minimum information gain required for a split, which in turn recalibrates the feature score contributions. These scores, essential for understanding how each feature drives a prediction, can be visualised before and after the adjustment to show how the model's internal logic shifts. This process enhances transparency, helps pinpoint influential features, and supports the development of interpretable, trustworthy models.

5. Model Persistence

# Create a model
try:
    model_id, version_id = client.models.create_model(
        model=model,
        model_name="Asset Failure Prediction",
        model_description="Using machine metadata to predict asset failures",
        x=X_train,
        y=y_train
    )
except XplainableAPIError as e:
    print(f"Error creating model: {e}")
Out:

0%| | 0/6 [00:00<?, ?it/s]

6. Model Deployment

The code block illustrates the deployment of our prediction model using the client.deployments.deploy function. The deployment process involves specifying the unique model_version_id that we obtained in the previous steps. This step effectively activates the model's endpoint, allowing it to receive and process prediction requests. The deployment response confirms the successful deployment with a deployment_id and other relevant information.

try:
    deployment_response = client.deployments.deploy(
        model_version_id=version_id  # <- Use the version id produced above
    )
    deployment_id = deployment_response.deployment_id
except XplainableAPIError as e:
    print(f"Error deploying model: {e}")

Testing the Deployment Programmatically

This section demonstrates the steps taken to programmatically test a deployed model. These steps are essential for validating that the model's deployment is functional and ready to process incoming prediction requests.

  1. Activating the Deployment: The model deployment is activated using client.deployments.activate_deployment, which changes the deployment status to active, allowing it to accept prediction requests.

try:
    client.deployments.activate_deployment(deployment_id=deployment_id)
except XplainableAPIError as e:
    print(f"Error activating deployment: {e}")
  2. Creating a Deployment Key: A deployment key is generated with client.deployments.generate_deploy_key. This key is required to authenticate and make secure requests to the deployed model.

try:
    deploy_key = client.deployments.generate_deploy_key(
        deployment_id=deployment_id,
        description='API key for Tool Failure Prediction',
        days_until_expiry=7
    )
    print(f"Deploy key created: {str(deploy_key)}")
except XplainableAPIError as e:
    print(f"Error generating deploy key: {e}")
Out:

Deploy key created: 76a66348-5af7-471e-9b4a-b18233ce4325

  3. Generating an Example Payload: An example payload for a deployment request is generated by client.deployments.generate_example_deployment_payload. This payload mimics the input data structure the model expects when making predictions.

# Set the option to highlight multiple ways of creating data
option = 2
if option == 1:
    try:
        body = client.deployments.generate_example_deployment_payload(
            model_version_id=version_id
        )
    except XplainableAPIError as e:
        print(f"Error generating example payload: {e}")
        body = []
else:
    body = json.loads(df.drop(columns=["Machine failure"]).sample(1).to_json(orient="records"))
body
body
Out:

[{'Type': 'L',
  'Air temperature [K]': 300.8,
  'Process temperature [K]': 312.0,
  'Rotational speed [rpm]': 1374,
  'Torque [Nm]': 50.2,
  'Tool wear [min]': 154}]
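The same `to_json(orient="records")` pattern scales to batch requests: a multi-row DataFrame becomes a list of records, which is the shape the endpoint expects. A sketch on hypothetical sample rows (the values below are illustrative, not from the live dataset):

```python
import json
import pandas as pd

# Hypothetical two-row batch in the same schema as the payload above,
# with the target column already excluded.
sample = pd.DataFrame({
    "Type": ["L", "M"],
    "Air temperature [K]": [300.8, 298.1],
    "Process temperature [K]": [312.0, 308.6],
    "Rotational speed [rpm]": [1374, 1551],
    "Torque [Nm]": [50.2, 42.8],
    "Tool wear [min]": [154, 0],
})

# Each DataFrame row becomes one record (dict) in the request body.
body = json.loads(sample.to_json(orient="records"))
print(len(body), body[0]["Type"])  # 2 L
```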

  4. Making a Prediction Request: A POST request is made to the model's prediction endpoint with the example payload. The model processes the input data and returns a prediction response, which includes the predicted class (e.g., 0 for no failure) and the prediction probabilities for each class.

response = requests.post(
    url="https://inference.xplainable.io/v1/predict",
    headers={'api_key': str(deploy_key)},
    json=body
)

value = response.json()
value
Out:

[{'index': 0,
  'id': None,
  'partition': '__dataset__',
  'score': 0.2271685523961836,
  'proba': 0.06946049314245507,
  'pred': 0,
  'support': 331,
  'breakdown': [{'feature': 'base_value', 'value': None, 'score': 0.035522388059701496},
   {'feature': 'Type', 'value': 'L', 'score': 0.024408834293742288},
   {'feature': 'Air temperature [K]', 'value': '300.8', 'score': 0.0},
   {'feature': 'Process temperature [K]', 'value': '312', 'score': 0.0},
   {'feature': 'Rotational speed [rpm]', 'value': '1374', 'score': 0.18484190508536333},
   {'feature': 'Torque [Nm]', 'value': '50.2', 'score': -0.011265812711409608},
   {'feature': 'Tool wear [min]', 'value': '154', 'score': -0.0063387623312138736}]}]
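The per-feature breakdown in the response appears to be additive: summing the contributions, including `base_value`, reproduces the overall `score`. A quick check using the values from the response above:

```python
# Contribution scores copied from the prediction response above.
breakdown = [
    0.035522388059701496,    # base_value
    0.024408834293742288,    # Type = 'L'
    0.0,                     # Air temperature [K] = 300.8
    0.0,                     # Process temperature [K] = 312
    0.18484190508536333,     # Rotational speed [rpm] = 1374
    -0.011265812711409608,   # Torque [Nm] = 50.2
    -0.0063387623312138736,  # Tool wear [min] = 154
]

total = sum(breakdown)
print(round(total, 6))  # 0.227169 — matches the response's 'score' field
```

Being able to reconcile the top-level score with its per-feature breakdown is a useful sanity check when auditing individual predictions.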