To start, I have a confession: I enjoy creating features too much. In almost all of my data science projects, I end up creating thousands of features. Once that happens, I am left with the challenge of selecting a small subset of those features for my final modeling (a topic for a different blog).
I find this aspect of a project to be my best opportunity to use my creativity. I express this creativity in two forms. The first is to apply my understanding of the business and the data to build features that capture business information and previously identified patterns in the data. The second is to apply my mathematics and science training to build features that have shown benefit in related problems.
These features are my best opportunity to improve the performance of my final models and drive real business value with my work. Sadly, this aspect of the predictive analytics lifecycle is rarely discussed. When talking with individuals new to the field, I find they often understand neither its importance nor how to go about engineering features.
In this blog, I will show how to sequentially build meaningful features for a project to predict time to failure in hard drives using data provided by Backblaze. I will also be using the open-source package RasgoQL to execute SQL on my data warehouse directly from my local machine. For this analysis, we will assume the prediction is made once a week, so we will aggregate the data weekly.
Once this is done, I show how this approach can be applied to generate 10,000 features from the Backblaze data in under four hours. If you would like to try this yourself, Backblaze makes this data freely available here.
The Backblaze data contains the S.M.A.R.T. data for each hard drive each day. Backblaze has stated that they actively monitor five S.M.A.R.T. stats to help identify drives that are likely to fail: SMART 5, 187, 188, 197, and 198. Those are a natural place to start.
First, we need to connect with our data warehouse and get the table that holds the Backblaze data. I use Snowflake, but other warehouses are supported.
Next, before we aggregate the data to a weekly level, we want to extract the week from the date and rename that value to WEEK.
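To make the logic of this step concrete, here is a small pandas sketch of the equivalent transformation (the column names mirror the Backblaze schema, but the data and output name `WEEK` are illustrative, not RasgoQL's actual output):

```python
import pandas as pd

# A toy slice of the daily data; DATE and SERIAL_NUMBER mirror the Backblaze
# schema, but the values here are made up for illustration.
df = pd.DataFrame({
    "DATE": pd.to_datetime(["2021-01-04", "2021-01-05", "2021-01-11"]),
    "SERIAL_NUMBER": ["A1", "A1", "A1"],
    "SMART_5_RAW": [0, 1, 2],
})

# Truncate each date to the Monday that starts its week and store it as WEEK;
# in the warehouse this is the equivalent of DATE_TRUNC('week', DATE).
df["WEEK"] = df["DATE"].dt.to_period("W-SUN").dt.start_time
```

In the actual workflow, RasgoQL generates and runs the corresponding SQL in the warehouse rather than pulling the data down.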
Now we can aggregate the data by SERIAL_NUMBER (hard drive) and WEEK. We will calculate the weekly minimum, median, maximum, mean, and standard deviation for the first SMART value Backblaze mentioned: SMART_5_RAW (Reallocated Sectors Count).
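The aggregation itself reduces each drive-week to five summary statistics. A pandas sketch of that logic (with hypothetical values, and output column names chosen for illustration):

```python
import pandas as pd

# Toy daily data already labeled with WEEK (values are hypothetical).
df = pd.DataFrame({
    "SERIAL_NUMBER": ["A1"] * 3 + ["B2"] * 3,
    "WEEK": ["2021-01-04"] * 6,
    "SMART_5_RAW": [0, 2, 4, 1, 1, 7],
})

# Weekly min/median/max/mean/std per drive; in the warehouse this is a single
# GROUP BY SERIAL_NUMBER, WEEK with the corresponding aggregate functions.
weekly = (
    df.groupby(["SERIAL_NUMBER", "WEEK"])["SMART_5_RAW"]
      .agg(["min", "median", "max", "mean", "std"])
      .add_prefix("SMART_5_RAW_")
      .reset_index()
)
```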
This captures the most recent week of behavior for the reallocated sectors count, but in problems like these, the trend of this information over time is often most powerful. One of the most common ways of capturing this trend is to look at moving or rolling windows of data. In this case, we will calculate the minimum, average, and maximum for all of the weekly features created in the prior step over the last four and the last twelve weeks. In addition, we will calculate the standard deviation for the weekly mean value already calculated.
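A minimal pandas sketch of the rolling-window logic for one weekly feature (synthetic values; the 12-week versions are identical with a window of 12):

```python
import pandas as pd

# One drive's weekly mean reallocated-sector count (hypothetical values).
weekly = pd.DataFrame({
    "SERIAL_NUMBER": ["A1"] * 6,
    "WEEK": pd.date_range("2021-01-04", periods=6, freq="7D"),
    "SMART_5_RAW_MEAN": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
}).sort_values(["SERIAL_NUMBER", "WEEK"])

# Rolling 4-week min/mean/max/std computed within each drive; the warehouse
# equivalent is a window function partitioned by SERIAL_NUMBER, ordered by WEEK.
g = weekly.groupby("SERIAL_NUMBER")["SMART_5_RAW_MEAN"]
for stat in ("min", "mean", "max", "std"):
    weekly[f"SMART_5_RAW_MEAN_4WK_{stat.upper()}"] = g.transform(
        lambda s, stat=stat: s.rolling(4, min_periods=1).agg(stat)
    )
```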
The other common approach to capturing trends is using lagged features to include prior weeks' values in this week's data. The advantage of calculating the moving values is that they can also be lagged. We won't lag all of these calculated values, but we will lag the moving average of both the weekly average and the weekly median over the prior one through four weeks, eight weeks, twelve weeks, and sixteen weeks.
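In pandas terms, a lag is just a per-drive shift; a sketch with hypothetical values:

```python
import pandas as pd

# Hypothetical 4-week moving average of the weekly mean, for one drive.
weekly = pd.DataFrame({
    "SERIAL_NUMBER": ["A1"] * 6,
    "WEEK": pd.date_range("2021-01-04", periods=6, freq="7D"),
    "SMART_5_RAW_MEAN_4WK_AVG": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
})

# Shift within each drive so week t sees the value from week t-k;
# in SQL this is LAG(col, k) OVER (PARTITION BY SERIAL_NUMBER ORDER BY WEEK).
for k in (1, 2, 3, 4, 8, 12, 16):
    weekly[f"SMART_5_RAW_MEAN_4WK_AVG_LAG{k}"] = (
        weekly.groupby("SERIAL_NUMBER")["SMART_5_RAW_MEAN_4WK_AVG"].shift(k)
    )
```

Lags larger than a drive's observed history (here, 8 weeks and beyond on a 6-week toy frame) simply come back as nulls.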
Finally, these lag values can be combined with the current week's value to capture trends through the use of differences, ratios, and weighted moving averages. First, we define the mathematical operations we want to run.
And the names we want these features to have.
Then execute these math operations on the data.
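To illustrate what those operations compute, here is a pandas sketch of a 4-week difference, ratio, and weighted moving average (values and output names are hypothetical; the weights here are my own illustrative choice, weighting recent weeks more heavily):

```python
import numpy as np
import pandas as pd

# One drive's weekly mean (hypothetical values, nonzero to keep ratios finite).
weekly = pd.DataFrame({
    "SERIAL_NUMBER": ["A1"] * 6,
    "WEEK": pd.date_range("2021-01-04", periods=6, freq="7D"),
    "SMART_5_RAW_MEAN": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})
g = weekly.groupby("SERIAL_NUMBER")["SMART_5_RAW_MEAN"]

# Difference and ratio against the value four weeks earlier.
lag4 = g.shift(4)
weekly["SMART_5_RAW_MEAN_DIFF_4WK"] = weekly["SMART_5_RAW_MEAN"] - lag4
weekly["SMART_5_RAW_MEAN_RATIO_4WK"] = weekly["SMART_5_RAW_MEAN"] / lag4

# 4-week weighted moving average, with the most recent week weighted heaviest.
w = np.array([1.0, 2.0, 3.0, 4.0])
weekly["SMART_5_RAW_MEAN_WMA_4WK"] = g.transform(
    lambda s: s.rolling(4).apply(lambda x: (x * w).sum() / w.sum(), raw=True)
)
```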
We can now save the results of this work either as a view or table back on our data warehouse.
A sample of the records can be extracted by running preview, or the entire dataset can be downloaded as a pandas dataframe for further work.
This all happens in only a few minutes in the data warehouse, and I am ready to move on with the rest of my modeling. In the past, simply extracting this data (93 million observations) from the warehouse would have taken much longer, let alone the time to perform these calculations in pandas (if I didn’t run out of RAM first).
Additionally, the underlying SQL can be inspected at any point.
Descriptive statistics about these features can also be generated and printed.
Finally, in the past, one of the most painful parts of feature engineering was working with the data engineering team to convert my Python feature engineering code to SQL so they could run it in their dbt workflow. But with RasgoQL, I can export a dbt model that the data engineers can run in production.
Because this approach was systematic and each step built on the prior step, I can easily write a function to perform all of these steps automatically for any given column.
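A condensed pandas sketch of such a function, covering one representative of each step (weekly aggregates, a 4-week moving average, a 4-week lag, and a difference); the function name and output column names are my own, and the real version repeats these steps for every window, lag, and math operation described above:

```python
import pandas as pd

def build_features(daily: pd.DataFrame, col: str) -> pd.DataFrame:
    """Build weekly aggregates plus trend features for one raw column."""
    weekly = (
        daily.groupby(["SERIAL_NUMBER", "WEEK"])[col]
             .agg(["min", "median", "max", "mean", "std"])
             .add_prefix(f"{col}_")
             .reset_index()
             .sort_values(["SERIAL_NUMBER", "WEEK"])
    )
    mean_col = f"{col}_mean"
    g = weekly.groupby("SERIAL_NUMBER")[mean_col]
    # Rolling, lagged, and differenced versions of the weekly mean.
    weekly[f"{mean_col}_4wk_avg"] = g.transform(
        lambda s: s.rolling(4, min_periods=1).mean()
    )
    weekly[f"{mean_col}_lag4"] = g.shift(4)
    weekly[f"{mean_col}_diff4"] = weekly[mean_col] - weekly[f"{mean_col}_lag4"]
    return weekly

# Toy daily data: one drive, one observation per week (hypothetical values).
daily = pd.DataFrame({
    "SERIAL_NUMBER": ["A1"] * 5,
    "WEEK": pd.date_range("2021-01-04", periods=5, freq="7D"),
    "SMART_5_RAW": [0.0, 1.0, 2.0, 3.0, 4.0],
})
feats = build_features(daily, "SMART_5_RAW")
```

With a function like this in hand, running it over every raw column is a simple loop, which is exactly what makes generating thousands of features tractable.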
Backblaze uses the five listed features to identify soon-to-fail drives by flagging any drive with a nonzero value in any of them. To identify failures earlier, I want to explore other features to see if there is any signal to be found. For this reason, I run this function for all columns (all 124 raw SMART features).
And finally, I will extract all of the new features for modeling, or export dbt models for the data engineering team to place into production.