First things first. Because it may have been a while since those Six Sigma classes, let's take it from the top. If you gather together in one place a repeated collection of actions and variables in series, you have a process. If you gather together a collection of processes, you have a system. The process of turning raw data into useful information starts here.

Too Much Data, Not Enough Information

We're swimming in a sea of data, but only just; it would be closer to the truth to say that we're drowning in it. There are a number of reasons for this, but one of the bigger ones is the Internet of Things, or what Cisco calls the "Internet of Everything."

Since everything from clock radios to refrigerators and toasters, to the robots down on your assembly line and your fleet vehicles, can be on the internet with a variety of sensors attached, everything can be a source of data.

Data is ridiculously easy to collect. What's harder is turning it into something useful. With the above in mind, the process of taking all that raw data and turning it into good information begins with defining critical variables.

Defining Variables

To give you a simple example, if you've turned your fleet vehicles into internet-connected objects and fitted them with sensors, they probably tell you everything from each car's engine temperature, to its tire pressure, to how often your drivers run the air conditioning, to oil levels and miles driven. Let's say, though, that what you're really after is maximizing your fleet's fuel economy, and to do that, you settle on two specific bits of data to focus on.

First, you want to keep tabs on each car's tire pressure, and second, you want to keep tabs on average driving speed. You begin, then, by filtering out all the "noise" (the data you don't wish to use just yet) and distilling it down to the data points you do care about. It's still not information, but filtered data is better than unfiltered, as it gets you closer to something you can actually use.
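To make that concrete, here is a minimal sketch of the filtering step in Python. The field names, readings, and the two variables kept are hypothetical stand-ins for whatever your fleet sensors actually report.

```python
# Hypothetical raw telemetry records; field names are illustrative only.
raw_readings = [
    {"vehicle": "Vehicle 3", "date": "2024-05-01", "tire_pressure_psi": 34.0,
     "avg_speed_mph": 71, "engine_temp_f": 195, "oil_level_pct": 92},
    {"vehicle": "Vehicle 6", "date": "2024-05-01", "tire_pressure_psi": 27.5,
     "avg_speed_mph": 58, "engine_temp_f": 201, "oil_level_pct": 88},
]

# The two critical variables we settled on; everything else is "noise" for now.
KEEP = ("vehicle", "date", "tire_pressure_psi", "avg_speed_mph")

filtered = [{key: reading[key] for key in KEEP} for reading in raw_readings]
print(filtered)
```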

It’s As Easy As A Spreadsheet

In its simplest form, the easiest way to organize that data into useful information is the humble spreadsheet. Building on the example above, you might use specific vehicles and drivers as column headers, with date-sorted data entries per vehicle making up the rows. From there, patterns begin to emerge, and you see the first glimmers of actually useful information.
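If you prefer to build that view in code rather than by hand, here is a small sketch using pandas to pivot the filtered readings into a spreadsheet-like layout. The dates, vehicles, and pressure values are made up for illustration.

```python
import pandas as pd

# Hypothetical filtered readings: one row per vehicle per day.
filtered = pd.DataFrame([
    {"date": "2024-05-01", "vehicle": "Vehicle 3", "tire_pressure_psi": 34.0},
    {"date": "2024-05-01", "vehicle": "Vehicle 6", "tire_pressure_psi": 27.5},
    {"date": "2024-05-02", "vehicle": "Vehicle 3", "tire_pressure_psi": 33.5},
    {"date": "2024-05-02", "vehicle": "Vehicle 6", "tire_pressure_psi": 27.0},
])

# Vehicles as column headers, date-sorted entries as rows: the spreadsheet view.
sheet = filtered.pivot(index="date", columns="vehicle",
                       values="tire_pressure_psi").sort_index()
print(sheet)
```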

From here, for example, you can see that fleet vehicle 6 is running about 8 psi below optimal tire pressure, and has been for about two weeks. That's a fixable problem you didn't know about until just now. Likewise, maybe you've found that the driver of vehicle 3 has a lead foot: not only has he gotten three speeding tickets over the last eight months, but he's also driving an average of 11 miles per hour over the speed limit. Again, easily fixable.
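Spotting those patterns can itself be automated once thresholds are defined. The sketch below flags under-inflated vehicles and habitual speeders; the target pressure, speed limit, and readings are assumptions made for the sake of the example.

```python
import pandas as pd

# Hypothetical daily averages per vehicle; values are illustrative.
daily = pd.DataFrame([
    {"vehicle": "Vehicle 3", "tire_pressure_psi": 34.5, "avg_speed_mph": 71},
    {"vehicle": "Vehicle 3", "tire_pressure_psi": 34.0, "avg_speed_mph": 73},
    {"vehicle": "Vehicle 6", "tire_pressure_psi": 27.5, "avg_speed_mph": 58},
    {"vehicle": "Vehicle 6", "tire_pressure_psi": 27.0, "avg_speed_mph": 57},
])

OPTIMAL_PSI = 35.0  # assumed target tire pressure
SPEED_LIMIT = 60    # assumed typical posted limit

means = daily.groupby("vehicle").mean()
under_inflated = means[means["tire_pressure_psi"] < OPTIMAL_PSI - 5]
speeding = means[means["avg_speed_mph"] > SPEED_LIMIT]

print("Under-inflated:", list(under_inflated.index))  # e.g. ['Vehicle 6']
print("Over the limit:", list(speeding.index))        # e.g. ['Vehicle 3']
```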

So that's a simple example, but of course, your business isn't so simple. You're collecting orders of magnitude more data than in the example I just outlined, and to make sense of it, the first step, variable identification, remains the same, but the second step becomes more complex and time consuming. Now we're venturing into the realm of big data and business intelligence.

In-house Or Outsource?

You have two choices here. You can either pull together a dedicated team to handle the large-scale data processing you're going to require, or you can outsource that function to someone who specializes in it. This is yet another new frontier for outsourcing, and there are already companies that handle this kind of service with impressive efficiency.

Used with permission from Article Aggregator