Data profiling
From Wikipedia, the free encyclopedia
Purpose
Data profiling is the process of examining the data available in an existing data source (e.g. a database or a file) and collecting statistics and information about that data. The purpose of these statistics may be to:
- Find out whether existing data can easily be used for other purposes
- Give metrics on data quality, including whether the data conforms to particular standards or patterns
- Assess the risk involved in integrating data for new applications, including the challenges of joins
- Assess whether metadata accurately describes the actual values in the source database
- Understand data challenges early in any data-intensive project, so that late-project surprises are avoided. Finding data problems late in the project can lead to delays and cost overruns.
- Have an enterprise view of all data, for uses such as master data management, where key data is needed, or data governance, for improving data quality.
Some companies also look at data profiling as a way to involve business users in what traditionally has been an IT function. Line of business users can often provide context about the data, giving meaning to columns of data that are poorly defined by metadata and documentation.
Profiling metadata
Typical types of metadata sought are:
- Domain: whether the data in the column conforms to the defined values or range of values it is expected to take
- For example: ages of children in kindergarten are expected to be between 4 and 5. An age of 7 would be considered out of domain
- A code for flammable materials is expected to be A, B or C. A code of 3 would be considered out of domain.
- Type: Alphabetic or numeric
- Pattern: a North American phone number should match the pattern (999)999-9999
- Frequency counts: most of our customers should be in California; so the largest number of occurrences of state code should be CA
- Statistics:
- Minimum value
- Maximum value
- Mean value (average)
- Median value
- Modal value (mode)
- Standard deviation
- Interdependencies:
- Within a table: the ZIP code field always depends on the country code.
- Between tables: the customer number on an order should always appear in the customer table
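Several of the column-level checks above (domain, pattern, frequency counts, basic statistics) can be sketched in a few lines. The following is a minimal illustration, not a description of any particular profiling tool; the function name and sample data are hypothetical:

```python
import re
from collections import Counter
from statistics import mean, median, mode, stdev

def profile_column(values, pattern=None, domain=None):
    """Collect simple profiling metadata for a single column of data."""
    profile = {"count": len(values)}
    # Frequency counts: the most common values (e.g. state code CA)
    profile["top_values"] = Counter(values).most_common(3)
    # Domain check: flag values outside the expected set or range
    if domain is not None:
        profile["out_of_domain"] = [v for v in values if v not in domain]
    # Pattern check: flag values that do not match the expected format
    if pattern is not None:
        profile["pattern_violations"] = [
            v for v in values if not re.fullmatch(pattern, str(v))
        ]
    # Basic statistics, only meaningful for numeric columns
    numeric = [v for v in values if isinstance(v, (int, float))]
    if numeric:
        profile.update({
            "min": min(numeric),
            "max": max(numeric),
            "mean": mean(numeric),
            "median": median(numeric),
            "mode": mode(numeric),
            "stdev": stdev(numeric) if len(numeric) > 1 else 0.0,
        })
    return profile

# Kindergarten ages: 7 falls outside the expected domain of 4-5
ages = [4, 5, 5, 4, 7]
print(profile_column(ages, domain=range(4, 6)))
```

A real profiling tool computes these measures across every column of every table and stores the results as metadata for later analysis steps.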
Types of Data Profiling Analysis
Broadly speaking, most vendors who provide data profiling tools also provide data quality tools. They often divide the functionality into three categories. The names for these categories differ from vendor to vendor, but the overall process has three steps, which must be executed in order:
- Column Profiling, which includes the statistics and domain examples provided above. This may also cover primary key analysis, which confirms or identifies primary key candidates. Experience shows that in many large systems data quality is such that primary keys are sometimes not quite as unique as might have been hoped for: duplicates, NULLs, or otherwise malformed data may wreak havoc.
- Dependency Profiling, which identifies intra-table dependencies. Dependency profiling is related to the normalization of a data source, and addresses whether or not there are non-key attributes that determine or are dependent on other non-key attributes. The existence of transitive dependencies here is evidence that the source is not in third normal form.
- Redundancy Profiling, which identifies overlapping values between tables. This is typically used to identify candidate foreign keys within tables, to validate attributes that should be foreign keys (but that may not have constraints to enforce integrity), and to identify other areas of data redundancy. Example: redundancy analysis could provide the analyst with the fact that the ZIP field in table A contained the same values as the ZIP_CODE field in table B, 80% of the time.
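The dependency and redundancy checks above reduce to simple set and mapping operations. A minimal sketch, with hypothetical helper names and sample data (the 80% figure below mirrors the ZIP/ZIP_CODE example, not real measurements):

```python
def overlap_ratio(values_a, values_b):
    """Redundancy profiling: fraction of distinct values in column A that
    also appear in column B, used to spot candidate foreign keys."""
    distinct_a = set(values_a)
    if not distinct_a:
        return 0.0
    return len(distinct_a & set(values_b)) / len(distinct_a)

def holds_dependency(rows, determinant, dependent):
    """Dependency profiling: check whether `determinant` functionally
    determines `dependent` (each determinant value maps to exactly one
    dependent value across all rows)."""
    seen = {}
    for row in rows:
        key, val = row[determinant], row[dependent]
        if seen.setdefault(key, val) != val:
            return False
    return True

# ZIP values in table A vs. ZIP_CODE values in table B (hypothetical data):
# 4 of A's 5 distinct values appear in B, i.e. an 80% overlap.
zips_a = ["10001", "94105", "60601", "73301", "02139"]
zip_codes_b = ["10001", "94105", "60601", "73301", "99999"]
print(overlap_ratio(zips_a, zip_codes_b))  # prints 0.8

# zip -> city holds here; city -> zip would fail on real data
orders = [
    {"zip": "10001", "city": "New York"},
    {"zip": "94105", "city": "San Francisco"},
    {"zip": "10001", "city": "New York"},
]
print(holds_dependency(orders, "zip", "city"))  # prints True
```

Production tools apply the same idea at scale, comparing every plausible column pair rather than two hand-picked columns.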
Column profiling provides critical metadata which is required to perform dependency profiling, and as such must be executed before dependency profiling. Similarly, dependency profiling must be performed before redundancy profiling. While the output of the earlier steps may not be interesting to an analyst, depending on his or her purpose, the analyst will most likely be obliged to move through these steps anyway. Other information delivery mechanisms may exist, depending on the vendor. Some vendors also provide data quality dashboards so that upper management, data governance teams and C-level executives can track enterprise data quality. Still others provide mechanisms for the analysis results to be delivered via XML. Often, these same tools can be used for ongoing monitoring of data quality.
Open Source data profiling software
A number of open source applications exist to perform data profiling. The most notable ones are: