We’re seeing a rise in demand for data scientists, but finding usable data remains a challenge. How do organizations solve this and get a full picture by drawing on the multiple data points and sources across their tech stack?
A quick search for “Data Scientist” on LinkedIn yields 30,968 listings from companies seeking to fill recently vacated or newly created positions in the United States alone. According to Glassdoor, the national average salary for a Senior Data Scientist is $137,000, nearly three times the national median income for a salaried employee (per a Q4 2018 U.S. Bureau of Labor Statistics report). The benefits of using data science to create actionable efficiencies, cost savings and increased revenue are clear. And given the employment demand and salary expectations, commercial organizations and government agencies alike are plainly willing to invest in data scientists and data-driven initiatives.
But a fleet of data scientists on the payroll will have minimal effect on an organization if they are working with only a fraction of the data theoretically available. This dilemma stems from working exclusively with structured data rather than a combination of structured and unstructured data sets. Structured data is the information that lives in relational database applications; unstructured data, by contrast, accounts for over 80% of all data and is growing twice as fast. Unstructured data includes machine-readable text (the contents of emails, Word documents, PDF files and communications such as text and instant messages) as well as images, video, satellite imagery and surveillance data.
When running a predictive algorithm on consumer purchase trends for a manufacturing company, a data scientist is likely working only with the structured data generated by online credit card transactions, purchase metadata and information automatically logged and sent to the data warehouse. But what about all the information tracked outside that relational database? Perhaps there are product reviews logged on the company website, or purchaser complaints sent to customer support via email. There could be a revised agreement with a materials supplier that affects product market pricing.
With an intelligent capture engine interpreting and extracting data from unstructured documents and organizational content, these vital data points can be included, considered and mapped for predictive analysis. Data scientists gain a more comprehensive dataset to analyze and can base their results on all of the data available.
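To make the idea concrete, here is a minimal sketch in Python of how signals extracted from unstructured reviews might be joined to structured transaction data in the warehouse. The column names, sample records and crude keyword-based sentiment score are all illustrative assumptions, not a prescription for any particular capture product.

```python
# Minimal sketch: enriching structured transaction data with a signal
# extracted from unstructured product reviews. Column names, records and
# the keyword lexicon are illustrative assumptions.
import pandas as pd

# Structured data: what a typical data warehouse already holds.
transactions = pd.DataFrame({
    "product_id": ["A1", "A1", "B2", "B2"],
    "units_sold": [120, 95, 40, 55],
    "avg_price":  [19.99, 18.49, 34.00, 32.50],
})

# Unstructured data: free-text reviews captured from the company website.
reviews = pd.DataFrame({
    "product_id": ["A1", "B2", "B2"],
    "text": [
        "Great value, works perfectly.",
        "Broke after a week, very disappointed.",
        "Shipping was slow and the part felt cheap.",
    ],
})

NEGATIVE = {"broke", "disappointed", "slow", "cheap", "refund"}

def sentiment(text: str) -> float:
    """Crude lexicon score: negated fraction of words that are negative markers."""
    words = text.lower().split()
    return -sum(w.strip(".,") in NEGATIVE for w in words) / len(words)

reviews["sentiment"] = reviews["text"].map(sentiment)
signal = reviews.groupby("product_id", as_index=False)["sentiment"].mean()

# The merged frame gives the data scientist both worlds in one table.
enriched = transactions.merge(signal, on="product_id", how="left").fillna(0.0)
print(enriched)
```

In a real pipeline the sentiment column would come from the capture engine’s NLP output rather than a hand-built lexicon, but the join pattern is the same: unstructured content becomes one more feature alongside the warehouse data.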
A commonly held industry statistic asserts that 60% of all business processes involve documents. Given that such a large portion of organizational work relies on document-based processes, it would be an oversight to ignore critical text that could help organizations become more competitive or provide a better customer experience. Doesn’t it make sense to capture the valuable data locked away in documents and incorporate it into an organization’s data warehouse? Why pay top dollar for a data scientist to turn rows of data into actionable insights if they’re only accessing a sliver of the available information?
To truly realize the potential of any data-driven initiative or IT project, organizations should treat content capture as an integral component of their data strategy. Leading tools should be able to ingest and interpret documents regardless of whether they originate inside or outside the organization. Based on content categorization, a capture tool should automatically identify and extract key metadata for downstream data analysis.
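As a rough illustration of that categorize-then-extract pattern, the sketch below routes an incoming document to a category by keyword frequency, then applies category-specific extraction rules. The categories, keywords and regular expressions are invented for this example; a production capture tool would rely on trained classifiers rather than hand-written rules.

```python
# Illustrative sketch of the capture pattern described above: categorize an
# incoming document, then extract metadata fields according to its category.
# Category keywords and field patterns are assumptions, not a vendor's API.
import re

CATEGORY_KEYWORDS = {
    "invoice":  ["invoice", "amount due", "remit to"],
    "contract": ["agreement", "party", "term"],
    "support":  ["complaint", "refund", "not working"],
}

FIELD_PATTERNS = {
    "invoice":  {"total": r"amount due[:\s]+\$?([\d,.]+)"},
    "contract": {"effective_date": r"effective (?:as of )?(\w+ \d{1,2}, \d{4})"},
    "support":  {"order_id": r"order #?(\w+)"},
}

def categorize(text: str) -> str:
    """Pick the category whose keywords appear most often in the text."""
    lower = text.lower()
    return max(CATEGORY_KEYWORDS,
               key=lambda c: sum(lower.count(k) for k in CATEGORY_KEYWORDS[c]))

def extract_metadata(text: str) -> dict:
    """Route the document by category, then pull category-specific fields."""
    category = categorize(text)
    fields = {"category": category}
    for name, pattern in FIELD_PATTERNS[category].items():
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            fields[name] = match.group(1)
    return fields

print(extract_metadata("INVOICE no. 42\nAmount due: $1,250.00\nRemit to: ..."))
# {'category': 'invoice', 'total': '1,250.00'}
```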
In theory, these are straightforward, seemingly simple guidelines. In practice, content capture is challenging. Consider contracts as one type of unstructured document where valuable information is stored. No two contracts are alike. Even when they start from the same template (a lease, a rental agreement, a mutual non-disclosure), changes to the terms, changes to the contractual parties or addendums added by legal counsel can radically alter the nature of the document.
Collecting data from such variable documents requires machine learning, natural language processing and, at times, deep learning. Forward-thinking organizations will partner with and buy from capture vendors that use innovative technology to truly understand and organize unstructured content. Reaching the full potential of big data analysis will not only optimize the work of data scientists; it will yield benefits throughout the organization.
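For a sense of what that NLP tooling looks like in practice, the hedged sketch below uses spaCy’s pretrained English model to pull parties and dates out of a contract clause that no fixed template could anticipate. The clause text is invented, and the exact entities recognized will vary with the model used.

```python
# Hedged sketch: off-the-shelf NLP (spaCy, as one example of the machine
# learning tooling mentioned above) extracting parties and dates from
# free-form contract language. The clause below is an invented example.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

clause = ("This Agreement is entered into on March 1, 2019 by and between "
          "Acme Manufacturing, Inc. and Globex Logistics LLC.")

doc = nlp(clause)
for ent in doc.ents:
    if ent.label_ in {"ORG", "DATE"}:   # parties and effective dates
        print(f"{ent.label_:5} -> {ent.text}")

# Typical output (model-dependent):
# DATE  -> March 1, 2019
# ORG   -> Acme Manufacturing, Inc.
# ORG   -> Globex Logistics LLC
```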
Are you prepared for more data?