Watch the on-demand webinar


Download The Slide Deck

Hello all, my name is Steve Boals and I manage our RPA technology partners here at Ephesoft. In this webinar, I will discuss how you can enhance your digital workforce’s document vision through 4 essential keys, and show how the right document intelligence platform can improve robot automation and efficiency.

Before we get started, a quick overview of Ephesoft. If you aren’t familiar with us, we help you turn unstructured document chaos into actionable information, bringing structured order and, ultimately, data that enlightens your organization.

Enhancing Robot Vision

It is interesting that one of today’s most advanced process automation technologies, software robots, broadly relies on decades-old technology in its simplest form to drive document automation. Smart Capture platforms can go beyond basic OCR, augmenting your RPA efforts and enhancing robot vision. Before we look at the solution, let’s examine the current state and the problem.

Every organization has document pain, whether with physical documents, digital documents or, most likely, both. In the world of RPA, document-centric processes can be a key target. But to process documents and the unstructured content within, bots need clear, unobstructed vision to analyze document streams, make intelligent decisions based on the data and request help from humans if required.

Factors Driving Smart RPA Adoption

The Everest Group put out a great report on RPA called the “Smart RPA Enterprise Playbook.” Enhancing robot document vision can have a multiplier effect: in difficult enterprise document workflows, even a minor improvement can dramatically impact a wide variety of the factors the report identifies.

How Humans Process Documents

If we look at a typical organization, it receives documents from a wide variety of sources on a daily basis. This typically creates a multi-channel problem, and organizations struggle to create a streamlined flow for physical and digital documents, let alone one spanning fax, scanners, copiers, mobile, email and other sources. In processing, humans typically rely on a variety of workflows, both manual and automated, to work through inbound documents. Some organizations may augment and automate a piece of this with technology, but the core steps remain:

  • Documents are received, added, separated, sorted and routed
  • Important data is identified
  • Data is extracted and entered
  • And then data and documents may be validated, although this step is missing in quite a few organizations, causing additional problems

Hopefully, in the end, both the data and documents land in the correct system.

How Robots Process Documents

If we just swap out people for robots without enhancing vision, relying instead on legacy OCR, we essentially create digital confusion: bots have no way to accomplish all of these document processing steps and simply don’t have the intelligence to handle the flow. Outfitting our robots with dirty glasses is an inhibitor to maximizing efficiency and achieving peak automation. What are the roadblocks?

OCR Challenges

The majority of the OCR engines on the market are 10- to 12-year-old technologies, and they are quite good at what they were designed to do: convert images to text. But it is important to understand that OCR text results are only the baseline, or foundation, for document processing methods. Out of the box there is no context, no identification of document type and no data of interest. Many products take that text layer and provide basic pattern matching, or use templates or zoning to extract data. These technologies are ineffective with semi-structured and unstructured content, cause high error rates when classifying and require very high levels of human involvement in exceptions processing; added intelligence is necessary to achieve full automation. To handle the complex workflows that take RPA implementations to the next level, and to avoid digital workforce confusion and high exception rates, we need a smarter platform.
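To make the limitation concrete, here is a minimal sketch, in Python, of the kind of fixed-zone, template-based extraction these legacy products rely on. The coordinates, field names and sample OCR words are illustrative assumptions, not any vendor’s actual API.

```python
# A minimal sketch (not any vendor's API) of why fixed-zone, template-based
# extraction breaks down: fields are read from hard-coded coordinates, so any
# layout shift or new document variant returns the wrong text or nothing at all.

# Hypothetical OCR output: (text, x, y) word positions on one invoice layout.
ocr_words = [
    ("INVOICE", 40, 50), ("No:", 400, 50), ("10482", 450, 50),
    ("Total:", 400, 700), ("$1,250.00", 470, 700),
]

# Rigid template: one zone (bounding box) per field, tuned to a single layout.
template_zones = {
    "invoice_number": (430, 30, 520, 70),   # x1, y1, x2, y2
    "total_amount":   (450, 680, 560, 720),
}

def extract_by_zone(words, zones):
    """Return whatever OCR text happens to fall inside each zone."""
    results = {}
    for field, (x1, y1, x2, y2) in zones.items():
        hits = [t for (t, x, y) in words if x1 <= x <= x2 and y1 <= y <= y2]
        results[field] = " ".join(hits) or None
    return results

print(extract_by_zone(ocr_words, template_zones))
# Works only for this exact layout; a vendor who moves "Total:" up an inch
# silently breaks the template, which is why every new variant needs new zones.
```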

Smart RPA Platform

If you take another look at Everest’s playbook, Smart RPA relies on the intermingling of a wide variety of augmenting technologies that make the digital workforce smarter, faster, more accurate and less dependent on human involvement.

Now that we have an understanding of the issue and of what we need for optimum results, what are some keys to improving robot document awareness? I’ll touch on 4 essential areas that must be part of your clear robot vision strategy:

1. Beyond Legacy OCR: The first key is to go beyond basic, legacy OCR and adopt a smart capture platform to enhance our vision.

Modern smart capture platforms don’t look at documents as pure text but as a series of dimensions. Looking at documents in this manner enhances accuracy and leads to the desired results. It also allows for advanced methods of extraction that go beyond pure text matching. These enhanced extraction rules let us move away from rigid processing templates, allow for variations in documents and can apply across the entire document. Advanced platforms extend our processing reach to more complicated document types as well.
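As a rough illustration of what “rules that apply across the entire document” can look like, here is a minimal sketch of label-anchored extraction rules in Python. The field names, patterns and sample text are assumptions for illustration, not Ephesoft’s actual rule engine.

```python
import re

# A minimal sketch, not a product's rule engine: extraction rules anchored on
# labels and value patterns rather than fixed coordinates, so they tolerate
# layout variations and can be applied across the whole document text.

# Hypothetical rules: field name -> regex that finds a label plus its value.
rules = {
    "invoice_number": re.compile(r"Invoice\s*(?:No|Number)[.:]?\s*([A-Z0-9-]+)", re.I),
    "total_amount":   re.compile(r"(?:Total|Amount\s*Due)[.:]?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract(document_text: str) -> dict:
    """Apply every rule to the full text; position on the page no longer matters."""
    found = {}
    for field, pattern in rules.items():
        match = pattern.search(document_text)
        found[field] = match.group(1) if match else None
    return found

sample = "ACME Corp\nInvoice No: 10482\n...\nAmount Due: $1,250.00"
print(extract(sample))  # {'invoice_number': '10482', 'total_amount': '1,250.00'}
```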

2. Machine Learning: ML provides simplified setup through the use of sample documents. Upload your different document types, and the system builds a model for identification and classification. Along with simplified setup for admins, ML also provides a training mechanism for operators (digital and human), where items that are not recognized can be added so they are handled correctly the next time that type is processed.

Documents are difficult and cannot be treated like any other image. In my experience, there can be document form types that look exactly the same except for a difference in formatting or text. ML models that work well at telling a lion from a dog usually don’t pick up such subtle dimensional details. Therefore, a combination of text analysis and, potentially, a layered approach is required to meet our goals.

Analyzing beyond the image or text is a strong requirement: the document intelligence leveraged by the digital workforce needs to focus on document dimensions, not just plain words.
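To illustrate the “upload samples, build a model” idea, here is a minimal sketch that trains a small text-based document-type classifier with scikit-learn. The sample texts and labels are invented for illustration, not Ephesoft’s learning engine, and a production system would layer in the dimensional analysis described above rather than relying on text alone.

```python
# A minimal sketch of the idea behind "upload samples, get a classifier":
# train a text-based document-type model from a handful of labeled examples.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical OCR text from sample documents, labeled by document type.
samples = [
    ("Invoice No 10482 Amount Due 1,250.00 Remit to ACME", "invoice"),
    ("Invoice Number 99120 Total 310.75 Net 30 terms", "invoice"),
    ("Purchase Order 5521 Ship to Warehouse 7 Qty 40", "purchase_order"),
    ("PO 5530 Deliver by 06/01 Buyer J. Smith", "purchase_order"),
    ("Claim form policy number 88231 date of loss 03/14", "claim_form"),
    ("Policy 88419 claimant signature date of loss", "claim_form"),
]
texts, labels = zip(*samples)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Classify a new page; low-confidence pages can be routed to a human, and the
# correction added back as a new training sample for the next run.
page = "Invoice No 55821 Total 980.00"
probs = model.predict_proba([page])[0]
print(dict(zip(model.classes_, probs.round(2))), "->", model.predict([page])[0])
```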

3. Three Pillars of Document Capture:

  1. The first and most important pillar is classification. Just as a human knowledge worker would examine documents, the system applies its learned model to figure out where one document ends and another begins. It auto-classifies, or identifies, the document type and all the pages contained within. This function allows documents to be captured in bulk bundles, such as large PDFs or paper stacks, and does in short order the heavy lifting that would take a human worker extensive time and effort.
  2. Once the system has identified all the documents in the train of pages it receives, it splits them into individual sets, a process called separation. This is similar to the task a human would perform on a stack of documents that needs to be split. Separation provides the ability to output individual documents as the end result of the process. When you combine classification and separation in this automated form, using your machine-learned model, you create massive time savings and efficiency, especially when dealing with large multi-document PDFs or large volumes of paper files. This combined process also preps the documents for the next automated step.
  3. Manual data entry is the next human process we can tackle. Through rule sets tied to our classification, we can now auto-extract data of interest from the document. These rules can vary per document type, and the technology eliminates the need for a worker to manually enter data, name a document or create folders. This data is now available to the digital worker for further processing (a sketch of the combined classify, separate and extract flow follows this list).
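Here is a minimal sketch of the three pillars working together on a bulk bundle of pages. The page and document structures and the classify_page stand-in are hypothetical, not a product API; the point is the flow: classify, then separate, then extract.

```python
# A minimal sketch of classification -> separation -> extraction on a bundle.
from dataclasses import dataclass, field

@dataclass
class Page:
    text: str
    doc_type: str = ""

@dataclass
class Document:
    doc_type: str
    pages: list = field(default_factory=list)
    fields: dict = field(default_factory=dict)

def classify_page(page: Page) -> str:
    # Placeholder for the machine-learned model described above.
    return "invoice" if "invoice" in page.text.lower() else "purchase_order"

def separate(pages: list) -> list:
    """Start a new document each time the predicted type changes.
    (A real system also detects first pages, so two back-to-back documents
    of the same type are not merged.)"""
    documents = []
    for page in pages:
        page.doc_type = classify_page(page)
        if not documents or documents[-1].doc_type != page.doc_type:
            documents.append(Document(page.doc_type))
        documents[-1].pages.append(page)
    return documents

def extract(doc: Document) -> None:
    """Apply the rule set tied to the classification (see the regex rules above)."""
    doc.fields = {"first_line": doc.pages[0].text.splitlines()[0]}

bundle = [Page("Invoice No 10482\nTotal 1,250.00"), Page("Invoice page 2 of 2"),
          Page("Purchase Order 5521\nShip to Warehouse 7")]
for doc in separate(bundle):
    extract(doc)
    print(doc.doc_type, doc.fields)
```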

4. Exceptions Processing: The last key is that robots need a “phone a friend” option. When human intervention is required for an exception, it needs to be a controlled and seamless handoff. To validate and process exceptions, humans and robots need as much information as possible. With Smart Capture, rules can be built to catch documents that are missing pages, data that is low quality and data that doesn’t match what the machine expects. This process of validation and exceptions processing can eliminate errant data, format information and ensure high quality. It is the only step that requires human intervention, and these queues can be used to train the system further through an easy-to-use machine learning interface.
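As a rough sketch of what such validation rules can look like, the snippet below checks for missing pages, absent required fields and low-confidence values, and returns the reasons a document should be routed to a human review queue. The threshold, field names and queue are illustrative assumptions, not a product feature list.

```python
# A minimal sketch of validation and exception routing rules.
CONFIDENCE_THRESHOLD = 0.85
REQUIRED_FIELDS = ["invoice_number", "total_amount"]

def validate(doc_fields: dict, confidences: dict,
             expected_pages: int, actual_pages: int) -> list:
    """Return a list of exception reasons; empty means straight-through processing."""
    exceptions = []
    if actual_pages < expected_pages:
        exceptions.append(f"missing pages: got {actual_pages}, expected {expected_pages}")
    for name in REQUIRED_FIELDS:
        conf = confidences.get(name, 0.0)
        if not doc_fields.get(name):
            exceptions.append(f"required field absent: {name}")
        elif conf < CONFIDENCE_THRESHOLD:
            exceptions.append(f"low-confidence value for {name}: {conf:.2f}")
    return exceptions

issues = validate(
    doc_fields={"invoice_number": "10482", "total_amount": ""},
    confidences={"invoice_number": 0.97},
    expected_pages=2, actual_pages=1,
)
# Anything returned here goes to the human review queue, and the corrected
# values can be fed back to train the model for the next document of this type.
print(issues or "straight-through: hand results to the downstream robot")
```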

The Robot Sandwich

With the right technology, we can create what I call a “Robot Sandwich.” This method essentially has a software robot as the source of documents, feeding the document process, with the ability to engage human knowledge workers for quality assurance and/or exception processing. At the end of the process is another robot, waiting for the results. Once again, it’s a simple, seamless method and interface for getting input.
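To show the shape of the flow, here is a minimal sketch of the “Robot Sandwich” using plain in-memory queues. The producing and consuming robots and the capture step are hypothetical stand-ins for your RPA tool and capture platform, not real integrations.

```python
# A minimal sketch of the "Robot Sandwich" flow: robot in, humans only on
# exceptions, robot out.
from queue import Queue

inbound = Queue()       # filled by the upstream software robot (documents in)
human_review = Queue()  # exceptions parked for a knowledge worker
results = Queue()       # consumed by the downstream software robot (data out)

def capture(document: dict) -> dict:
    """Stand-in for classification, separation, extraction and validation."""
    document["exceptions"] = [] if document.get("total_amount") else ["total_amount missing"]
    return document

# Upstream robot drops two documents into the process.
inbound.put({"name": "invoice_10482.pdf", "total_amount": "1,250.00"})
inbound.put({"name": "invoice_10483.pdf"})

while not inbound.empty():
    doc = capture(inbound.get())
    (results if not doc["exceptions"] else human_review).put(doc)

print("to downstream robot:", results.qsize(), "| to human QA:", human_review.qsize())
```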

What’s the Solution?

Where can we find those 4 key elements? Ephesoft provides those and more to give your digital workforce clear document vision. Put simply, in the context of a robotic workflow, Ephesoft opens up robot vision to unstructured content in documents, providing classification of document types, separation of bundled PDFs and images, and extraction of the data of interest.

How is Ephesoft different? We provide the intelligence layer that sits on top of the basic foundation of OCR and pattern matching. Think of these features and functions as an upgrade to your robot OS, providing enhanced vision and document processing capabilities.


Watch the demo


Learn more:

Solution: RPA Document Intelligence

UiPath: Ephesoft Activity Set

Blue Prism: Ephesoft VBO Integration

Contact Us