OneSDK for OCR and biometrics

Customers can use the Biometrics component, the OCR component, or both via the OneSDK for IDV in their applications. The OneSDK can also be used for fraud detection based on device characteristics.

This section provides more information on how to use the FrankieOne IDV solution through OneSDK with OCR and/or Biometrics components.

Learn more by visiting the following topics:

Capturing document details via OCR

Using OneSDK with OCR and biometrics

Capturing document details via OCR

Use the OCR component provided by OneSDK to extract document details from a document image supplied by the user. The OCR component dynamically loads the OCR provider configured for your account, which can be modified even after your application is live in production.

To implement the OCR component, you build your own user interface to capture document images; OneSDK handles communicating with the FrankieOne platform and evaluating the verification results.

Implement an OCR flow by responding to events

A typical OCR flow consists of the following steps:

  1. Capture an image of a physical identity document.
  2. If required, capture a second image. For example, the back side of a driver's licence.
  3. Obtain document OCR results, with all data extracted from the images.

Each of these steps is managed by the OCR component as an event-driven interaction with the host UI. Trigger step 1 by calling the OCR component's start method; a different event is emitted for each step, and the host UI can react accordingly. The events are:

The input_required event

ocr.on('input_required', (info, status, provideFile) => {
  // your code here
});

The input_required event is emitted when the user needs to supply an image for the document. You can inspect the event's arguments to determine how to prompt the user.

Your event listener should be a callback of the form (info, status, provideFile) => void.

| Parameter | Description |
| --- | --- |
| info | An object describing the document image to request from the user. It contains two properties: documentType, which may be one of "PASSPORT" or "DRIVERS_LICENCE", and side, which may be one of "front" or "back". |
| status | The status of the OCR flow. See the table below. |
| provideFile | A function that accepts a single argument, file, of type File. |
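
For example, the host UI can forward a plain file input's selection to provideFile. This is a minimal sketch only; the element id document-input is an assumption for illustration.

ocr.on("input_required", (info, status, provideFile) => {
  // Forward the user's selected image to the OCR component
  const input = document.getElementById("document-input"); // assumed capture element
  input.onchange = () => provideFile(input.files[0]);
});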

The OCR flow status is described as a string constant, which you can access from the OCRStatuses object.

The OCRStatus type

| Status | Description |
| --- | --- |
| OCRStatus.WAITING_OCR_RUN | Waiting for OCR to be run on existing scans; most common after interrupted flows |
| OCRStatus.WAITING_BACK | Waiting for the back scan of a document |
| OCRStatus.WAITING_FRONT | Waiting for the front scan of a document |
| OCRStatus.COMPLETE | The current OCR process was completed |
| OCRStatus.DOCUMENTS_INVALID | The document type is invalid or couldn't be inferred; can happen with bad captures |
| OCRStatus.DOCUMENTS_UPLOAD_FAILED | The provided scan was rejected by the provider; usually means a low-quality capture |
| OCRStatus.PROVIDER_OFFLINE | The provider is not available at the moment |
| OCRStatus.FAILED_FILE_SIZE | The provided file is too large |
| OCRStatus.FAILED_FILE_FORMAT | The provided file format wasn't accepted |
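
As a sketch, these constants can drive user-facing prompts. The snippet below reads them from the component's statuses property, as the re-attempt example later in this section does; the showDialog helper is the same one used in that example.

const statusMessages = {
  [ocr.statuses.WAITING_FRONT]: "Please capture the front of your document.",
  [ocr.statuses.WAITING_BACK]: "Now capture the back of your document.",
  [ocr.statuses.DOCUMENTS_INVALID]: "We couldn't recognise that document. Please try again.",
  [ocr.statuses.DOCUMENTS_UPLOAD_FAILED]: "That image wasn't clear enough. Please try again.",
};

ocr.on("input_required", (info, status, provideFile) => {
  // Fall back to a generic prompt for statuses without a specific message
  showDialog(statusMessages[status] || "Please capture your document.", provideFile);
});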

The results event

ocr.on('results', (document) => {
  // your code here
});

The results event is emitted when OCR is complete and data has been extracted from the supplied images.

Your event listener should be a callback of the form (document) => void.

| Parameter | Description |
| --- | --- |
| document | A Document object. See the Response object section at the end of this page. |

The error event

ocr.on('error', (error) => {
  console.error(error.message)
});

The error event is called when the OCR component encounters a problem it cannot recover from.

Your event listener should be a callback of the form (error) => void.

| Parameter | Description |
| --- | --- |
| error | An object with a message property containing a string and a payload property containing an object with more details about the error. |

Access the OCR flow status

The following example accesses the status of the OCR flow.

const { getValue } = ocr.access('status');

const status = getValue();

Re-attempt an image capture

The image may be low quality, taken at a bad angle, or not contain the requested document, causing document detection to fail. In this case the input_required event may be called multiple times with the same info argument, giving the user an opportunity to capture the image again.

The following example uses the status argument to determine whether to prompt the user to try again.

ocr.on("input_required", (info, status, provideFile) => {
  
  let message = "";
  if (status === ocr.statuses.DOCUMENTS_INVALID) {
    // The OCR extract wasn't a valid document type
    message = `Oops, seems like that wasn't a ${info.documentType}, try again.`;
  } else if (status === ocr.statuses.DOCUMENTS_UPLOAD_FAILED) {
    // The OCR extract failed due to poor image quality
    message = `Oops, seems like that wasn't very clear, try again.`;
  } else {
    // Everything went fine; prompt for the next capture
    message = info.documentType
      ? `Alright, give us the ${info.side} of your ${info.documentType}`
      : "Alright, give us a scan of either your Passport or your Drivers Licence";
  }

  showDialog(message, provideFile);
});

Complete example

Use the component('ocr', options?) method to instantiate the OCR component.

// 1. Obtain the OCR component
const config = {
  dummy: true // Remove this in production
};
const oneSdk = await OneSdk(config);
const ocr = oneSdk.component("ocr");

// 2. Register event listeners

oneSdkOcr.on("input_required", (info, status, provideFile) => {
  dialog(`Please provide the ${info.side} of your ${info.documentType}`, (selectedFile) => {
    provideFile(selectedFile)
  });
});

oneSdkOcr.on("results", ({ document }) => {
  dialog(`Please confirm your information: ${document}`, (okOrNot) => {
    if (okOrNot) gotoNextPage();
    else oneSdkOcr.start() // restart flow
  })
});

oneSdkOcr.on("error", ({ message }) => {
  alert(`There was an issue (${message}). We'll skip OCR for now.`);
  gotoNextPage();
});

// 3. Start the OCR flow

ocr.start();

Using OneSDK with OCR and Biometrics

Customers can use the Biometrics component or the OCR component via the OneSDK. They can also use it for fraud detection via device characteristics.

This section outlines the implementation steps required to integrate the OneSDK into a customer's application. As the OneSDK is JavaScript-based, this section also covers the integration steps for native applications, an increasingly common request from customers who have native applications and are not looking to replace them.

🚧

Using Biometrics with OCR

If you're looking to use only Biometrics, you will still be required to use the OCR component of the OneSDK. This is a hard requirement because the OCR component extracts the photo from the document, which is required for the facial comparison during the biometrics check.

Requirements for native integration of biometrics

Native integration onboarding involves three key steps:

  1. Initialization - To use the OneSDK, it first needs to be initialized. The first step is to create a session from the backend, which is then passed to the frontend. Creating a session requires an entity to be created first; the session will be for this new entity.
  2. OCR Capture - Once the session has been successfully created, the OCR component can be used. You are responsible for building the screens required to capture the document:
     • Setting Expectations screen - outlines to the customer what is expected and how to take a proper picture (for example, no obstructions and a bright, well-lit area).
     • Document Capture screen - a screen that uses the device camera to capture the document.
     • Review screen (optional) - a screen that presents the image back to the user, asking whether it is of suitable quality.
     Once captured, the document is passed to the OCR component, which performs the checks and returns a result object containing the details extracted from the document.
  3. Biometrics Capture - Once the OCR component has completed, the user can progress to the biometrics capture. This component provides its own screen, so no screens need to be built by the customer. Once started, the OneSDK presents a selfie capture screen in which the customer places their face in the circle and gives a smile. The OneSDK captures this video and carries out the required checks and comparisons. Once complete, a webhook notification can be used to inform you that the checks are complete and the results are ready to be viewed.

Integration steps for Web implementation

To integrate OneSDK with biometrics in a web implementation of your application, follow the steps below.

1. Set Up OneSDK

  • Embed the OneSDK script into your application (the most recent version is v0.7.4):
<script src="https://assets.frankiefinancial.io/one-sdk/v0.7/oneSdk.umd.js"></script>
  • Or install the NPM module:
npm install @frankieone/one-sdk

// Once installed, import it into your application
import OneSdk from '@frankieone/one-sdk';

2. Create an Entity

Before a session can be created, a new entity needs to be created in order to obtain the entityId of the customer being onboarded.

At this point no checks need to be run, so the Create New Entity endpoint should be used.

If you have collected information as part of your onboarding journey, it can be passed through within the request payload for the above endpoint. A hedged sketch of such a call is shown below.
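
The exact URL and payload shape are defined by the Create New Entity API reference; the path, base URL, and fields below are placeholders for illustration only.

const response = await fetch(`${FRANKIE_API_URL}/entities`, { // placeholder base URL and path
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Authenticate as described in the API reference; the header below is illustrative
    authorization: "machine " + Buffer.from(`${CUSTOMER_ID}:${API_KEY}`).toString("base64"),
  },
  body: JSON.stringify({
    // Any details already collected during onboarding can be included here (illustrative shape)
    name: { givenName: "Peter", familyName: "Testthirteen" },
  }),
});
const { entityId } = await response.json(); // use this entityId when creating the session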

3. Create a Session

To run the OneSDK, a session needs to be created first. This session will be associated with the particular entity being onboarded.

First, encode the credentials for your FrankieOne environment and use them in the Authorization header with machine as the scheme. If you use a child account, include the CUSTOMER_CHILD_ID between the customer ID and the API key, as the code sample below does:

ENCODED_CREDENTIAL=$(echo -ne "$CUSTOMER_ID:$API_KEY" | base64)
# With a child account:
ENCODED_CREDENTIAL=$(echo -ne "$CUSTOMER_ID:$CUSTOMER_CHILD_ID:$API_KEY" | base64)

Then send a POST request to the /auth/v2/machine-session endpoint of our Backend for Frontend (BFF) server:

const sessionObject = await fetch(`${FRANKIE_BFF_URL}/auth/v2/machine-session`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    authorization: "machine " + Buffer.from(`${CUSTOMER_ID}:${CUSTOMER_CHILD_ID}:${API_KEY}`).toString("base64"),
  },
  body: JSON.stringify({
    permissions: {
      preset: "one-sdk",
      entityId: "YOUR_ENTITY_ID",
    },
  }),
}).then((response) => response.json());

Sample Response

{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c"
}

4. Create Configuration Object

Create an object for the global configuration of your SDK integration. See below for the required and recommended parameters:

| Parameter | Description |
| --- | --- |
| session | Required, unless mode is "dummy". The session object returned from your call to /auth/v2/machine-session. |
| mode | Optional. Either "production" or "dummy". If "dummy" is specified, the session does not need to be provided. |

const configuration = {
  session: sessionObject,
  config: {
    frankieBackendUrl: "https://backend.latest.frankiefinancial.io",
    successScreen: {
      ctaUrl: "javascript:alert('Callback for successful onboarding')",
    },
    failureScreen: {
      ctaUrl: "javascript:alert('Callback for failed onboarding')",
    },
    documentTypes: ["DRIVERS_LICENCE", "PASSPORT", "NATIONAL_HEALTH_ID"],
    acceptedCountries: ["AUS", "NZL"],
    ageRange: [18, 125],
    organisationName: "My Organisation",
  },
};

Pass the entire object returned as the response of the /auth/v2/machine-session endpoint as the value of session. Do not extract the token or any other field from the response.

<script>
  // Server-side templating (pseudo-code): create the session on your backend
  // and render the whole object into the page.
  {% set sessionObject = await createFrankieSession() %}

  // sessionObject is the entire response object, e.g. { token: "..." }
  const sessionObject = JSON.parse("{{ sessionObject | json_encode | raw }}");
  const oneSdk = await OneSdk({ session: sessionObject });
</script>

5. Initialize OneSDK

Create an instance of the OneSDK using the configuration object you created.

The OneSdk function returns a promise, which can be awaited:

const oneSdk = await OneSdk(configuration);

6. Verify document details via the OCR component

Once the OneSDK has been initialized without errors, the OCR component can be created:

const ocr = oneSdk.component("ocr");

Listen to the input_required event to present your image capture interface to the user. The handler method that you provide should accept the following arguments:

  • inputInfo - An object that describes the type of document required and any additional specifications for the document
  • status - The current status of the OCR flow. See The OCRStatus type table above
  • callback - A callback function that accepts the captured image data as an instance of the File interface
ocr.on("input_required", async (inputInfo, status, callback) => {
    appendToFileInfo("input_required called");
    const { documentType, side } = inputInfo;

    // documentType will initially be null, until the type is inferred from the first provided scan
    if (documentType === "PASSPORT") {
      console.log("input_required : DocumentType is passport");
      // present UI to capture a passport image
    } else if (documentType === "DRIVERS_LICENCE") {
      // check which side of the drivers licence is required
      console.log("input_required : DocumentType is Driving Licence");
      if (side === "front") {
        // present UI to capture the licence's front side
        console.log("input_required : Document Side required is front");
      } else if (side === "back") {
        // present UI to capture the licence's back side
        console.log("input_required : DocumentType Side required is Back");
      }
    } else {
      // present UI to capture any type of identity document
      console.log("input_required : DocumentType is Unknown");
    }
    // selectedFile is the File captured by your UI (not shown in this snippet)
    appendToFileInfo(
      "Submitting file to OCR component. This may take a while. Please wait"
    );
    callback(selectedFile);
  });

7. Obtain OCR results

Listen to the results event to get notified when the detected document details are available.

  ocr.on("results", ({ document }) => {
    // Present the details of the document that were detected from the uploaded image or images.
    // Decide whether to proceed to the next stage of the onboarding process
    // depending on whether document verification was successful.
    if (document) {
      console.log(document);
      appendToFileInfo(document.idType);
      appendToFileInfo(document.idNumber);
      appendToFileInfo("Ocr results received");
    } else {
      appendToFileInfo("results with unknown document");
    }
  });

8. Start the Document Capture flow

Start the document capture flow. This will immediately trigger the input_required event.

ocr.start()

9. Start Biometrics Component

The final step of the OneSDK is to start the biometrics capture. While the OneSDK is otherwise headless, it does provide the Liveness Capture screen.

Create a DOM container element on your onboarding web page where you want the component to be rendered.

<div id="biometrics-container"></div>

Similar to the OCR component we need to create a biometrics component.

const biometrics = oneSdk.component("biometrics");

10. Obtain results

Listen to the results event to get notified when biometrics results are available.

function startBiometric(oneSdk) {
  const biometrics = oneSdk.component("biometrics");
  biometrics.on("results", ({ checkStatus, processing }) => {
    // Decide whether to proceed to the next stage of the onboarding process
    // depending on whether biometrics verification was successful.
    if (processing) {
      appendToFileInfo(`Biometrics result ${processing} ${checkStatus}`);
      // access the individual object to represent the user being onboarded
      const individual = oneSdk.individual();
      
      // Submit all information, making sure consent has been captured
      individual.addConsent();
      // You may request checks be run with the optional parameter {verify: true}. This method will return a CheckSummary object in this case and will run the entityProfile for the customer
      individual.submit({
        verify: true,
      });
    } else {
      appendToFileInfo(`Biometrics result received with unknown results`);
    }
  });
}

11. Mount the Biometrics component and await results

Start the Liveness Capture by mounting the component in your container element.

biometrics.mount('#biometrics-container');

Listen for the ready event to know when the biometrics capture interface is ready for use.

biometrics.on("ready", () => {
    // If you provided your own loading state it can now be hidden.
    appendToFileInfo("Biometric Ready");
  });

Integration steps for native application

1. Web Application Development

The OneSDK is a JavaScript-based SDK and is not directly compatible with native applications. For this use case, you are required to create a web application consisting of a simple HTML page, styled to your business's specifications.

This HTML page will serve as an intermediary layer, initializing and interacting with the OneSDK and its components via an embedded script.

Customers are responsible for creating and managing this web application. FrankieOne can provide example repositories to support customer development.

The creation of the HTML page follows the same flow as the web implementation steps above; a minimal sketch of such a page is shown below.
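
This is a hedged sketch of such an intermediary page, not a definitive implementation: the element ids, the backend endpoint that returns the session, and the returnOCRResults relay function (covered in step 3 below) are all assumptions.

<html>
  <body>
    <input type="file" id="document-input" accept="image/*" capture="environment" />
    <div id="biometrics-container"></div>
    <script src="https://assets.frankiefinancial.io/one-sdk/v0.7/oneSdk.umd.js"></script>
    <script>
      (async () => {
        // Fetch the session object created by your backend (endpoint name is an assumption)
        const session = await fetch("/api/frankie-session").then((r) => r.json());
        const oneSdk = await OneSdk({ session });

        const ocr = oneSdk.component("ocr");
        const input = document.getElementById("document-input");
        ocr.on("input_required", (info, status, provideFile) => {
          input.onchange = () => provideFile(input.files[0]);
        });
        // Relay the extracted details to the native layer (see step 3 below)
        ocr.on("results", ({ document }) => returnOCRResults(document));
        ocr.start();
      })();
    </script>
  </body>
</html>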

2.A. Android integration

📘

Assumption

The Web application has been created and OneSDK has been configured correctly (specifically entity creation, session creation, etc.)

In order to use the OneSDK in a native application, a WebView must be implemented in the native application.

webAppInterface = WebAppInterface(requireContext())
binding.webView.apply {
    webViewClient = MyWebViewClient()
    webChromeClient = MyWebChromeClient()
    addJavascriptInterface(webAppInterface, "Android")
    loadUrl("<<< INSERT URL TO WEB APPLICATION FROM STEP 1 >>>")
}
| Code | Description |
| --- | --- |
| webAppInterface = WebAppInterface(requireContext()) | Creates an instance of the WebAppInterface class, a custom class that provides methods to interact between JavaScript and native Android code. |
| binding.webView.apply { ... } | Accesses the WebView component defined in the layout file using the binding object (assuming data binding is used). The subsequent code block configures the WebView. |
| webViewClient = MyWebViewClient() | Sets a custom WebViewClient to handle events and behaviours during WebView loading and navigation. |
| webChromeClient = MyWebChromeClient() | Sets a custom WebChromeClient to handle events related to Chrome functionality in the WebView, such as JavaScript alerts or progress tracking. |
| addJavascriptInterface(webAppInterface, "Android") | Adds the webAppInterface object as a JavaScript interface to the WebView, allowing JavaScript code executed within the WebView to call methods defined in the WebAppInterface class. |
| loadUrl("<<< INSERT URL TO WEB APPLICATION FROM STEP 1 >>>") | Loads the specified URL in the WebView, displaying the corresponding web content. |

2.B. iOS Integration

You can use the following example for your iOS application.

private func setupWebView() {
    let configuration = WKWebViewConfiguration()
    webAppInterface = WebAppInterface(delegate: self, viewModel: FrankieOneViewModel())
    configuration.allowsInlineMediaPlayback = true
    configuration.defaultWebpagePreferences.allowsContentJavaScript = true
    configuration.preferences.javaScriptCanOpenWindowsAutomatically = true
    configuration.upgradeKnownHostsToHTTPS = true
    webView.autoresizingMask = [.flexibleHeight]
    webView.navigationDelegate = self
    webView.allowsBackForwardNavigationGestures = true
    webView.customUserAgent = URLGenerator.userAgent
    webView.allowsLinkPreview = true
    view.addSubview(webView)

    if let url = URL(string: URLGenerator.webURL) {
        let myURLRequest = URLRequest(url: url)
        webView.load(myURLRequest)
    }
}

Here are the steps done in the above code sample:

  1. Create a WKWebViewConfiguration instance.
  2. Initialize a WebAppInterface with a delegate and a FrankieOneViewModel.
  3. Set various configuration properties for the WKWebView, such as allowing inline media playback, enabling JavaScript content, allowing automatic opening of windows by JavaScript, upgrading known hosts to HTTPS, etc.
  4. Set the autoresizing mask for the WKWebView to enable flexible height.
  5. Set the navigationDelegate of the WKWebView to self (likely the current view controller).
  6. Enable back-forward navigation gestures for the WKWebView.
  7. Set a custom user agent for the WKWebView using URLGenerator.userAgent.
  8. Enable link preview for the WKWebView.
  9. Add the WKWebView as a subview to the view.
  10. Create a URL from the specified web URL using URLGenerator.webURL.
  11. Create a URLRequest with the created URL.
  12. Load the URLRequest into the WKWebView using the load() method.

3. Relay the results from the Web Application back to the native application

Once the WebViews have been implemented correctly and point to the correctly configured web application, the final step is relaying the results from the web application back to the native application and carrying out any specific requirements.

In this use case, the OCR results are returned to the native application to be pre-populated in a later screen for the customer to review and confirm. This is done by adding additional functions to the WebAppInterface outlined above. Applying the @JavascriptInterface annotation indicates that the method is accessible from the web application.

Example

Web application - once the OCR component has returned its results, the returnOCRResults method is called, passing the result object:

function returnOCRResults(results) {
  // @JavascriptInterface methods accept only primitives and Strings, so serialize the object
  Android.processOCRResult(JSON.stringify(results));
}
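
If the same intermediary page must also serve the iOS WebView, the relay can branch on whichever bridge is present. This is a hedged sketch: the iOS side assumes a WKScriptMessageHandler registered under the hypothetical name frankieOne in your WKWebViewConfiguration.

function relayOCRResults(results) { // hypothetical cross-platform variant of returnOCRResults
  const payload = JSON.stringify(results);
  if (window.Android) {
    // Android: name matches the addJavascriptInterface(webAppInterface, "Android") call
    window.Android.processOCRResult(payload);
  } else if (window.webkit && window.webkit.messageHandlers.frankieOne) {
    // iOS: requires a WKScriptMessageHandler named "frankieOne" (an assumption)
    window.webkit.messageHandlers.frankieOne.postMessage(payload);
  }
}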

Android - use a toast to display the results of the OCR component:

@JavascriptInterface
public void processOCRResult(String result) {
    Toast.makeText(mContext, "OCR result: " + result, Toast.LENGTH_SHORT).show();
}

Response object

The OCRResult object describes the data that could be extracted from a document image. It consists of fixed and variable properties according to what was extracted.

Fixed properties

| Property | Type |
| --- | --- |
| ocrDateTime | DateTimeString |
| status | OCRStatus |
| mismatch | OCRExtractedFields[] |
| documentType | 'PASSPORT' \| 'DRIVERS_LICENCE' |

Variable properties

The OCRResult object will vary based on what was extracted. For example:

{
  documentTypeInternal: "DRIVERS_LICENCE",  
  dateOfExpiry: "2031-05-28",  
  dateOfIssue: "2021-05-28",  
  documentType: "DRIVERS_LICENCE",  
  documentNumber: "999999999",  
  dateOfBirth: "1990-01-01",  
  issuingCountry: "AUS",  
  state: "VIC",  
  postcode: "3000",  
  town: "Melbourne",  
  street: "80 Collins Street",  
  firstName: "PETER",  
  lastName: "TESTTHIRTEEN",  
}
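
Because the variable properties depend on the document type and the configured provider, consume them defensively. A minimal sketch, where prefillReviewForm is a hypothetical helper in the host application:

ocr.on("results", ({ document }) => {
  // Read variable OCRResult fields defensively; the extracted set varies by document and provider
  const fields = {
    name: [document.firstName, document.lastName].filter(Boolean).join(" "),
    documentNumber: document.documentNumber || "unknown",
    expiry: document.dateOfExpiry || "unknown",
  };
  prefillReviewForm(fields); // hypothetical helper that pre-populates the review screen
});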