Onfido Motion
Get the basics on using Onfido Motion with FrankieOne OneSDK
OneSDK works with Onfido Motion to capture document and face details via OCR and Biometrics components.
Integration methods
There are two integration methods that you can use for Onfido with OneSDK:
- IDV flow
- OCR and Biometrics
The following sections provide further details for implementing each integration method.
Integrating OneSDK for IDV flow
To integrate with OneSDK for IDV flow using Onfido, please follow the standard integration of OneSDK for IDV flow.
Recommended devices
Since most desktop computers do not have high resolution built-in cameras, we recommend end users run the IDV flow on mobile devices.
Sample code for IDV flow
<html lang="en">
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>oneSdk onboarding</title>
<script src="https://assets.frankiefinancial.io/one-sdk/v1/oneSdk.umd.js"></script>
<script>
async function startOneSdk() {
// const oneSdk = await OneSdk({ mode: "dummy" });
const CUSTOMER_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxx-xxxx"
const CUSTOMER_CHILD_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
const API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
const tokenResultRaw = await fetch("https://backend.kycaml.uat.frankiefinancial.io/auth/v2/machine-session", {
method: "POST",
headers: {
"authorization": "machine " + btoa(`${CUSTOMER_ID}:${CUSTOMER_CHILD_ID}:${API_KEY}`),
"Content-Type": "application/json"
},
body: JSON.stringify({
permissions: {
"preset": "one-sdk",
"reference": "<YOUR_UNIQUE_CUSTOMER_REF>"
}
})
});
const tokenResult = await tokenResultRaw.json();
const sessionObjectFromBackend = tokenResult;
const oneSdk = await OneSdk({
session: sessionObjectFromBackend,
//isPreloaded: false,
mode: "development",
recipe: {
ocr: {
provider: {
name: "onfido"
}
},
biometrics: {
provider: {
name: "onfido"
}
},
idv: {
provider: {
name: "onfido"
}
}
}
});
const oneSdkIndividual = oneSdk.individual();
oneSdkIndividual.addConsent("general");
oneSdkIndividual.addConsent("docs");
oneSdkIndividual.addConsent("creditheader");
await oneSdkIndividual.submit();
const idv = oneSdk.flow("idv");
idv.on("results", async ({checkStatus, document, entityId}) => {
if (checkStatus) {
console.log("results successful");
console.log(checkStatus);
console.log(document);
console.log(entityId);
} else {
console.log("no data returned");
}
});
idv.on("error", ({ message, payload }) => {
console.log("received error");
console.log(message, payload);
});
idv.on("detection_complete", (message) => {
console.log("capture finished");
console.log(message);
});
idv.mount("#idv-el");
}
</script>
<style>
</style>
</head>
<body style="background-color: white" onload="startOneSdk()">
<div id="idv-el" style="position:fixed;top: 0;left: 0; width: 100%; height: 100%;"></div>
</body>
</html>
Integrating OneSDK for OCR and biometrics
To integrate Onfido Motion with OneSDK, please follow the standard integration here: OneSDK for OCR and biometrics. Your customer account must be configured to use Onfido as the IDV vendor, and Onfido must be part of the applied recipe.
Default liveness check and fallback options
Onfido Motion is set as the default liveness check. If the user's device doesn't support Onfido Motion, the user will get an error in the browser.
To allow end users to continue with the process, we can configure Onfido Motion in OneSDK to fall back to either liveness video or selfie when the user's device doesn't support Onfido Motion.
OCR Headed and headless
You can use the OCR component with a headed or headless integration. Visit Handling image capture for more details.
Synchronous or asynchronous
The Onfido IDV flow can be either synchronous or asynchronous. When it's synchronous, the COMPLETE event will be returned after a successful IDV flow on OneSDK.
When it's asynchronous, the CHECK_PROCESSING event will be returned after a successful IDV flow. This indicates that OneSDK has finished running the IDV flow. A webhook needs to be configured for clients to receive the IDV results.
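Assuming the flow emits COMPLETE for synchronous checks and CHECK_PROCESSING for asynchronous ones (event names as described above; the handler wiring below is a sketch, not a definitive implementation), both outcomes can be routed through one place:

```javascript
// Sketch: route the two possible success signals from the IDV flow.
// "COMPLETE" means results are available immediately (synchronous run);
// "CHECK_PROCESSING" means results will arrive later via your webhook.
function handleIdvEvent(eventName) {
  switch (eventName) {
    case "COMPLETE":
      return "results-available"; // read results directly from the event payload
    case "CHECK_PROCESSING":
      return "await-webhook";     // wait for your configured webhook to deliver results
    default:
      return "unhandled";
  }
}

// Example wiring against the flow object from the IDV sample above:
// idv.on("COMPLETE", () => console.log(handleIdvEvent("COMPLETE")));
// idv.on("CHECK_PROCESSING", () => console.log(handleIdvEvent("CHECK_PROCESSING")));
```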
How to use it in your onboarding flow
There are two main use cases for the above integration methods in your onboarding flow:
- Onfido - Biometrics first
- Onfido with an existing entity
These two use cases are discussed in the following section.
Onfido - Biometrics first
The Biometrics-first approach using Onfido doesn't require an existing entity beforehand. In this use case, you start the flow with OCR/Biometrics to pre-fill the user details. You don't need to create an entity; instead, the FrankieOne service creates a new entity and fills it with the OCR-extracted results.
A security note on the machine token
To initialise OneSDK, you need to generate a machine token on your backend for security reasons, and then pass it to your client-side application. You should not generate the machine token in your front-end application.
In the following sample, the machine token is generated in the browser for simplicity only.
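In production, the token request belongs on your server. The sketch below shows one way to do this with a hypothetical Express route (the route path and variable names are illustrative; the machine-session URL and "machine" authorization scheme are taken from the samples in this guide):

```javascript
// Sketch: keep CUSTOMER_ID and API_KEY server-side and hand the browser
// only the short-lived session object returned by the machine-session API.
function buildMachineAuthHeader(customerId, apiKey) {
  // Node equivalent of the browser's btoa() used in the client-side samples
  const credentials = Buffer.from(`${customerId}:${apiKey}`).toString("base64");
  return `machine ${credentials}`;
}

// Example Express wiring (assumes `app`, CUSTOMER_ID, and API_KEY exist):
// app.post("/api/onesdk-session", async (req, res) => {
//   const resp = await fetch("https://backend.latest.frankiefinancial.io/auth/v2/machine-session", {
//     method: "POST",
//     headers: {
//       authorization: buildMachineAuthHeader(CUSTOMER_ID, API_KEY),
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify({ permissions: { preset: "one-sdk", reference: req.body.reference } }),
//   });
//   res.json(await resp.json()); // pass the session object to the front end
// });
```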
Sample HTML to run Onfido with Biometrics first
Below is a sample HTML code for running Onfido with Biometrics first.
<html lang="en">
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0" charset="UTF-8"/>
<title>oneSdk onboarding</title>
<script src="https://assets.frankiefinancial.io/one-sdk/v1/oneSdk.umd.js"></script>
<script>
async function startOneSdk() {
const CUSTOMER_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
const API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
const ENTITY_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
const tokenResultRaw = await fetch('https://backend.latest.frankiefinancial.io/auth/v2/machine-session', {
method: 'POST',
headers: {
authorization: 'machine ' + btoa(`${CUSTOMER_ID}:${API_KEY}`),
'Content-Type': 'application/json',
},
body: JSON.stringify({
permissions: {
preset: 'one-sdk',
reference: "your_unique_customer_reference"
},
}),
});
const sessionObjectFromBackend = await tokenResultRaw.json();
const oneSdk = await OneSdk({ session: sessionObjectFromBackend, telemetry: false });
oneSdk.on('*', console.log);
const oneSdkIndividual = oneSdk.individual();
oneSdkIndividual.on('*', console.log);
// Display and update name
const name = oneSdkIndividual.access('name');
const { givenName, familyName } = name.getValue();
const newGivenName = prompt('Confirm given name', givenName);
name.setValue({ givenName: newGivenName, familyName });
oneSdkIndividual.addConsent("general");
oneSdkIndividual.addConsent("docs");
oneSdkIndividual.addConsent("creditheader");
// Submit changes
await oneSdkIndividual.submit();
// Setup OCR and Biometrics
const mountElement = document.getElementById('biometrics-container');
const ocr = oneSdk.component('ocr');
const biometrics = oneSdk.component('biometrics');
// On input required, try running the OCR component again
ocr.on('input_required', () => ocr.mount(mountElement));
ocr.on('error', console.error);
biometrics.on('error', console.error);
let error = false;
biometrics.on('detection_failed', () => (error = true));
biometrics.on('session_closed', () => {
// If the session was closed due to an error, try running the biometrics component again.
if (error) biometrics.mount('#biometrics-container');
error = false;
});
ocr.on('results', ({ document }) => {
// Present the details of the document that were detected from the uploaded image or images.
// Decide whether to proceed to the next stage of the onboarding process
// depending on whether document verification was successful.
if (document) {
console.log(document);
console.log(document.ocrResult.dateOfBirth);
biometrics.mount('#biometrics-container');
} else {
console.log('No document returned');
}
});
biometrics.on('processing', () => alert('We will get back to you with results soon'));
biometrics.on('results', (result) => {
// Decide whether to proceed to the next stage of the onboarding process
// depending on whether biometrics verification was successful.
console.log(result);
});
ocr.mount("#biometrics-container");
}
</script>
</head>
<body style="background-color: blue" onload="startOneSdk()">
<div id="biometrics-container"></div>
</body>
</html>
Onfido with an existing entity
In this use case, you capture some information manually from the user to create an entity, then run OCR/Biometrics on that entity. To use OneSDK with an existing entity, use the same sample code, but instead of passing a customer reference, pass an existing entityId when creating the session:
const tokenResultRaw = await fetch('https://backend.latest.frankiefinancial.io/auth/v2/machine-session', {
method: 'POST',
headers: {
authorization: 'machine ' + btoa(`${CUSTOMER_ID}:${API_KEY}`),
'Content-Type': 'application/json',
},
body: JSON.stringify({
permissions: {
preset: 'one-sdk',
entityId: ENTITY_ID
},
}),
});
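The only difference between the two modes is the permissions object: a reference creates a new entity, while an entityId targets an existing one. A small helper (hypothetical; the function name is ours, the field names come from the samples above) makes the choice explicit:

```javascript
// Sketch: build the machine-session permissions body for either mode.
// Passing entityId targets an existing entity; passing reference lets
// FrankieOne create a new entity for that customer reference.
function buildSessionPermissions({ entityId, reference }) {
  if (entityId) {
    return { preset: "one-sdk", entityId };
  }
  return { preset: "one-sdk", reference };
}
```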
OCR Mapping recommendations
After performing OCR on a document, you'll have access to the object below as part of the document returned by the OCR results event.
{
"dateOfBirth": null,
"idSubType": null,
"documentId": "acfb32dd-2ae8-9bc6-9546-2f2c870a08eb",
"region": "NSW",
"country": "AUS",
"idNumber": "21110000",
"gender": null,
"extraData": {
"ocrApplicantId": "d9e7c18c-e457-444c-9037-3b7811b6da26",
"ocr_process_status": "COMPLETE_OCR",
"ocr_process_run_date": "2024-07-03T02:06:26Z",
"idv_webcapture_doc_type": "id",
"idv_check_id": "d9e7c18c-e457-444c-9037-3b7811b6da26",
"idv_webcapture_check_id": null,
"ocr_scanned_mismatch": "ocr_scanned_first_name,ocr_scanned_last_name",
"idv_reports_count": "1",
"ocr_scanned_first_name": "GIDEON",
"ocr_scanned_last_name": "NAVIGATOR",
"ocr_scanned_full_name": "GIDEON NAVIGATOR",
"ocr_scanned_issuing_country": "AUS",
"ocr_scanned_issuing_state": "NSW",
"ocr_scanned_date_of_birth": "2002-12-08",
"ocr_scanned_document_number": "21110000",
"ocr_scanned_document_type": "driving_licence",
"ocr_scanned_document_type_internal": "DRIVERS_LICENCE",
"ocr_scanned_date_of_issue": null,
"ocr_scanned_date_of_expiry": "2028-10-11",
"ocr_scanned_id_number": "2099966770",
"ocr_scanned_address_1": "UNIT 22",
"ocr_scanned_address_2": "128 RIVER RD",
"ocr_scanned_address_3": "SYDNEY NSW 2122",
"ocr_scanned_address_4": null,
"ocr_scanned_address_5": null,
"ocr_scanned_gender": null,
"kyc_document_category": "primary_photographic",
"document_number": "2099966770"
},
"validation": {
"manual": {
"isValid": null
},
"electronic": {
"validationReport": null,
"isValid": null
}
},
"idType": "DRIVERS_LICENCE",
"idExpiry": "2028-10-11",
"ocrResult": {
"status": "COMPLETE_OCR",
"runDate": "2024-07-03T02:06:26Z",
"mismatch": [
"firstName",
"lastName"
],
"firstName": "GIDEON",
"lastName": "NAVIGATOR",
"fullName": "GIDEON NAVIGATOR",
"issuingCountry": "AUS",
"issuingState": "NSW",
"dateOfBirth": "2002-12-08",
"documentNumber": "21110000",
"documentType": "driving_licence",
"documentTypeInternal": "DRIVERS_LICENCE",
"dateOfIssue": "",
"dateOfExpiry": "2028-10-11",
"idNumber": "2099966770",
"address1": "UNIT 22",
"address2": "128 RIVER RD",
"address3": "SYDNEY NSW 2122",
"address4": "",
"address5": "",
"gender": ""
}
}
Below is our recommendation for mapping these fields in a review screen. Please note that by default, the required fields on the document object for running a verification will already be filled in; they should only be updated if the customer advises that there's a discrepancy.
Customer Detail | Key Name from OCR | Description | Example |
---|---|---|---|
First Name | ocrResult.firstName | First name as per the OCR result | GIDEON |
Last Name | ocrResult.lastName | Last name as detected via OCR | NAVIGATOR |
Date of Birth | ocrResult.dateOfBirth | The date of birth on the document | 2002-12-08 |
Document Type | ocrResult.documentTypeInternal | The FrankieOne document type | DRIVERS_LICENCE |
Issuing Country | ocrResult.issuingCountry | The country of the document | AUS |
Region | ocrResult.issuingState | For a driver licence, the state of the document | NSW |
ID Number | ocrResult.documentNumber | The ID number of the document. For Australian driver licences, this is the number that doesn't change when the licence is reissued | 21110000 |
Card Number | ocrResult.idNumber | The card number parsed from the document. We recommend stripping any spaces and performing front-end validation before asking the customer to submit, as OCR may not capture it correctly where the print is small | 2099966770 |
Unit Number | ocrResult.address1 | This could be the unit number, or the street number, name, and type. Check how many address attributes were parsed to determine the order in which to present them | UNIT 22 |
Street Name | ocrResult.address2 | A combination of street number, name, and type. You can split this further, but FrankieOne can handle the address if passed in this format | 128 RIVER RD |
Suburb State Postcode | ocrResult.address3 | A string containing the suburb, state, and postcode. We recommend breaking this down by locating the postcode, state, and suburb in the string | SYDNEY NSW 2122 |
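Following the recommendation above, address3 can be broken down by anchoring on the trailing postcode and state. The sketch below assumes Australian documents only (a 4-digit postcode and a known state abbreviation); the function name and regular expression are illustrative:

```javascript
// Sketch: split "SUBURB STATE POSTCODE" (the ocrResult.address3 shape for
// AUS documents) into its parts, anchoring on the trailing 4-digit postcode.
const AU_STATES = ["NSW", "VIC", "QLD", "SA", "WA", "TAS", "NT", "ACT"];

function parseAddress3(address3) {
  const match = /^(.*)\s+([A-Z]{2,3})\s+(\d{4})$/.exec(address3.trim());
  if (!match || !AU_STATES.includes(match[2])) {
    return null; // fall back to manual review if the format is unexpected
  }
  return { suburb: match[1], state: match[2], postcode: match[3] };
}
```

Multi-word suburbs are handled naturally, since everything before the state token is treated as the suburb.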
Accepted document types when using Onfido
You can set specific documents for corresponding countries. To configure these on the recipe
level, follow the code example below:
ocr: {
provider: {
name: 'onfido',
},
documents: [
{
type: "DRIVERS_LICENCE",
countries: ["AUS"]
},
],
},
Document Types
Document types available are:
- PASSPORT
- DRIVERS_LICENCE
- NATIONAL_ID
- NATIONAL_HEALTH_ID
Countries format
The country format to use is the three-letter alpha-3 code, such as “USA”, “NZL”, “AUS”. To include all countries available, use “ALL”.
Additional document configuration
To include more than one (1) document configuration, follow the example below:
documents: [
{
type: "DRIVERS_LICENCE",
countries: ["AUS"]
},
{
type: "PASSPORT",
countries: ["AUS"]
},
],
To configure on run time, follow the example below:
const oneSdk = await OneSdk({
session: sessionObjectFromBackend,
recipe: {
ocr: {
maxDocumentCount: 2,
}
}
});
const ocrComponent = oneSdk.component('ocr', {
documents: [
// same format as above
]
});
Notes on country and document selection
- The country selection screen will not be shown if only one (1) country is selected or if only passports are allowed.
- The document selection screen will not be shown if only one (1) document type is specified.
CSS Customisation
You can configure CSS customisation during initialisation, under the recipe object. To do so, follow the example below:
const oneSdk = await OneSdk({
session: sessionObjectFromBackend,
recipe: {
biometrics: {
provider: {
name: 'onfido',
customUI: {
borderRadiusButton: "10px"
}
},
},
ocr: {
maxDocumentCount: 2,
provider: {
name: 'onfido',
customUI: {
fontWeightBody: 600,
fontSizeBody: "10px",
fontSizeSubtitle: "10px",
fontSizeTitle: "10px",
},
},
}
}
});
You can refer to the following code example for a list of customisable options:
type OnfidoCustomUIProps = {
borderRadiusButton?: string;
borderStyleSurfaceModal?: string;
fontWeightBody?: number;
fontSizeBody?: string;
fontSizeSubtitle?: string;
fontSizeTitle?: string;
colorBackgroundSurfaceModal?: string;
colorBackgroundIcon?: string;
colorBorderLinkUnderline?: string;
colorBackgroundLinkActive?: string;
colorBackgroundLinkHover?: string;
colorBackgroundDocTypeButton?: string;
colorContentBody?: string;
colorContentTitle?: string;
colorContentSubtitle?: string;
colorContentButtonPrimaryText?: string;
colorBackgroundButtonPrimary?: string;
colorBackgroundButtonPrimaryHover?: string;
colorBackgroundButtonPrimaryActive?: string;
colorContentButtonSecondaryText?: string;
colorBackgroundButtonSecondary?: string;
colorBackgroundButtonSecondaryHover?: string;
colorBackgroundButtonSecondaryActive?: string;
colorContentLinkTextHover?: string;
colorBorderDocTypeButton?: string;
colorBackgroundDocTypeButtonHover?: string;
colorBackgroundDocTypeButtonActive?: string;
colorContentDocTypeButton?: string;
colorIcon?: string;
colorContentInfoPill?: string;
colorBackgroundInfoPill?: string;
colorBackgroundSelector?: string;
colorInputOutline?: string;
colorContentButtonTertiaryText?: string;
colorContentInput?: string;
colorBackgroundInput?: string;
colorBorderInput?: string;
};