
Liveness Detection V2

Same liveness flows as v1 with a structured multi-block JSON response (image, video, facematch, overall). POST /v2/liveness with Bearer auth.

API reference

Authorization: Bearer <token> (in: header)

JWT Bearer token authentication. Obtain a token from the KwikID dashboard.

Request body

Multipart/form-data. The published schema (LivenessMultipartRequest) defines three oneOf variants: image only, video only, and image + video. At least one of image or video must be sent; the optional tuning fields below are shared by all three variants.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| image | string | In image-only and image+video modes | Still image to test. |
| video | string | In video-only and image+video modes | MP4 video to test. |
| enhanced_detection | string | Optional | When image and video are both present: True (default) uses multi-frame analysis; False uses the legacy image+blink path. |
| facematch_frames | string | Optional | When image+video: number of video frames to compare to the still (default 3). |
| facematch_threshold | string | Optional | When image+video: fraction of frames that must match for a positive facematch (default 0.6). |
| frame_sampling_rate | string | Optional | When enhanced (image+video): sample every Nth frame (default 15). |
| max_frames_to_analyze | string | Optional | When enhanced: cap on frames analyzed (default 20). |
| image_type | string | Optional | Use file when uploading a file (default). Other values apply only when your integration sends non-file image payloads. |
| unique_id | string | Optional | Correlation id for logs. |
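For quick reference, a sketch restating the tuning defaults from the table above as a Python dict; the names and defaults come from this page, but sending them this way is just one client-side convention:

```python
# Tuning defaults restated from the request-body table above.
# All values are sent as multipart form-data strings, not JSON types.
TUNING_DEFAULTS = {
    "enhanced_detection": "True",     # image+video only; "False" -> legacy path
    "facematch_frames": "3",          # video frames compared to the still
    "facematch_threshold": "0.6",     # fraction of frames that must match
    "frame_sampling_rate": "15",      # sample every Nth frame (enhanced mode)
    "max_frames_to_analyze": "20",    # cap on frames analyzed (enhanced mode)
    "image_type": "file",             # default for file uploads
}
```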

Response Body

Example request (token and file names are placeholders):

curl -X POST "https://__mock__/v2/liveness" \
  -H "Authorization: Bearer <token>" \
  -F "image=@face.jpg" \
  -F "video=@clip.mp4"

200 OK (typical enhanced image + video run):

{
  "image": {
    "input_provided": "True",
    "result": "real",
    "confidence": 95,
    "prediction_confidence": 95
  },
  "video": {
    "input_provided": "True",
    "result": "real",
    "confidence": 90.45,
    "prediction_confidence": 90.45,
    "blink_detection": "True",
    "video_frames_analyzed": 20,
    "real_video_frames": 18,
    "total_video_frames": 300,
    "enhanced_detection": "True",
    "frame_sampling_rate": 15,
    "max_frames_analyzed": 20
  },
  "image_video_facematch": {
    "performed": "True",
    "result": "True",
    "confidence": 88.5,
    "frames_tested": 3
  },
  "overall_result": "True",
  "overall_confidence": 92.94
}
Validation error body (field errors keyed by location and field name):
{
  "detail": {
    "<location>": {
      "<field_name>": [
        "string"
      ]
    }
  },
  "message": "string",
  "msg": "string"
}
Generic error body:
{
  "detail": {},
  "message": "string"
}

Overview

Call POST /v2/liveness with Authorization: Bearer <token> and multipart/form-data. Input modes match Liveness Detection (v1): still + MP4 video (the enhanced multi-frame path with face match across frames), still only, or video only. The published OpenAPI schema (LivenessMultipartRequest) uses oneOf, so the primary inputs (image, video, or both) are kept separate from the optional tuning strings (frame_sampling_rate, max_frames_to_analyze, facematch_frames, facematch_threshold, enhanced_detection, image_type, unique_id). Tuning behavior matches v1.
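To make the call shape concrete, here is a minimal sketch assuming a Python requests client; call_liveness_v2, the base URL, token, and file names are placeholders, not part of the published API:

```python
import requests  # assumption: a requests-based backend client

BASE_URL = "https://your-ml-host"   # placeholder; see Implementation below
TOKEN = "<token>"                   # Bearer token from the KwikID dashboard

def call_liveness_v2(image_path=None, video_path=None, **tuning):
    """POST /v2/liveness in any of the three oneOf modes: image only,
    video only, or image + video. tuning carries the optional string
    fields (enhanced_detection, facematch_frames, facematch_threshold,
    frame_sampling_rate, max_frames_to_analyze, image_type, unique_id)."""
    files = {}
    if image_path:
        files["image"] = open(image_path, "rb")
    if video_path:
        files["video"] = open(video_path, "rb")
    if not files:
        raise ValueError("send image, video, or both")
    try:
        resp = requests.post(
            f"{BASE_URL}/v2/liveness",
            headers={"Authorization": f"Bearer {TOKEN}"},
            files=files,
            data={k: str(v) for k, v in tuning.items()},  # tuning fields are strings
            timeout=120,
        )
    finally:
        for fh in files.values():
            fh.close()
    resp.raise_for_status()
    return resp.json()

# Enhanced image+video run (defaults shown explicitly):
# result = call_liveness_v2("face.jpg", "clip.mp4",
#                           enhanced_detection="True", facematch_frames="3")
```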

Flow at a glance

Illustrative only: the flow diagram shows how requests move through outcomes, without describing internal models or proprietary logic.

What changes in V2: the 200 body is no longer a single flat object. It returns:

  • image: whether an image was supplied, liveness result, confidence, and optional prediction_confidence (see below).
  • video: same pattern for the video path, plus frame counts, blink, and enhanced-mode metadata when applicable.
  • image_video_facematch: whether facematch ran, result, confidence (percentage 0 to 100 when performed), frames_tested.
  • overall_result and overall_confidence: combined decision; confidence is 0 when the overall outcome is a failure (for example False or fake).

confidence vs prediction_confidence

For the image and video blocks, when a modality is classified as fake, confidence is returned as 0; the score from before that rule is still available in prediction_confidence whenever the service produced a numeric score. This keeps a clear passed/failed signal while preserving the underlying score for auditing or UI.
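As an illustration, a small sketch that logs both scores for auditing, assuming the parsed 200 body from a call like the one above; audit_modality is a hypothetical helper, not part of the API:

```python
def audit_modality(block: dict, name: str) -> None:
    """Log the client-facing confidence and, for zeroed fakes, the raw score."""
    surfaced = block.get("confidence")
    raw = block.get("prediction_confidence")
    if block.get("result") == "fake" and raw is not None:
        # confidence is 0 by rule; prediction_confidence keeps the model score
        print(f"{name}: fake (surfaced={surfaced}, raw={raw})")
    else:
        print(f"{name}: {block.get('result')} (confidence={surfaced})")

# audit_modality(result["image"], "image")
# audit_modality(result["video"], "video")
```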

200 response: keys, definitions, examples

Top-level keys are the objects image, video, and image_video_facematch, plus the scalars overall_result and overall_confidence. The examples below are typical for a successful image + video run; flags like "True" / "False" are returned as strings, not JSON booleans. Fields often become null or "N/A" when that part of the pipeline did not run.

| Response key | Definition | Example value |
| --- | --- | --- |
| image.input_provided | Whether a still image was sent. | "True" |
| image.result | Image liveness label: real, fake, or N/A if no image. | "real" |
| image.confidence | Image modality score surfaced to clients; 0 when image.result is fake. | 95.0 |
| image.prediction_confidence | Raw image score before spoof zeroing; null if not computed. | 95.0 or null |
| video.input_provided | Whether a video was sent. | "True" |
| video.result | Video liveness label: real, fake, or N/A if no video. | "real" |
| video.confidence | Video modality score; 0 when video.result is fake. | 90.45 |
| video.prediction_confidence | Raw video score before spoof zeroing; null if not applicable. | 90.45 or null |
| video.blink_detection | Blink helper output ("True", "False", or "N/A"). | "True" |
| video.video_frames_analyzed | Frames used for video liveness (when applicable). | 20 |
| video.real_video_frames | Frames classified real in that sampled set. | 18 |
| video.total_video_frames | Total frames in clip when known; otherwise null. | 300 |
| video.enhanced_detection | Enhanced path used ("True" / "False") or "N/A". | "True" |
| video.frame_sampling_rate | Sample every Nth frame in enhanced image+video mode; else null. | 15 |
| video.max_frames_analyzed | Max frames cap in enhanced image+video mode; else null. | 20 |
| image_video_facematch.performed | Whether the still was compared to sampled video frames. | "True" |
| image_video_facematch.result | Facematch outcome: "True", "False", or "N/A". | "True" |
| image_video_facematch.confidence | Match strength as a percentage (0 to 100) when performed; else null. | 88.5 |
| image_video_facematch.frames_tested | Number of video frames compared to the still; else null. | 3 |
| overall_result | Final decision (e.g. "True" / "False" with facematch, or "real" / "fake" on single-modality paths). | "True" |
| overall_confidence | Final confidence on a 0 to 100 scale in typical success paths; 0 when the overall outcome is a failure. | 92.94 |
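For example, a policy gate over the structured body might look like the sketch below; the thresholds and the pass rule are application-side choices (assumptions), not service behavior:

```python
def passes_policy(body: dict, min_overall: float = 80.0,
                  min_facematch: float = 75.0) -> bool:
    """Illustrative gate: require an overall pass and, when facematch ran,
    a positive match above a threshold. Flags are strings ("True"/"False")."""
    if body.get("overall_result") not in ("True", "real"):
        return False
    if (body.get("overall_confidence") or 0) < min_overall:
        return False
    fm = body.get("image_video_facematch", {})
    if fm.get("performed") == "True":
        if fm.get("result") != "True":
            return False
        if (fm.get("confidence") or 0) < min_facematch:
            return False
    return True
```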

Key features

  • Structured responses: Easier to store, display, or policy-check each part of the check without parsing a flat mix of fields.
  • Same integration surface as v1 for multipart inputs and tuning parameters.
  • Pairs with Facematch: Conceptually aligned with Facematch; V2 embeds still-vs-video facematch in the image_video_facematch block when you send both image and video.

Bulk testing in the Playground

The Playground on this page supports Bulk (ZIP batch) mode. Screenshots, naming rules, the Run step, completion download, and the output archive layout are documented on the Facematch page.

Switch this page’s Playground to Bulk and follow the same flow; shape your ZIP inputs to match POST /v2/liveness (see OpenAPI on this page).

Implementation

Step 1: Call from your backend

Send image and/or video as for v1. Point your client at the v2 route on your ML base URL (for deployments that expose SUMI under an app prefix, this is commonly .../sumi/v2/liveness).
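Reusing the call_liveness_v2 sketch from the overview, pointing it at a prefixed deployment might look like this; the /sumi prefix is deployment-specific, so confirm your route:

```python
# Deployment-specific base URL; the /sumi prefix applies only when SUMI
# is exposed under an app prefix (confirm with your deployment).
BASE_URL = "https://your-ml-host/sumi"

result = call_liveness_v2("face.jpg", video_path="clip.mp4")
print(result["overall_result"], result["overall_confidence"])
```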

Error handling

| HTTP status | When |
| --- | --- |
| 400 | Missing or empty media, wrong format, or pipeline error (body often includes msg). |
| 401 | Invalid token. |
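A hedged sketch of surfacing these errors, assuming the requests-based call above; the msg/message/detail lookups follow the error body shapes shown in the API reference:

```python
import requests

def explain_error(resp: requests.Response) -> str:
    """Map 400/401 responses to readable messages using the error shapes."""
    if resp.status_code == 401:
        return "Invalid token: obtain a fresh Bearer token from the KwikID dashboard."
    if resp.status_code == 400:
        try:
            body = resp.json()
        except ValueError:
            return "Bad request (non-JSON error body)."
        # pipeline errors often carry msg; validation errors carry message/detail
        return str(body.get("msg") or body.get("message") or body.get("detail"))
    return f"Unexpected status {resp.status_code}"
```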

Benefits

  • Clear separation of image liveness, video liveness, and optional facematch in one response.