Getting Started with Varjo SDK

If your project relies on your own engine that is not yet supported by VR-1, you can add support for your engine using the Varjo SDK for Custom Engines. The Varjo SDK documentation is intended to help you prepare your project to work on Varjo VR-1. Jump to the corresponding section: 

Going from another VR to VR-1  

Tracking

Migrating from OpenVR

Rendering to VR-1

Step by Step

Error handling 

Threading

Swap Chains

Timing

Events

Properties 

Timewarp 

Eye tracking 

 

GOING FROM ANOTHER VR TO VR-1 

When thinking about adapting your product for the Varjo VR-1, it is important to understand the difference between a “traditional” VR headset with traditional displays and the VR-1 with the Bionic Display. You will need to use a much higher image resolution in order to use the Bionic Display to its full potential. Additionally, you will need to render two images per eye instead of a single image, since each Bionic Display includes two overlapping displays. Details on how to render images are explained in the section Rendering to VR-1.

 

TRACKING 

Currently, Varjo VR-1 uses SteamVR™ Tracking technology. The API used for SteamVR Tracking is called OpenVR. 

The OpenVR API can be used for controllers and trackers. You will need to initialize the SteamVR system as a background application to access the controllers in your application. An example implementation of hand controller tracking can be found in the Benchmark example. In most cases it’s sufficient to get your existing controller implementation working by simply changing the application type of your vr::IVRSystem from vr::EVRApplicationType::VRApplication_Scene to vr::EVRApplicationType::VRApplication_Background.

Just remember that this IVRSystem should be used only for controllers and input, not for rendering or head pose. 

Be careful not to use the OpenVR’s headset position to render the image. This may cause problems with the Varjo compositor and result in a suboptimal VR experience. You should only use controller tracking and controller input portions of the OpenVR API. You should always render the image using the view information provided by the Varjo API. 
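
For illustration, a minimal sketch of this initialization (assuming the standard openvr.h header from the OpenVR SDK; error handling trimmed) could look as follows:

#include <openvr.h>

// Initialize OpenVR as a background application, only for controller tracking and input.
// The head pose and all rendering must still come from the Varjo API.
vr::EVRInitError initError = vr::VRInitError_None;
vr::IVRSystem* vrSystem = vr::VR_Init(&initError, vr::EVRApplicationType::VRApplication_Background);
if (initError != vr::VRInitError_None) {
    // SteamVR is not available; run without controller input.
    vrSystem = nullptr;
}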

 

MIGRATING FROM OPENVR 

OpenVR is an SDK and API developed by Valve. If you have read its SDK documentation, you are most likely familiar with how it works and have already ported your software to work with it. Varjo VR-1 does not use OpenVR for headset tracking. You must still use the OpenVR API for controller integration in your engine, but all headset-related functionality needs to be addressed through the Varjo API.  

 

RENDERING TO VR-1 

The Varjo VR-1 headset needs a session to work. With the session you can query information to set up your render targets and buffers. 

For the main loop, Varjo API gives you a sync point that you can use to initiate your render loop. Based on previous frames, the sync point gives you the latest moment you should start rendering to hit the vertical sync of the displays with the freshest data. The purpose of this is to ensure you always render with the latest positional data, keeping the tracking latency as low as possible. High latency can easily cause nausea and break the immersion. 

Rendering is initialized with a fixed number of viewports. By default, the number of viewports is four – two for background displays and two for focus displays. 

For each viewport, the Varjo API will give you the information required to render the view. Typically, this means a view and a projection matrix. If you prefer to get your view information as a frustum, use the varjo_GetAlignedView function to get the projection as an axis-aligned view. 

Once the viewports have been rendered, you should submit the render targets to the Varjo system. Now the Varjo compositor can draw the images to the headset, applying distortion and color correction as well as time warp to the image. 

Varjo Base has a simulator that you can use to display the compositor views in a desktop window. Refer to the Developing Without VR-1 page to learn how to enable it. Do note that nothing is rendered to the headset if the compositor window is enabled in software version 1.1. The version 1.2 update will add the possibility to mirror an image from the VR-1.

It’s advisable to begin development with the compositor window when you start integrating the Varjo SDK for Custom Engines into your engine. Once you get the viewport images on the compositor window, you can verify the functionality in the real headset. Using the compositor window can speed up development, as you don’t need to constantly put on and take off the headset. 

 

STEP BY STEP 

This section explains how to show an image on VR-1 using DirectX or OpenGL. Developing DirectX and OpenGL support is very similar; the flow of the development process is described below.  

  1. Initialize the Varjo system with varjo_SessionInit 
  2. Initialize graphics API by including the corresponding Varjo graphics API header and calling its init function, e.g., varjo_D3D11Init 
  3. Set up your render targets using varjo_GraphicsInfo returned by the init function 
  4. Create frame info for per-frame data with varjo_CreateFrameInfo 
  5. Create submit info with varjo_CreateSubmitInfo 
    1. The created submit info will contain the default viewport layout for a single render target 
    2. Fill in your render target texture to the submit info. These don’t need to be changed if your render target doesn’t change 
  6. In your main loop 
    1. Process your frame logic and after that, call varjo_WaitSync. This will fill in the varjo_FrameInfo structure with the latest pose data 
    2. Begin rendering the frame by calling varjo_BeginFrame 
    3. For each viewport 
      1. Render the frame using the viewport information from varjo_FrameInfo 
    4. Submit textures with varjo_EndFrame. This tells Varjo Runtime that it can now draw the submitted frame buffer to the headset 
  7. Free the allocated varjo_FrameInfo and varjo_SubmitInfo structures 
  8. Shut down the session by calling varjo_SessionShutDown 

You can see an example implementation on the Examples page.
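
As a rough skeleton of the steps above (a sketch only; the argument lists are abbreviated and may differ between SDK versions, so check varjo.h for the exact signatures):

varjo_Session* session = varjo_SessionInit();
// Initialize the graphics API here (e.g. varjo_D3D11Init) and set up your render
// targets from the returned varjo_GraphicsInfo.

varjo_FrameInfo* frameInfo = varjo_CreateFrameInfo(session);
varjo_SubmitInfo* submitInfo = varjo_CreateSubmitInfo(session);
// Fill in your render target textures to submitInfo; the default layout has four viewports.

bool running = true;  // your application's main loop condition
while (running) {
    // Process your frame logic here.

    varjo_WaitSync(session, frameInfo);        // blocks until the optimal moment to start rendering
    varjo_BeginFrame(session, submitInfo);
    for (int i = 0; i < 4; ++i) {
        // Render viewport i using the view and projection information in frameInfo.
    }
    varjo_EndFrame(session, frameInfo, submitInfo);  // submit the textures to the compositor
}

// Free frameInfo and submitInfo with the corresponding varjo_Free* calls, then:
varjo_SessionShutDown(session);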

 

ERROR HANDLING 

You can query Varjo errors with the varjo_GetError function. The error code refers to the first call in the frame loop that failed; the following API calls may fail as a cascading result without overriding the previous error. Calling varjo_GetError also clears all the following errors. Errors are important and should be checked at least once every frame; they can also be used to inform the user about problems.  
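
A minimal once-per-frame check could look like the sketch below (the varjo_NoError constant and the integer error code type are assumptions; check the error definitions in the SDK headers):

#include <cstdint>
#include <cstdio>

// Check the error state once per frame; a non-zero code identifies the first failed call.
int64_t error = varjo_GetError(session);
if (error != varjo_NoError) {
    std::printf("Varjo API error: %lld\n", (long long)error);
    // Surface the problem to the user in whatever way fits your application.
}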

 

THREADING 

Varjo API is thread safe. 

However, graphics APIs put additional constraints on this: for example, Direct3D calls will use the immediate context of the provided graphics device, and they must not overlap with threads that use the same context. 

 

SWAP CHAINS 

Developers can render textures in two ways: 

  • Using the VR-1 swap chain directly. The VR-1 uses four swap chain textures, where each texture represents a whole atlas, i.e., it contains all four viewports.
  • Rendering into application-owned textures and specifying them (together with viewports) when the textures are submitted. This way, the runtime will copy them into the VR-1’s internal swap chain texture. 

In order to use swap chains, you would need to do the following: 

  • Call varjo_*Init and fetch all swap chain textures. 
  • Update the viewport layout. The swap chain uses the default layout, but the multi-texture path needs a custom layout. 
renderer->updateViewportLayout(submitInfo); 
  • Get the current swap chain texture index. 
int32_t swapChainIndex = varjo_GetSwapChainCurrentIndex(session); 
  • Render into the swap chain texture. 
renderer->render(frameInfo, submitInfo, swapChainIndex, donutObjects, controllerObjects); 

Full example code can be found in the Benchmark example.
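
Put together, the per-frame portion of the swap chain path could look roughly like this (the ordering relative to varjo_BeginFrame/varjo_EndFrame follows the Step by Step flow above, and the renderer object is the application-side wrapper from the Benchmark example, not part of the Varjo API):

varjo_WaitSync(session, frameInfo);
varjo_BeginFrame(session, submitInfo);

// Ask which of the four swap chain textures to render into this frame,
// then render the whole atlas (all four viewports) into it.
int32_t swapChainIndex = varjo_GetSwapChainCurrentIndex(session);
renderer->render(frameInfo, submitInfo, swapChainIndex, donutObjects, controllerObjects);

varjo_EndFrame(session, frameInfo, submitInfo);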

 

TIMING 

Varjo uses nanoseconds as a time unit. Absolute times are relative to an epoch which is constant during execution of the program. Time can be queried using the varjo_GetCurrentTime function. 

To query the time for a frame, the varjo_FrameGetDisplayTime function returns the average perceived moment when the image is shown. 

Because measuring time in nanoseconds yields very large numbers, you should be aware of possible precision issues when casting to other types. 
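
For example, a sketch of computing a time delta safely (the 64-bit integer type for the timestamps and the varjo_FrameGetDisplayTime parameters are assumptions; check the SDK headers):

#include <cstdint>

// Absolute times are nanoseconds since a per-run epoch, so the values are very large.
int64_t now = varjo_GetCurrentTime(session);
int64_t displayTime = varjo_FrameGetDisplayTime(session, frameInfo);

// Subtract first, then convert: casting the absolute values straight to float/double
// can lose precision because of their magnitude.
double secondsUntilDisplay = (double)(displayTime - now) / 1e9;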

 

EVENTS 

Varjo API uses events to notify users about changes to the system and user input. 

  • Initialize a varjo_Event structure. It can be allocated on the stack, or, if you want it allocated on the heap, you can use the varjo_AllocateEvent helper function. 
  • Poll for the events in your main loop or any other place that gets called frequently by calling varjo_PollEvent in a loop. 

Varjo_events.h contains all available event types. 
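
A stack-allocated polling loop could look like the sketch below (the varjo_Event field layout and the return type of varjo_PollEvent are assumptions; see Varjo_events.h for the actual definitions):

// Poll all pending events once per frame.
varjo_Event evt{};
while (varjo_PollEvent(session, &evt)) {
    switch (evt.header.type) {
        // Handle the event types your application cares about here.
        default:
            break;
    }
}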

 

PROPERTIES 

Properties can be used to query different Varjo system values. 

  • Call varjo_SyncProperties when you want to update the properties. 
  • Use varjo_HasProperty to check whether a specific property value exists. Each property has its own varjo_PropertyKey. 
  • Use varjo_GetProperty* to get the actual property value. 

All available properties are listed in Varjo_types.h. Currently, there are only gaze tracking-related properties. 
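
A sketch of the flow (the varjo_GetPropertyBool call and the gaze calibration key below are assumptions used for illustration; pick the real varjo_PropertyKey values and getter functions from Varjo_types.h and the API headers):

// Refresh the cached property values, then read one of them.
varjo_SyncProperties(session);
varjo_PropertyKey key = varjo_PropertyKey_GazeCalibrated;  // assumed key name, see Varjo_types.h
if (varjo_HasProperty(session, key)) {
    varjo_Bool calibrated = varjo_GetPropertyBool(session, key);
    // React to the value, e.g. prompt the user to run gaze calibration if it is false.
    (void)calibrated;
}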

TIMEWARP 

Timewarp on Varjo VR-1 is a way of making sure that the rotation of the headset in VR stays in sync with its rotation in real life. The image plane is rotated whenever the headset rotation changes, regardless of the image frame rate. This helps ensure that users do not feel nauseated when looking around while wearing the VR-1. Varjo’s timewarp only works with rotational movement and does not help reduce stuttering caused by positional movement or animation. This feature works out of the box; you don’t need to do anything extra to enable it. 

 

20/20 EYE TRACKING 

You can use the full capability of the Varjo VR-1 eye tracking feature in your software. Get more familiar with eye tracking on the Eye tracking page. API endpoints for eye tracking can be found in the API endpoint guide, available in the Varjo SDK for Custom Engines package under varjo-sdk/docs.