Custom avatars

The Immerse SDK provides a way to replace the default user avatars with one or more custom avatars. These avatars support all of the same functionality as the default ones, but can look completely different and can also be extended to add extra functionality to Immerse scenes.

When to use:

  • avatars are required that look different to the default ones
  • a choice of different avatars needs to be featured in a project
  • the default avatar needs to be extended with additional behaviours

How it works

Avatars in the Immerse SDK are made up of two separate layers - the logic and the view. The logic layer is internal to the SDK and automatically takes care of tracking the VR hardware position, rotation and input. The view layer is the public part that can be customised and is responsible for all aspects of how the avatar looks on screen; this is commonly referred to as the Avatar Renderer.

Each Avatar Renderer is stored as a prefab that contains the head and hand models used to represent the avatar on screen. This prefab also has a number of components attached, which inherit from the following abstract base classes: AvatarRenderer, AvatarRendererHead and AvatarRendererHand.

By default, the Immerse SDK instantiates its default avatar renderer for each VR user in the session. One or more alternative avatar renderers can be used instead by creating additional prefabs and configuring the SDK to use them. It is even possible to switch between different avatar renderers at runtime if required.

Full details on how to do this are covered in the following sections of this guide.

Examples

An example of how to set up and use custom avatars is provided in the Immerse SDK Examples project. It shows how to add multiple custom avatars to a project and switch between them at runtime.

Location of the custom avatar example in the examples project.

The example scene features four buttons that can be used to switch a user's own avatar between three different custom avatars and the default SDK avatar.

The custom avatars example scene, showing buttons that can be used to change avatars at runtime.

Custom avatar prefabs used in the example project can be found in: Assets > Examples > Objects > CustomAvatar

The example custom avatar prefabs.

It is also worth looking at the default avatar renderer included with the Immerse SDK. While the custom avatar example is deliberately basic, the default avatar provides a more detailed reference, showing how a fully featured avatar should be set up.

The location of the default avatar renderer prefab in the Immerse SDK.

The contents of the default avatar renderer prefab.

📘

The scripts used for the default avatar and the custom avatar example are all stored outside the core Immerse SDK DLLs. This gives developers full access to code that can be read, copied and modified for their own projects.

How to implement

It's usually best to start creating a new custom avatar renderer by setting up its general structure:

  • Create 3 new empty prefabs and name them CustomAvatar, CustomHead and CustomHand.
  • Create 3 new MonoBehaviours and name them CustomAvatar, CustomHead and CustomHand.
  • Open the CustomAvatar script and ensure it inherits from the abstract AvatarRenderer base class.
  • Open the CustomHead script and ensure it inherits from the abstract AvatarRendererHead base class.
  • Open the CustomHand script and ensure it inherits from the abstract AvatarRendererHand base class.
  • Ensure that all three of these MonoBehaviours implement any required abstract methods from the base classes. Don't worry about filling out the implementation yet, as we will return to do that in the sections below; a minimal skeleton is shown below.
  • Attach each MonoBehaviour component to its associated prefab.
  • Open the CustomAvatar prefab and drag in an instance of CustomHead and 2 instances of CustomHand.
  • Rename the hand instances to LeftHand and RightHand.
  • Drag the head and hand instances into the corresponding fields on the CustomAvatar component inspector.
  • Set the X scale of RightHand to -1.

📘

If the process of inverting the X scale on a hand prefab causes problems later on, create two separate hand prefabs in the project instead.
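For reference, a minimal skeleton along these lines is sketched below. The base classes and the overridden members are those described in this guide; the exact signatures (and any additional abstract members) may differ between SDK versions, so treat this as a starting point rather than exact code. In Unity, each class would also need to live in its own file named after the class.

```csharp
using UnityEngine;

// Sketch of the three MonoBehaviours described above. AvatarRenderer,
// AvatarRendererHead and AvatarRendererHand come from the Immerse SDK;
// the overridden members shown here are the ones discussed in this guide
// and may not match the exact signatures in your SDK version.
public class CustomAvatar : AvatarRenderer
{
    // Appearance per Visual ID is implemented later in this guide.
    public override void SetVisualId(byte visualId) { }
    public override Color32 GetColorForVisualId(byte visualId) => new Color32(255, 255, 255, 255);
}

public class CustomHead : AvatarRendererHead
{
    public override void Show() { }
    public override void Hide() { }
}

public class CustomHand : AvatarRendererHand
{
    public override void Show() { }
    public override void Hide() { }
    public override void HandleGripChanged(float amount) { }
    public override void HandlePoseChanged(Pose pose) { } // Pose is the SDK's hand pose type, not UnityEngine.Pose
}
```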

Once the structure is set up, move on to setting up the head and hands, which is covered in the next two sections.

Implementing the avatar's head

The simplest part of a custom avatar to implement is the head. As shown in the image below, the avatar head simply requires a mesh renderer to be displayed in the scene. The collider is entirely optional, but can be useful for ensuring that objects collide realistically with the avatar.

The structure of the default avatar head.

Opening the DefaultAvatarHeadRenderer script will show that it extends the AvatarRendererHead base class. Extending the base class requires Show and Hide methods to be implemented, which are used to toggle the head's visibility in WebGL mode, when the camera switches between third person and first person views.

In the default implementation we simply change the head's visibility by setting the mesh renderer's shadow casting mode, and in most cases that is all that needs to be done in the CustomHead class.
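A minimal sketch of such a CustomHead, assuming Show and Hide are overridable members of the base class and that the head's mesh renderer is assigned in the inspector:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of a head renderer that toggles visibility by switching the mesh
// renderer's shadow casting mode, mirroring what the default implementation
// is described as doing above.
public class CustomHead : AvatarRendererHead
{
    [SerializeField] private MeshRenderer _headRenderer; // assign the head mesh in the prefab

    public override void Show()
    {
        // Render the head normally (e.g. third person view).
        _headRenderer.shadowCastingMode = ShadowCastingMode.On;
    }

    public override void Hide()
    {
        // Keep casting shadows, but don't draw the head itself (e.g. first person view).
        _headRenderer.shadowCastingMode = ShadowCastingMode.ShadowsOnly;
    }
}
```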

Implementing the avatar's hands

Avatar hands are a bit more complicated to implement. Taking a look at the DefaultHand prefab will show the main components, as detailed in the image below:

The default avatar hand renderer.

There are a lot of references assigned in the DefaultAvatarHandRenderer component inspector; most of them are required by the AvatarRendererHand base class:

  • Main Renderer is the main mesh renderer of the hand - it is used to toggle hand visibility.
  • Hand Collider is the main collider used to detect when a hand is near an interactive object.
  • Finger Collider is used to enable the hand to press buttons, sliders and other free moving objects in the scene.
  • Menu Pointer Origin is an empty transform that defines the origin and direction of the finger laser beam, used to interact with menus and other UI.
  • Menu Check Origin is an empty transform that defines the origin and direction of a raycast used to check whether the hand should enter the automatic menu pointing state.
  • Attach Point is an empty transform that defines the position that engageable objects will be attached to when engaged by this hand.
  • Watch Attach Point is an empty transform that defines where on the hand the watch should be attached.
  • Watch Menu Attach Point is an empty transform that defines where the watch menu UI should be attached when the avatar looks at their watch.
  • Palm Menu Attach Point is an empty transform that defines where the default hand menu UI should be attached when the avatar turns their hand face up.

📘

The avatar's watch

The avatar's watch is an optional feature that is only enabled if HandMenuType is set to WatchMenu on the App object in the scene.

If the watch is to be used, it's good to provide a watch strap model, as the SDK only renders the watch face.

If the watch is not required, then some empty transforms will need to be provided to the inspector, as the avatar renderer will not validate unless all of the references are assigned.

Opening the DefaultAvatarHandRenderer script will show that it extends the AvatarRendererHand base class. Extending the base class requires that the following methods are implemented:

  • Show() is used to show the avatar's hand.
  • Hide() is used to hide the avatar's hand. This is usually called when the avatar is holding something, or the VR hardware controllers are being displayed.
  • HandleGripChanged(float amount) is used to provide visual feedback on the amount that the controller grip button has been pressed.
  • HandlePoseChanged(Pose pose) is used to change the hand's pose to match the SDK's set of required poses, which currently include rest, manual pointing and menu pointing.

Developers are free to implement these methods in any way required. The default avatar and custom avatar example prefabs demonstrate two very different implementation approaches, so it is worth looking at both to get a good understanding of how flexible the system can be.
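As a rough illustration, the sketch below toggles the main renderer for Show/Hide and drives an Animator in response to grip and pose changes. The Animator parameter names and the Pose enum value names are assumptions made for the example; only the four overridden methods come from the description above.

```csharp
using UnityEngine;

// Sketch of a hand renderer driven by an Animator. The base-class members
// come from the list above; the Pose value names (Rest, ManualPointing,
// MenuPointing) and the Animator parameters are illustrative assumptions.
public class CustomHand : AvatarRendererHand
{
    [SerializeField] private SkinnedMeshRenderer _mainRenderer; // the hand mesh
    [SerializeField] private Animator _animator;                // drives grip/pose blending

    private static readonly int GripAmount = Animator.StringToHash("GripAmount");
    private static readonly int PointIndex = Animator.StringToHash("PointIndex");

    public override void Show()
    {
        _mainRenderer.enabled = true;
    }

    public override void Hide()
    {
        // Called while the avatar is holding something or the controllers are shown.
        _mainRenderer.enabled = false;
    }

    public override void HandleGripChanged(float amount)
    {
        // Blend the hand closed in proportion to how far the grip button is pressed.
        _animator.SetFloat(GripAmount, amount);
    }

    public override void HandlePoseChanged(Pose pose)
    {
        // Point the index finger for both manual and menu pointing, relax otherwise.
        bool pointing = pose != Pose.Rest;
        _animator.SetBool(PointIndex, pointing);
    }
}
```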

Implementing the avatar renderer

Once the head and hand prefabs have been implemented, they will need to be brought together in a CustomAvatar prefab, as shown below:

The example project's custom avatar prefab.

Once the head and hand references have been assigned, the implementations of two methods from the AvatarRenderer base class (in the CustomAvatar script) will need to be filled out. Both of these methods are based around the concept of a Visual ID: a unique, sequential index that is automatically assigned to each avatar in the session.

SetVisualId(byte visualId)
This method should be implemented to change the appearance of an avatar, based on the Visual ID that it has been assigned by the SDK. This is most commonly done by changing the avatar's colour; however, the system is flexible enough to support any kind of appearance change - for example, a different hat could be associated with each Visual ID instead.

GetColorForVisualId(byte visualId)
This method must be implemented so that the SDK can associate a colour with each avatar in the scene. This colour will be used in components that are auto-generated by the SDK at runtime, such as name tooltips, interaction outlines and WebGL camera target buttons. The implementation of this method must return a Color32 struct for each Visual ID passed in. Colours can be reused, but it is recommended that enough are provided to cater for the expected number of avatars in the scene.
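A minimal sketch of these two methods, assuming the avatar's appearance is varied by tinting a set of renderers from a colour palette assigned in the inspector (the palette and renderer fields are illustrative, not part of the base class):

```csharp
using UnityEngine;

// Sketch of an avatar renderer that maps each Visual ID to a colour from a
// palette and tints the avatar's renderers accordingly.
public class CustomAvatar : AvatarRenderer
{
    [SerializeField] private Color32[] _palette;       // one entry per expected avatar, assigned in the inspector
    [SerializeField] private Renderer[] _tintTargets;  // head and hand renderers to recolour

    public override void SetVisualId(byte visualId)
    {
        // Change the avatar's appearance for this Visual ID - here, simply tint it.
        Color32 colour = GetColorForVisualId(visualId);
        foreach (Renderer target in _tintTargets)
        {
            target.material.color = colour;
        }
    }

    public override Color32 GetColorForVisualId(byte visualId)
    {
        // Reuse colours if there are more avatars than palette entries.
        return _palette[visualId % _palette.Length];
    }
}
```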

📘

Look at the DefaultAvatarRenderer and CustomAvatarRenderer classes for examples of how to implement these methods. But remember that Visual ID doesn't just have to relate to an avatar's colour - any attribute of an avatar can be associated with a Visual ID.

Configuring the project to use custom avatars

The last step is to configure the Immerse SDK to use the new custom avatar. To do this, open the Immerse Project Settings window and navigate to the Avatar section. Then drag the custom avatar prefab into the Custom Avatar Renderers list under the Custom Prefabs header.

The Immerse Project Settings window menu option.

If the Avatar Renderers list contains one or more custom avatars, then the first one in the list will be automatically used for all avatars in the scene. If the list is empty then the Immerse SDK's default avatar will be used instead. After initialisation, the avatar renderer can be changed at run time by using the code API detailed in the next section.

The Immerse Project Settings window's Avatar tab, showing 3 custom avatar prefabs added to the Avatar Renderers list.

📘

Updating your SDK (v3.8.0)

When updating from an earlier version to v3.8.0 or newer, this setting will be lost and will need to be set up again.

Code API

The Immerse SDK's AvatarService class provides methods to change the avatar renderer at runtime. Calls to these methods will be synchronised to all other users in the session. A demonstration of how to use these methods can be found in the ExampleCustomAvatar class in the Examples project.

AvatarService.AttachCustomAvatarRenderer(Avatar avatar, int index);
Use this method to make the given avatar display the custom renderer with the corresponding index from the AvatarOptions configuration page.

AvatarService.AttachDefaultAvatarRenderer(Avatar avatar);
Use this method to make the given avatar display the default renderer that is included with the Immerse SDK.
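A hedged usage sketch is shown below. How the AvatarService instance and the local user's Avatar are obtained is not covered here and will depend on the SDK version; the ExampleCustomAvatar class mentioned above demonstrates the exact pattern.

```csharp
using UnityEngine;

// Sketch of switching the local user's avatar renderer at runtime, e.g. from
// UI button handlers like those in the example scene. Avatar and AvatarService
// refer to the Immerse SDK types (namespace assumed); how they are resolved
// is left out of this example.
public class AvatarSwitcher : MonoBehaviour
{
    private AvatarService _avatarService; // assumed to be resolved elsewhere
    private Avatar _localAvatar;          // the local user's avatar, assumed to be resolved elsewhere

    // Hooked up to one of the "custom avatar" buttons; index refers to the
    // Custom Avatar Renderers list in the Immerse Project Settings window.
    public void OnCustomAvatarButtonPressed(int index)
    {
        _avatarService.AttachCustomAvatarRenderer(_localAvatar, index);
    }

    // Hooked up to the "default avatar" button.
    public void OnDefaultAvatarButtonPressed()
    {
        _avatarService.AttachDefaultAvatarRenderer(_localAvatar);
    }
}
```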

Try out this component in the Examples project

Examples (menu) > Features > Load Custom Avatar Example

Learn more about the Examples project
