Animated Gaussian Splatting in Unreal Engine 5

The research team at Temporal Games has demonstrated their recent progress in making Gaussian Splatting a viable technology for real-time volumetric video streaming.

The Temporal research team has recently shared a series of demos showcasing volumetric video content captured with Gaussian Splatting and played back inside Unreal Engine 5. For those unfamiliar, Animated Gaussian Splatting (also known as 4D Gaussian Splatting, or 4DGS) is a relatively new volume rendering technique poised to become the foundation for a new medium bridging real and virtual experiences, with the possibility of broadcasting new volumetric content daily in lifelike 3D.

4DGS will soon enable both professionals and individual creators to capture any event or performance, from a DJ in a club to a ballet dancer in a theater, and present the result as a three-dimensional projection of reality – on the web, in mixed reality and spatial computing formats, and in games.

The team has chosen Unreal Engine 5 to develop a plug-in for real-time online streaming of 4DGS content, ensuring efficient playback, decompression, and proper interaction of the volumetric content with dynamic lighting in any 3D scene.

The challenge of utilizing 4DGS lies in the vast amount of data required to represent the smooth animation of Gaussian splats: a single frame containing 200K Gaussians weighs in at 42 MB in its raw form, and even established compression methods only bring it down to 1.62 MB per frame, which still adds up to roughly 0.34 TB for an hour-long recording.
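
As a rough sanity check, those figures can be reproduced with a few lines of arithmetic; note that the 60 fps playback rate used below is an assumption, since the article does not state one.

```cpp
#include <cstdio>

// Back-of-the-envelope check of the storage figures quoted above.
// The 60 fps frame rate is an assumption; the article does not state one.
int main() {
    const double gaussians_per_frame = 200'000.0;
    const double raw_frame_mb        = 42.0;   // raw size quoted per frame
    const double packed_frame_mb     = 1.62;   // after existing compression
    const double fps                 = 60.0;   // assumed playback rate
    const double seconds_per_hour    = 3600.0;

    // Roughly 210 bytes per Gaussian in raw form (position, rotation,
    // scale, opacity and spherical-harmonic color coefficients).
    printf("raw bytes per Gaussian : %.0f\n",
           raw_frame_mb * 1e6 / gaussians_per_frame);

    // ~0.35 TB per hour even after per-frame compression, close to the
    // ~0.34 TB figure quoted in the article.
    printf("compressed TB per hour : %.2f\n",
           packed_frame_mb * fps * seconds_per_hour / 1e6);
    return 0;
}
```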

Researchers at Temporal have devised a method that streams up to 40,000 Gaussians over a bandwidth of 8 megabits per second, maintaining high visual fidelity even in resource-limited environments. When the bandwidth is increased to 50 megabits per second, the technology can stream about 500,000 Gaussians, making it scalable and effective for large scenes without any limits on duration.
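
To put those bitrates in perspective, the implied per-Gaussian bit budget can be estimated as follows; the 30 fps frame rate here is an assumption, not a figure from Temporal.

```cpp
#include <cstdio>

// Rough per-Gaussian bit budget implied by the quoted bitrates.
// The 30 fps frame rate is an assumption, not a figure from the article.
int main() {
    const double fps = 30.0;
    const struct { double mbps; double gaussians; } cases[] = {
        { 8.0,  40'000.0 },   // low-bandwidth case
        { 50.0, 500'000.0 },  // high-bandwidth case
    };
    for (const auto& c : cases) {
        const double bits_per_gaussian = c.mbps * 1e6 / fps / c.gaussians;
        printf("%5.1f Mbps, %7.0f Gaussians -> %.1f bits/Gaussian/frame\n",
               c.mbps, c.gaussians, bits_per_gaussian);
    }
    return 0;
}
```

Either way, the budget works out to just a handful of bits per Gaussian per frame, compared with the roughly 200 bytes each one occupies in raw form.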

The newly developed Gaussian generation technique achieves higher compression levels and smooth frame-to-frame motion while allowing the number of Gaussians to be scaled, so the stream can adapt to varying network conditions.

The results demonstrated in the video were built from scratch, apart from the rasterizer, using the CMU Panoptic dataset. The GS implementation relies on RGB-D-based reconstruction and a custom loss function that improves inter-frame coherence, combined with novel techniques for masking and depth-aware reconstruction. As a result, hardware requirements for 4DGS playback remain at the level of static GS.
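
Temporal has not published its loss function, but a typical inter-frame coherence term simply penalizes how far each Gaussian's parameters drift between consecutive frames. The sketch below illustrates that idea only; the Gaussian struct, parameter layout, and weight are assumptions for illustration, not the team's actual formulation.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// A minimal sketch of what an inter-frame coherence term could look like.
// This is not Temporal's actual loss; it only illustrates the idea of
// penalizing how far each Gaussian's parameters drift between frames.
struct Gaussian {
    // Position (3), scale (3), color (3), opacity (1) packed together;
    // a real implementation would also carry rotation and SH coefficients.
    std::array<float, 10> params{};
};

// Sum of squared per-parameter deltas between two consecutive frames,
// added to the usual photometric loss with a small weight (assumed value).
float TemporalCoherenceLoss(const std::vector<Gaussian>& prev,
                            const std::vector<Gaussian>& curr,
                            float weight = 0.1f)
{
    float loss = 0.0f;
    const std::size_t n = std::min(prev.size(), curr.size());
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < prev[i].params.size(); ++k) {
            const float d = curr[i].params[k] - prev[i].params[k];
            loss += d * d;
        }
    return weight * loss;
}
```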

The animation pipeline focuses on extracting the animated subject, such as a person, from the video material and splitting its processing into keyframes and interpolated frames. Compression is then achieved by shrinking the data for parameters such as position, color, transparency, size, and rotation (represented by quaternions) using lossy quantization techniques.
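
The exact bit allocation is not disclosed, but lossy quantization of this kind usually maps each parameter from its known value range onto a small fixed number of bits. The helpers below are a minimal sketch of that approach; the bit widths and value ranges in the usage comments are assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Minimal sketch of lossy parameter quantization: map a value from its
// known [lo, hi] range onto an integer with a small number of bits.
std::uint32_t QuantizeToBits(float value, float lo, float hi, int bits)
{
    const std::uint32_t levels = (1u << bits) - 1u;
    const float t = std::clamp((value - lo) / (hi - lo), 0.0f, 1.0f);
    return static_cast<std::uint32_t>(std::lround(t * levels));
}

float DequantizeFromBits(std::uint32_t q, float lo, float hi, int bits)
{
    const std::uint32_t levels = (1u << bits) - 1u;
    return lo + (hi - lo) * static_cast<float>(q) / static_cast<float>(levels);
}

// Example usage (illustrative ranges and bit widths, not Temporal's):
// a position coordinate bounded to a 10 m capture volume packed into
// 16 bits, and opacity packed into 8 bits.
//   std::uint32_t qx = QuantizeToBits(x,     -5.0f, 5.0f, 16);
//   std::uint32_t qa = QuantizeToBits(alpha,  0.0f, 1.0f,  8);
```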

Initially, Google's Draco library was used to compress the keyframes, and it proved highly efficient. However, the team later switched to a delta-graph algorithm, which improved results on the test scenes by a factor of 1.5. The algorithm assumes that nearby particles form surfaces and therefore share similar parameters, which makes it possible to build graphs in which the differences between particles are stored as compact deltas. To improve compression quality, several parameter combinations are tested, and the final data is compressed with the Brotli library, reducing its size by a further 15%.
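
The details of the delta-graph construction have not been published, but the core idea of storing differences between neighboring Gaussians can be sketched as follows; the parent-index representation and the 10-parameter layout are assumptions made for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified sketch of the delta idea: instead of storing absolute
// (already quantized) parameters for every Gaussian, store each one as a
// small difference from a nearby "parent" Gaussian. How Temporal builds
// the graph and picks parents is not public; here the parent indices are
// simply assumed to be given (e.g. from a spatial sort or spanning tree).
struct QuantizedGaussian {
    std::int32_t params[10];  // quantized position, scale, color, opacity
};

std::vector<QuantizedGaussian> DeltaEncode(
    const std::vector<QuantizedGaussian>& absolute,
    const std::vector<std::int32_t>& parent)  // parent[i] < i, or -1 for roots
{
    std::vector<QuantizedGaussian> deltas(absolute.size());
    for (std::size_t i = 0; i < absolute.size(); ++i) {
        for (int k = 0; k < 10; ++k) {
            const std::int32_t base =
                parent[i] >= 0 ? absolute[parent[i]].params[k] : 0;
            // Neighboring Gaussians on the same surface tend to share
            // similar values, so the deltas cluster around zero and
            // compress well when the stream is finally handed to a
            // general-purpose coder such as Brotli.
            deltas[i].params[k] = absolute[i].params[k] - base;
        }
    }
    return deltas;
}
```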

Rotations and their animation pose a particular challenge because the underlying calculations are spherical rather than linear. While the algorithm still has room for further optimization, significant gains may be limited by the high accuracy required for certain parameters.
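
As general background on why rotations are awkward to compress and animate, the standard spherical linear interpolation (slerp) between two unit quaternions looks like this; it is shown for illustration only, not as Temporal's specific implementation.

```cpp
#include <cmath>

// Standard spherical linear interpolation (slerp) between two unit
// quaternions. Rotation animation cannot be treated as plain per-component
// linear data: interpolation has to follow the arc of the unit sphere,
// and small numeric errors in the quaternion are easy to notice visually.
struct Quat { float w, x, y, z; };

Quat Slerp(Quat a, Quat b, float t)
{
    float dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    if (dot < 0.0f) {                     // take the shorter arc
        b = { -b.w, -b.x, -b.y, -b.z };
        dot = -dot;
    }
    if (dot > 0.9995f) {                  // nearly parallel: lerp + normalize
        Quat q{ a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
                a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
        const float len =
            std::sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
        return { q.w / len, q.x / len, q.y / len, q.z / len };
    }
    const float theta = std::acos(dot);
    const float s     = std::sin(theta);
    const float wa    = std::sin((1.0f - t) * theta) / s;
    const float wb    = std::sin(t * theta) / s;
    return { wa * a.w + wb * b.w, wa * a.x + wb * b.x,
             wa * a.y + wb * b.y, wa * a.z + wb * b.z };
}
```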

The team plans to present a complete client-server solution for 4DGS streaming and aims to make the technology available for games and web platforms. You can learn more about Temporal and their projects by visiting the team's official website, Twitter page, and YouTube channel.

Don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.
